Using Augmented Small Multimodal Models to Guide Large Language Models for Multimodal Relation Extraction

He, Wentao and Ma, Hanjie and Li, Shaohua and Dong, Hui and Zhang, Haixiang and Feng, Jie (2023) Using Augmented Small Multimodal Models to Guide Large Language Models for Multimodal Relation Extraction. Applied Sciences, 13 (22). p. 12208. ISSN 2076-3417


Abstract

Multimodal Relation Extraction (MRE) is a core task in constructing Multimodal Knowledge Graphs (MKGs). Most current research fine-tunes small-scale single-modal image and text pre-trained models, but we find that image-text datasets drawn from network media suffer from data scarcity, simplistic text, and abstract image content, so a large amount of external knowledge is needed for supplementation and reasoning. We use Multimodal Relation Data Augmentation (MRDA) to address the data scarcity problem in MRE, and propose a Flexible Threshold Loss (FTL) to handle the imbalanced entity-pair distribution and long-tailed classes. After obtaining prompt information from the small model, which serves as a guide, we employ a Large Language Model (LLM) as a knowledge engine to supply common sense and reasoning abilities. Notably, both stages of our framework are flexibly replaceable: the first stage can be adapted to other multimodal classification tasks for small models, and the second stage can be replaced by more powerful LLMs. In experiments, our EMRE2llm framework achieves state-of-the-art performance on the challenging MNRE dataset, reaching an 82.95% F1 score on the test set.
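The abstract does not spell out the form of the Flexible Threshold Loss (FTL). As a rough point of reference, the sketch below implements a related, published technique, the adaptive-threshold loss of ATLOP (Zhou et al., 2021), which likewise addresses imbalanced entity pairs and long-tailed classes by learning a per-example threshold class; the choice of index 0 for the threshold class and the tensor shapes are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def adaptive_threshold_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # logits: (batch, num_classes) raw scores; class 0 is a learnable
        # threshold class (an assumption made for this sketch).
        # labels: (batch, num_classes) float 0/1 targets; labels[:, 0] is always 0.
        th_mask = torch.zeros_like(labels)
        th_mask[:, 0] = 1.0

        # Positive term: every gold relation must outscore the threshold class.
        pos_mask = labels + th_mask
        pos_logits = logits + (1.0 - pos_mask) * -1e30  # mask all other classes
        loss_pos = -(F.log_softmax(pos_logits, dim=-1) * labels).sum(dim=-1)

        # Negative term: the threshold class must outscore every non-gold relation.
        neg_mask = 1.0 - labels
        neg_logits = logits + (1.0 - neg_mask) * -1e30  # mask the gold classes
        loss_neg = -F.log_softmax(neg_logits, dim=-1)[:, 0]

        return (loss_pos + loss_neg).mean()

    def predict(logits: torch.Tensor) -> torch.Tensor:
        # At inference, predict every relation whose logit exceeds the
        # threshold logit, so no global decision threshold has to be tuned.
        return (logits > logits[:, :1]).float()

The two-stage guide-then-reason pipeline can be pictured in the same spirit. Here small_model, its rank_relations method, and llm.complete are hypothetical placeholders standing in for the fine-tuned small multimodal model and the LLM knowledge engine; the abstract does not describe their actual interfaces or prompt format.

    def extract_relation(text: str, image_caption: str, small_model, llm) -> str:
        # Stage 1: the fine-tuned small multimodal model proposes candidate
        # relations with confidence scores (output format assumed here).
        candidates = small_model.rank_relations(text, image_caption)
        hints = ", ".join(f"{label} ({score:.2f})" for label, score in candidates[:3])

        # Stage 2: the LLM, acting as a knowledge engine, reasons over the
        # sentence, the image description, and the small model's hints.
        prompt = (
            f"Sentence: {text}\n"
            f"Image description: {image_caption}\n"
            f"Candidate relations from a fine-tuned model: {hints}\n"
            "Using common-sense knowledge, answer with the single best relation."
        )
        return llm.complete(prompt)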

Item Type: Article
Subjects: Archive Paper Guardians > Biological Science
Date Deposited: 11 Nov 2023 06:19
Last Modified: 11 Nov 2023 06:19
URI: http://archives.articleproms.com/id/eprint/2241
