TY - JOUR
T1 - Asymmetric cross-modal attention network with multimodal augmented mixup for medical visual question answering
AU - Li, Yong
AU - Yang, Qihao
AU - Wang, Fu Lee
AU - Lee, Lap Kei
AU - Qu, Yingying
AU - Hao, Tianyong
N1 - Publisher Copyright:
© 2023
PY - 2023/10
Y1 - 2023/10
N2 - Insufficient training data is a common barrier to effectively learning multimodal information interactions and question semantics in existing medical Visual Question Answering (VQA) models. This paper proposes a new Asymmetric Cross-Modal Attention network, called ACMA, which constructs an image-guided attention and a question-guided attention to improve multimodal interactions learned from insufficient data. In addition, a newly designed Semantic Understanding Auxiliary (SUA) in the question-guided attention learns rich semantic embeddings by integrating word-level and sentence-level information, improving question understanding. Moreover, we propose a new data augmentation method called Multimodal Augmented Mixup (MAM) to train the ACMA; the resulting model is denoted ACMA-MAM. MAM combines various data augmentations with a vanilla mixup strategy to generate additional non-repetitive data, which avoids time-consuming manual annotation and improves model generalization. ACMA-MAM outperforms state-of-the-art models on three publicly accessible medical VQA datasets (VQA-Rad, VQA-Slake, and PathVQA) with accuracies of 76.14%, 83.13%, and 53.83%, respectively, achieving improvements of 2.00%, 1.32%, and 1.59%. Moreover, our model achieves F1 scores of 78.33%, 82.83%, and 51.86%, surpassing the state-of-the-art models by 2.80%, 1.15%, and 1.37%, respectively.
KW - Cross modal attention
KW - Data augmentation
KW - Medical Visual Question Answering
KW - Mixup
KW - Multimodal interaction
UR - http://www.scopus.com/inward/record.url?scp=85171564361&partnerID=8YFLogxK
U2 - 10.1016/j.artmed.2023.102667
DO - 10.1016/j.artmed.2023.102667
M3 - Article
C2 - 37783542
AN - SCOPUS:85171564361
SN - 0933-3657
VL - 144
JO - Artificial Intelligence in Medicine
JF - Artificial Intelligence in Medicine
M1 - 102667
ER -