TY - JOUR
T1 - Contrastive learning–guided multi-meta attention network for breast ultrasound video diagnosis
AU - Huang, Xiaoyang
AU - Lin, Zhi
AU - Huang, Shaohui
AU - Wang, Fu Lee
AU - Chan, Moon Tong
AU - Wang, Liansheng
N1 - Publisher Copyright:
Copyright © 2022 Huang, Lin, Huang, Wang, Chan and Wang.
PY - 2022/10/24
Y1 - 2022/10/24
N2 - Breast cancer is the most common cause of cancer death in women. Early screening and treatment can effectively improve treatment success rates. Ultrasound imaging, as the preferred modality for breast cancer screening, provides an essential reference for early diagnosis. Existing computer-aided ultrasound diagnostic techniques mainly rely on selected key frames for breast cancer lesion diagnosis. In this paper, we first collected and annotated a dataset of ultrasound video sequences from 268 cases of breast lesions. Moreover, we propose a contrastive learning–guided multi-meta attention network (CLMAN) that combines a deformed feature extraction module with a multi-meta attention module to address breast lesion diagnosis in ultrasound sequences. The proposed feature extraction module autonomously acquires key information from the feature map in the spatial dimension, whereas the designed multi-meta attention module is dedicated to effective information aggregation in the temporal dimension. In addition, we utilize a contrastive learning strategy to alleviate the high imaging variability within ultrasound lesion videos. The experimental results on our collected dataset show that our CLMAN significantly outperforms existing advanced methods for video classification.
AB - Breast cancer is the most common cause of cancer death in women. Early screening and treatment can effectively improve treatment success rates. Ultrasound imaging, as the preferred modality for breast cancer screening, provides an essential reference for early diagnosis. Existing computer-aided ultrasound diagnostic techniques mainly rely on selected key frames for breast cancer lesion diagnosis. In this paper, we first collected and annotated a dataset of ultrasound video sequences from 268 cases of breast lesions. Moreover, we propose a contrastive learning–guided multi-meta attention network (CLMAN) that combines a deformed feature extraction module with a multi-meta attention module to address breast lesion diagnosis in ultrasound sequences. The proposed feature extraction module autonomously acquires key information from the feature map in the spatial dimension, whereas the designed multi-meta attention module is dedicated to effective information aggregation in the temporal dimension. In addition, we utilize a contrastive learning strategy to alleviate the high imaging variability within ultrasound lesion videos. The experimental results on our collected dataset show that our CLMAN significantly outperforms existing advanced methods for video classification.
KW - breast lesion
KW - contrastive learning
KW - multi-meta attention network
KW - ultrasound sequence
KW - video classification
UR - http://www.scopus.com/inward/record.url?scp=85141973787&partnerID=8YFLogxK
U2 - 10.3389/fonc.2022.952457
DO - 10.3389/fonc.2022.952457
M3 - Article
AN - SCOPUS:85141973787
VL - 12
JO - Frontiers in Oncology
JF - Frontiers in Oncology
M1 - 952457
ER -