MULTIMODAL FUSION NETWORK WITH LATENT TOPIC MEMORY FOR RUMOR DETECTION

Jiaxin Chen, Zekai Wu, Zhenguo Yang, Haoran Xie, Fu Lee Wang, Wenyin Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

13 Citations (Scopus)

Abstract

In this paper, we propose a multimodal fusion network (termed MFN) that integrates text and image data from social media for rumor detection. Given the multimodal features, MFN exploits a self-attentive fusion (SAF) mechanism to conduct feature-level fusion by assigning corresponding weights to the complementary modalities. In particular, the textual features are combined with the fused features in a skip-connection manner, as textual features tend to be more discriminative than visual features. Furthermore, MFN introduces a latent topic memory (LTM) to store semantic information about rumor and non-rumor events, which benefits the identification of upcoming posts. Extensive experiments conducted on two public datasets show that the proposed MFN outperforms state-of-the-art approaches.
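The feature-level fusion described in the abstract can be sketched as follows. This is a minimal, stdlib-only illustration, not the authors' implementation: the feature dimension, the randomly drawn features, and the scoring vector `w_score` are hypothetical stand-ins for learned encoder outputs and parameters. It shows the two steps the abstract names: softmax-weighted fusion of the two modalities, then a skip connection that adds the textual features back onto the fused representation.

```python
import math
import random

random.seed(0)
d = 8  # shared feature dimension (assumed for illustration)

# Stand-ins for encoder outputs: textual and visual feature vectors.
text_feat = [random.gauss(0, 1) for _ in range(d)]
img_feat = [random.gauss(0, 1) for _ in range(d)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Self-attentive fusion (SAF): score each modality with a (hypothetical)
# learned vector, softmax the scores into weights, and take the weighted
# sum of the modality features.
w_score = [random.gauss(0, 1) for _ in range(d)]
scores = [dot(w_score, text_feat), dot(w_score, img_feat)]
weights = softmax(scores)
fused = [weights[0] * t + weights[1] * v for t, v in zip(text_feat, img_feat)]

# Skip connection: add the textual features back onto the fused features,
# reflecting the observation that text tends to be more discriminative.
fused_with_skip = [f + t for f, t in zip(fused, text_feat)]
```

In practice the scoring step would be a learned attention module and the fused vector would feed the downstream classifier (and the latent topic memory); here only the fusion arithmetic is shown.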

Original language: English
Title of host publication: 2021 IEEE International Conference on Multimedia and Expo, ICME 2021
ISBN (Electronic): 9781665438643
DOIs
Publication status: Published - 2021
Event: 2021 IEEE International Conference on Multimedia and Expo, ICME 2021 - Shenzhen, China
Duration: 5 Jul 2021 - 9 Jul 2021

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo
ISSN (Print): 1945-7871
ISSN (Electronic): 1945-788X

Conference

Conference: 2021 IEEE International Conference on Multimedia and Expo, ICME 2021
Country/Territory: China
City: Shenzhen
Period: 5/07/21 - 9/07/21

Keywords

  • Multimodal fusion
  • Rumor detection
  • Self-attentive
