TY - JOUR
T1 - Multimodal fusion framework based on knowledge graph for personalized recommendation
AU - Wang, Jingjing
AU - Xie, Haoran
AU - Zhang, Siyu
AU - Qin, S. Joe
AU - Tao, Xiaohui
AU - Wang, Fu Lee
AU - Xu, Xiaoliang
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/4/5
Y1 - 2025/4/5
AB - Knowledge Graphs (KGs) have been widely employed in recommendation systems as a valuable knowledge-driven tool for supporting high-quality representations. To further enhance a model's ability to understand the real world, Multimodal Knowledge Graphs (MKGs) have been proposed to capture rich knowledge and facts about objects from text and visual content. However, existing MKG-based methods primarily focus on reasoning over the relationships between entities, treating multimodal information as auxiliary data in the KG while overlooking the interactions between modalities. In this paper, we propose a Multimodal fusion framework based on Knowledge Graph for personalized Recommendation (Multi-KG4Rec) to address these limitations. Specifically, we systematically analyze the shortcomings of existing multimodal graph construction methods and, to this end, propose a modal fusion module that extracts users' modal preferences at a fine-grained level. Furthermore, we conduct extensive experiments on two real-world datasets from different domains to evaluate the performance of our model, and the results demonstrate the effectiveness of Multi-KG4Rec.
KW - Knowledge graphs
KW - Multimodal fusion framework
KW - Recommender system
UR - http://www.scopus.com/inward/record.url?scp=85214285894&partnerID=8YFLogxK
DO - 10.1016/j.eswa.2024.126308
M3 - Article
AN - SCOPUS:85214285894
SN - 0957-4174
VL - 268
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 126308
ER -