TY - GEN
T1 - Personalized Video Fragment Recommendation
AU - Wang, Jiaqi
AU - Kwok, Ricky Y.K.
AU - Ngai, Edith C.H.
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In the mass market, users' attention span over video contents is agonizingly short (e.g., 15 seconds for music/entertainment videos, 6 minutes for lecture videos, etc.), from a video producer's or platform provider's point of view. Given the huge amounts of existing and new videos that are significantly longer than such attention spans, a formidable research challenge is to design and implement a system for recommending just the specific fragments within a long video to match the profiles of the users. In this paper, we propose to meet this challenge based on three major insights. First, we propose to apply Self-Attention Blocks in our deep-learning framework to capture the fragment-level contextual effect. Second, we design a Video-Level Representation Module to take video-level preference into consideration when generating recommendations. Third, we propose a simple yet effective loss function for the video fragment recommendation task. Extensive experiments are conducted to evaluate the effectiveness of the proposed method. Experiment results show that our proposed framework outperforms state-of-the-art approaches in both NDCG@K and Recall@K, demonstrating judicious exploitation of fragment-level contextual effect and video-level preference. Moreover, empirical experiments are also conducted to analyze the key components and parameters in the proposed framework.
AB - In the mass market, users' attention span over video contents is agonizingly short (e.g., 15 seconds for music/entertainment videos, 6 minutes for lecture videos, etc.), from a video producer's or platform provider's point of view. Given the huge amounts of existing and new videos that are significantly longer than such attention spans, a formidable research challenge is to design and implement a system for recommending just the specific fragments within a long video to match the profiles of the users. In this paper, we propose to meet this challenge based on three major insights. First, we propose to apply Self-Attention Blocks in our deep-learning framework to capture the fragment-level contextual effect. Second, we design a Video-Level Representation Module to take video-level preference into consideration when generating recommendations. Third, we propose a simple yet effective loss function for the video fragment recommendation task. Extensive experiments are conducted to evaluate the effectiveness of the proposed method. Experiment results show that our proposed framework outperforms state-of-the-art approaches in both NDCG@K and Recall@K, demonstrating judicious exploitation of fragment-level contextual effect and video-level preference. Moreover, empirical experiments are also conducted to analyze the key components and parameters in the proposed framework.
KW - Collaborative Filtering
KW - Recommendation System
KW - Self-Attention
UR - http://www.scopus.com/inward/record.url?scp=85158893309&partnerID=8YFLogxK
U2 - 10.1109/WI-IAT55865.2022.00036
DO - 10.1109/WI-IAT55865.2022.00036
M3 - Conference contribution
AN - SCOPUS:85158893309
T3 - Proceedings - 2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, WI-IAT 2022
SP - 199
EP - 206
BT - Proceedings - 2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, WI-IAT 2022
A2 - Zhao, Jiashu
A2 - Fan, Yixing
A2 - Bagheri, Ebrahim
A2 - Fuhr, Norbert
A2 - Takasu, Atsuhiro
T2 - 2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, WI-IAT 2022
Y2 - 17 November 2022 through 20 November 2022
ER -