Multi-Modality is All You Need for Transferable Recommender Systems
2024
Proceedings: PROCEEDINGS - INTERNATIONAL CONFERENCE ON DATA ENGINEERING
ISSN: 1084-4627
Pages: 5008-5021
Publication Status: Published
DOI: 10.1109/ICDE60146.2024.00380
Abstract: ID-based Recommender Systems (RecSys), where each item is assigned a unique identifier and subsequently converted into an embedding vector, have dominated the design of RecSys. Though prevalent, such an ID-based paradigm is not suitable for developing transferable RecSys and is also susceptible to the cold-start issue. In this paper, we unleash the boundaries of the ID-based paradigm and propose a Pure Multi-Modality based Recommender system (PMMRec), which relies solely on the multi-modal contents of the items (e.g., texts and images) and learns transition patterns general enough to transfer across domains and platforms. Specifically, we design a plug-and-play framework architecture consisting of multi-modal item encoders, a fusion module, and a user encoder. To align the cross-modal item representations, we propose a novel next-item enhanced cross-modal contrastive learning objective, which is equipped with both inter- and intra-modality negative samples and explicitly incorporates the transition patterns of user behaviors into the item encoders. To ensure the robustness of user representations, we propose a novel noised item detection objective and a robustness-aware contrastive learning objective, which work together to denoise user sequences in a self-supervised manner. PMMRec is designed to be loosely coupled, so after being pre-trained on the source data, each component can be transferred alone or in conjunction with other components, allowing PMMRec to achieve versatility under both multi-modality and single-modality transfer learning settings. Extensive experiments on 4 source and 10 target datasets demonstrate that PMMRec surpasses the state-of-the-art recommenders in both recommendation performance and transferability. Our code and dataset are available at: https://github.com/ICDE24IPMMRec. © 2024 IEEE.
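The abstract describes a next-item enhanced cross-modal contrastive objective with inter- and intra-modality negatives. As a rough illustration only (not the authors' released implementation), the PyTorch sketch below treats each item's text and image embeddings as a positive pair with in-batch negatives, and adds a next-item positive term; the function names, the temperature value, and the assumption that the batch follows one user's interaction order are all illustrative.

import torch
import torch.nn.functional as F

def cross_modal_infonce(text_emb, image_emb, temperature=0.07):
    # Symmetric InfoNCE over a batch of (text, image) item embeddings.
    # text_emb, image_emb: (B, D); every non-matching row is a negative.
    t = F.normalize(text_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)
    logits = t @ v.T / temperature                      # (B, B) similarity matrix
    labels = torch.arange(t.size(0), device=t.device)   # diagonal entries are positives
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

def next_item_enhanced_loss(text_emb, image_emb, temperature=0.07):
    # Illustrative "next-item" term: also align item i's text with item
    # (i+1)'s image, assuming rows follow one user's interaction order.
    base = cross_modal_infonce(text_emb, image_emb, temperature)
    nxt = cross_modal_infonce(text_emb[:-1], image_emb[1:], temperature)
    return base + nxt

# Usage with random features standing in for the item encoders' outputs.
B, D = 8, 64
text = torch.randn(B, D, requires_grad=True)
image = torch.randn(B, D, requires_grad=True)
loss = next_item_enhanced_loss(text, image)
loss.backward()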
Keywords: Behavioral research; Learning systems; Signal encoding; Cross-modal; ID-based; Learning objectives; Multi-modal; Multi-modal learning; Multi-modality; Self-supervised learning; Transfer learning; Transition patterns; Unique identifiers; Recommender System; Multi-modal Learning; Transfer Learning; Self-supervised Learning
Conference: 40th IEEE International Conference on Data Engineering, ICDE 2024
Conference Location: Utrecht, Netherlands
Conference Dates: May 13, 2024 - May 17, 2024
Indexed By: EI
Language: English
Funding: NSF of China
WOS Category: Computer Science, Information Systems
WOS Record Number: PPRN:86646675
Publisher: IEEE Computer Society
EI Accession Number: 20243216830584
EI Controlled Terms: Recommender systems
EISSN: 2375-0286
EI Classification Codes: 461.4 Ergonomics and Human Factors Engineering; 716.1 Information Theory and Signal Processing; 723.5 Computer Applications; 971 Social Sciences
Original Document Type: Conference article (CA)
Source Database: IEEE
Document Type: Conference paper
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/411262
Collection: School of Information Science and Technology_Master's Students
Corresponding Authors: Guo, Qi; Yuan, Fajie
Author Affiliations:
1.ShanghaiTech University, Shanghai, China
2.Shanghai Innovation Center for Processor Technologies, SHIC, China
3.Westlake University, Hangzhou, China
4.School of Computer Science and Technology, Soochow University, Suzhou, China
5.Institute of Computing Technology, State Key Lab of Processors, Chinese Academy of Sciences, Beijing, China
6.The Hong Kong University of Science and Technology, Hong Kong, Hong Kong
First Author Affiliation: ShanghaiTech University
First Author's First Affiliation: ShanghaiTech University
Recommended Citation:
GB/T 7714
Li, Youhua, Du, Hanwen, Ni, Yongxin, et al. Multi-Modality is All You Need for Transferable Recommender Systems[C]. IEEE Computer Society, 2024: 5008-5021.
File: 10.1109@ICDE60146.2024.00380.pdf
Format: Adobe PDF