Exploration, Exploitation, and Engagement in Multi-Armed Bandits with Abandonment
2024
Proceedings Title: JOURNAL OF MACHINE LEARNING RESEARCH
ISSN: 0
Publication Status: Published
DOI: 10.1109/Allerton49937.2022.9929390
Abstract: Recommendation algorithms have become increasingly important in many online platforms such as online education, TikTok, YouTube Shorts, and advertising platforms. The multi-armed bandit (MAB) [2] is a classic problem that can model these recommendation systems. Each arm in an MAB corresponds to a specific type of item in the recommendation system, and recommending an item of the $i\text{th}$ type is regarded as a pull of arm $a_{i}$. Taking short-video recommendation as an example, each arm $a_{i}$ represents a class of similar videos (e.g., videos from the same dancer). For simplicity, we assume the reward is 1 if the user likes the recommended item and 0 otherwise. In a traditional MAB problem, the learner can continue to play the arms with the goal of maximizing the average reward, which either assumes a single user stays in the system for a long period of time or assumes the learner recommends a single item to each of a large number of users. While this traditional MAB formulation models recommendation systems such as online advertising well, there are new recommendation systems that differ significantly from these traditional models. In these new systems, such as TikTok or ALEKS, the learner continuously recommends videos/contents to a user, and the user, besides liking or disliking each item, may abandon the system if the recommended items fail to engage them, and come back later. For example, a user watches TikTok or YouTube Shorts for some period of time, where the duration depends on how interesting/engaging the videos are, then leaves the system and comes back later.
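The setting described in the abstract can be illustrated with a small toy simulation. This is a minimal sketch, not the paper's algorithm: it runs a standard epsilon-greedy learner over Bernoulli arms and, after each disliked recommendation, lets the user abandon the session with an assumed probability `quit_prob`. The arm probabilities, `epsilon`, and `quit_prob` are all hypothetical parameters chosen for illustration.

```python
import random

def simulate_session(arm_probs, epsilon=0.1, quit_prob=0.5, max_steps=1000, rng=None):
    """Toy session: epsilon-greedy over Bernoulli arms; after each disliked
    item (reward 0) the user abandons the session with probability quit_prob.

    Returns (total_reward, number_of_recommendations_made)."""
    rng = rng or random.Random(0)
    n = len(arm_probs)
    pulls = [0] * n   # times each arm was recommended
    likes = [0] * n   # times each arm was liked
    total_reward = 0
    for _ in range(max_steps):
        if rng.random() < epsilon:
            a = rng.randrange(n)  # explore: pick a random arm
        else:
            # exploit: arm with the highest empirical like-rate
            # (untried arms get an optimistic estimate of 1.0)
            a = max(range(n),
                    key=lambda i: likes[i] / pulls[i] if pulls[i] else 1.0)
        r = 1 if rng.random() < arm_probs[a] else 0  # 1 = user likes the item
        pulls[a] += 1
        likes[a] += r
        total_reward += r
        if r == 0 and rng.random() < quit_prob:
            break  # user abandons after a disliked recommendation
    return total_reward, sum(pulls)
```

With `quit_prob = 0` this reduces to the traditional MAB session of fixed length; raising `quit_prob` shortens sessions whenever unengaging items are recommended, which is the tension between exploration, exploitation, and engagement the paper studies.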
Keywords: Video on demand; Education; Watches; Advertising; Recommender systems; Videos
Conference Venue: Monticello, IL, USA
Conference Dates: 27-30 Sept. 2022
Indexed By: EI
Source Database: IEEE
Document Type: Conference Paper
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/251916
Collection: School of Information Science and Technology_PI Research Groups_Xin Liu Group
Corresponding Author: Yang, Zixian
Author Affiliations:
1.Univ Michigan, Ann Arbor, MI 48109 USA
2.ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai, Peoples R China
Recommended Citation Format:
GB/T 7714
Yang, Zixian, Liu, Xin, Ying, Lei. Exploration, Exploitation, and Engagement in Multi-Armed Bandits with Abandonment[C], 2024.