ShanghaiTech University Knowledge Management System
HeroMaker: Human-centric Video Editing with Motion Priors
2024-10-28
Proceedings Title | MM 2024 - PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA |
Pages | 3761-3770 |
Publication Status | Published |
DOI | 10.1145/3664647.3681147 |
Abstract | Video generation and editing, particularly human-centric video editing, have seen a surge of interest owing to their potential to create immersive and dynamic content. A fundamental challenge is ensuring temporal coherence and visual harmony across frames, especially when handling large-scale human motion and maintaining consistency over long sequences. Previous methods, such as zero-shot text-to-video methods built on diffusion models, struggle with flickering and length limitations. In contrast, methods employing video-2D representations have difficulty accurately capturing the complex structural relationships of large-scale human motion. Moreover, some patterns on the human body appear only intermittently throughout the video, making it difficult to identify visual correspondences. To address these problems, we present HeroMaker, a human-centric video editing framework that manipulates a person's appearance within the input video and achieves consistent results across frames. Specifically, we propose to learn motion priors, which represent the correspondences between dual canonical fields and each video frame, by leveraging body mesh-based human motion warping and neural deformation-based margin refinement within a video reconstruction framework to ensure the semantic correctness of the canonical fields. HeroMaker performs human-centric video editing by manipulating the dual canonical fields and combining them with the motion priors to synthesize temporally coherent and visually plausible results. Comprehensive experiments demonstrate that our approach surpasses existing methods in temporal consistency, visual quality, and semantic coherence. © 2024 ACM. |
Proceedings Editor / Conference Sponsor | ACM SIGMM |
Keywords | Footage counters; Human engineering; Motion picture editing machines; Video analysis; Diffusion model; Dynamic content; Human motions; Human-centric; Human-centric video editing; Immersive; Large-scales; Motion priors; Video editing; Video generation |
Conference Name | 32nd ACM International Conference on Multimedia, MM 2024 |
Conference Location | Melbourne, VIC, Australia |
Conference Dates | October 28, 2024 - November 1, 2024 |
Indexing Category | EI |
Language | English |
Publisher | Association for Computing Machinery, Inc |
EI Accession Number | 20244817417605 |
EI Main Heading | Visual communication |
EI Classification Codes | 101.5 ; 1106.3.1 ; 716.4 Television Systems and Equipment ; 742.2 Photographic and Video Equipment |
Original Document Type | Conference article (CA) |
Document Type | Conference Paper |
Item Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/455186 |
Collections | School of Information Science and Technology_Master's Students; School of Information Science and Technology_PI Research Group_Shenghua Gao Group; School of Information Science and Technology_PhD Students |
Corresponding Author | Gao, Shenghua |
Author Affiliations | 1.ShanghaiTech University, Shanghai, China; 2.The Chinese University of Hong Kong (Shenzhen), Shenzhen, China; 3.UniDT Co. Ltd, Shanghai, China; 4.The University of Hong Kong, Hong Kong, Hong Kong; 5.HKU Shanghai Advanced Computing and Intelligent Technology Research Institute, Shanghai, China |
First Author Affiliation | ShanghaiTech University |
First Affiliation of First Author | ShanghaiTech University |
Recommended Citation (GB/T 7714) | Liu, Shiyu, Zhao, Zibo, Zhi, Yihao, et al. HeroMaker: Human-centric Video Editing with Motion Priors[C]//ACM SIGMM: Association for Computing Machinery, Inc, 2024: 3761-3770. |
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.