HeroMaker: Human-centric Video Editing with Motion Priors
2024-10-28
Proceedings Title: MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
Pages: 3761-3770
Publication Status: Published
DOI: 10.1145/3664647.3681147
Abstract

Video generation and editing, particularly human-centric video editing, have seen a surge of interest for their potential to create immersive and dynamic content. A fundamental challenge is ensuring temporal coherence and visual harmony across frames, especially when handling large-scale human motion and maintaining consistency over long sequences. Previous methods, such as zero-shot text-to-video approaches based on diffusion models, struggle with flickering and length limitations. In contrast, methods employing video-2D representations grapple with accurately capturing the complex structural relationships in large-scale human motion. At the same time, some patterns on the human body appear only intermittently throughout the video, making it difficult to identify visual correspondences. To address these problems, we present HeroMaker, a human-centric video editing framework that manipulates a person's appearance within the input video and achieves consistent results across frames. Specifically, we propose to learn motion priors, which represent the correspondences between dual canonical fields and each video frame, by leveraging body mesh-based human motion warping and neural deformation-based margin refinement in a video reconstruction framework to ensure the semantic correctness of the canonical fields. HeroMaker performs human-centric video editing by manipulating the dual canonical fields and combining them with the motion priors to synthesize temporally coherent and visually plausible results. Comprehensive experiments demonstrate that our approach surpasses existing methods in temporal consistency, visual quality, and semantic coherence. © 2024 ACM.
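The editing pipeline sketched in the abstract can be pictured as: edit the appearance once in a canonical field, then use the learned per-frame correspondences (the motion priors) to warp the edited canonical content back into every frame. The numpy sketch below is only an illustrative reading of that idea, assuming the dual canonical fields split into a foreground (human) field and a background field; the function names, nearest-neighbour warping, and mask-based compositing are hypothetical and are not the authors' implementation.

import numpy as np

def warp_from_canonical(canonical, corr):
    # canonical: (H, W, 3) canonical appearance field
    # corr:      (h, w, 2) per-pixel (y, x) lookup into the canonical field,
    #            i.e. an assumed form of the learned motion prior for one frame
    ys = np.clip(np.rint(corr[..., 0]).astype(int), 0, canonical.shape[0] - 1)
    xs = np.clip(np.rint(corr[..., 1]).astype(int), 0, canonical.shape[1] - 1)
    return canonical[ys, xs]  # nearest-neighbour sampling, for simplicity only

def edit_video(fg_canonical, bg_canonical, fg_corrs, bg_corrs, fg_masks, edit_fn):
    # Apply the appearance edit once in the foreground canonical field,
    # then propagate it to every frame through the per-frame correspondences.
    edited_fg = edit_fn(fg_canonical)
    frames = []
    for fg_corr, bg_corr, mask in zip(fg_corrs, bg_corrs, fg_masks):
        fg = warp_from_canonical(edited_fg, fg_corr)
        bg = warp_from_canonical(bg_canonical, bg_corr)
        frames.append(np.where(mask[..., None], fg, bg))  # composite person over background
    return frames

A real system would sample sub-pixel correspondences (e.g. bilinearly) and refine them with the neural deformation mentioned in the abstract, but the skeleton above captures the warp-back step that makes a single edit propagate consistently across frames.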

Proceedings Editor / Conference Sponsor: ACM SIGMM
Keywords: Footage counters; Human engineering; Motion picture editing machines; Video analysis; Diffusion model; Dynamic content; Human motions; Human-centric; Human-centric video editing; Immersive; Large-scales; Motion priors; Video editing; Video generation
Conference: 32nd ACM International Conference on Multimedia, MM 2024
Conference Location: Melbourne, VIC, Australia
Conference Dates: October 28, 2024 - November 1, 2024
Indexed By: EI
Language: English
Publisher: Association for Computing Machinery, Inc
EI Accession Number: 20244817417605
EI Main Heading: Visual communication
EI Classification Codes: 101.5; 1106.3.1; 716.4 Television Systems and Equipment; 742.2 Photographic and Video Equipment
Original Document Type: Conference article (CA)
Document Type: Conference paper
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/455186
Collections: School of Information Science and Technology_Master's Students; School of Information Science and Technology_PI Research Group_Shenghua Gao Group; School of Information Science and Technology_PhD Students
Corresponding Author: Gao, Shenghua
Author Affiliations:
1.ShanghaiTech University, Shanghai, China;
2.The Chinese University of Hong Kong (Shenzhen), Shenzhen, China;
3.UniDT Co. Ltd, Shanghai, China;
4.The University of Hong Kong, Hong Kong, Hong Kong;
5.HKU Shanghai Advanced Computing and Intelligent Technology Research Institute, Shanghai, China
First Author Affiliation: ShanghaiTech University
First Author's First Affiliation: ShanghaiTech University
Recommended Citation:
GB/T 7714
Liu, Shiyu,Zhao, Zibo,Zhi, Yihao,et al. HeroMaker: Human-centric Video Editing with Motion Priors[C]//ACM SIGMM:Association for Computing Machinery, Inc,2024:3761-3770.