Gentle-CLIP: Exploring Aligned Semantic In Low-Quality Multimodal Data With Soft Alignment
2024-06-09
Status: Published
Abstract

Multimodal fusion breaks through the barriers between diverse modalities and has already achieved impressive performance on many tasks. However, in various specialized fields it is difficult to obtain sufficient alignment data for training, which severely limits the applicability of otherwise powerful models. Semi-supervised learning therefore attempts to achieve multimodal alignment with fewer matched pairs, but traditional methods such as pseudo-labeling are hard to apply in domains with no label information. To address these problems, we cast semi-supervised multimodal alignment as a manifold-matching problem and propose a new CLIP-based method named Gentle-CLIP. Specifically, we design a novel semantic density distribution loss that explores implicit semantic alignment information in unpaired multimodal data by constraining the latent representation distribution at fine granularity, thus eliminating the need for large numbers of strictly matched pairs. Meanwhile, we introduce multi-kernel maximum mean discrepancy together with a self-supervised contrastive loss to pull the separate modality distributions closer and to stabilize the representation distribution. In addition, the contrastive loss used in CLIP is applied to the supervised matched data to prevent negative optimization. Extensive experiments on a range of tasks in various fields, including protein analysis, remote sensing, and the general vision-language domain, demonstrate the effectiveness of our proposed Gentle-CLIP.
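The abstract mentions multi-kernel maximum mean discrepancy (MK-MMD) as the tool for pulling the two modality distributions closer. As an illustration only (not the authors' implementation), a minimal NumPy sketch of MK-MMD with a sum of Gaussian RBF kernels at several assumed bandwidths:

```python
import numpy as np

def mk_mmd(x, y, bandwidths=(0.5, 1.0, 2.0, 4.0)):
    """Multi-kernel MMD^2 between samples x (n, d) and y (m, d),
    using a sum of Gaussian RBF kernels over several bandwidths.
    The bandwidth set here is an arbitrary illustrative choice."""
    def kernel_mean(a, b):
        # pairwise squared Euclidean distances, shape (len(a), len(b))
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        # average kernel value, summed over all bandwidths
        return sum(np.exp(-d2 / (2 * s ** 2)).mean() for s in bandwidths)
    # biased MMD^2 estimate: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return kernel_mean(x, x) + kernel_mean(y, y) - 2 * kernel_mean(x, y)

# Samples from the same distribution score near zero; shifted ones do not.
rng = np.random.default_rng(0)
a = rng.normal(size=(64, 8))
b = rng.normal(size=(64, 8))
c = rng.normal(loc=3.0, size=(64, 8))
print(mk_mmd(a, b) < mk_mmd(a, c))  # True
```

Used as a training loss, `x` and `y` would be the batch embeddings of the two modalities; minimizing this quantity drives their latent distributions together without requiring paired samples.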

DOI: arXiv:2406.05766
Source: arXiv
WOS Accession No.: PPRN:89268672
WOS Categories: Computer Science, Artificial Intelligence; Computer Science, Interdisciplinary Applications; Computer Science, Software Engineering
Document Type: Preprint
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/395940
Collection: School of Information Science and Technology_Master's students
Corresponding Author: Li, Stan Z.
Author Affiliations:
1.Natl Univ Def Technol, Qinhuangdao, Peoples R China
2.Westlake Univ, Hangzhou, Peoples R China
3.Shanghai Tech Univ, Shanghai, Peoples R China
Recommended Citation (GB/T 7714):
Song, Zijia,Zang, Zelin,Wang, Yelin,et al. Gentle-CLIP: Exploring Aligned Semantic In Low-Quality Multimodal Data With Soft Alignment. 2024.

Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.