ShanghaiTech University Knowledge Management System
Generative Negative Text Replay for Continual Vision-Language Pretraining
2022
Proceedings Title | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
ISSN | 0302-9743 |
Volume | 13696 LNCS |
Pages | 22-38 |
Publication Status | Published |
DOI | 10.1007/978-3-031-20059-5_2 |
Abstract | Vision-language pre-training (VLP) has attracted increasing attention recently. With large amounts of image-text pairs, VLP models trained with a contrastive loss have achieved impressive performance on various tasks, especially zero-shot generalization to downstream datasets. In practical applications, however, massive data are usually collected in a streaming fashion, requiring VLP models to continuously integrate novel knowledge from incoming data while retaining learned knowledge. In this work, we focus on learning a VLP model from sequential chunks of image-text pair data. To tackle the catastrophic forgetting issue in this multi-modal continual learning setting, we first introduce pseudo text replay, which generates hard negative texts conditioned on the training images in memory; this not only better preserves learned knowledge but also improves the diversity of negative samples in the contrastive loss. Moreover, we propose multi-modal knowledge distillation between images and texts to align the instance-wise predictions of the old and new models. We incrementally pre-train our model on both instance- and class-incremental splits of the Conceptual Captions dataset, and evaluate the model on zero-shot image classification and image-text retrieval tasks. Our method consistently outperforms the existing baselines by a large margin, which demonstrates its superiority. Notably, we achieve an average performance boost of 4.60% on downstream image-classification datasets for the class-incremental split. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG. |
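The abstract combines two ingredients: a contrastive loss over a negative-text pool enlarged with generated hard negatives, and cross-modal distillation against the previous model. The following is a minimal PyTorch sketch of that combination, not the authors' released code; the function names, shape conventions, and the plain InfoNCE/KL formulation are assumptions made for exposition.

```python
import torch
import torch.nn.functional as F


def contrastive_loss_with_pseudo_negatives(img_emb, txt_emb, neg_txt_emb,
                                           temperature=0.07):
    """InfoNCE-style image-text contrastive loss whose text-side pool is
    enlarged with generated hard negative texts (pseudo text replay).

    img_emb:     (B, D) image embeddings
    txt_emb:     (B, D) embeddings of the matching captions
    neg_txt_emb: (M, D) embeddings of generated hard negative texts
    """
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    neg_txt_emb = F.normalize(neg_txt_emb, dim=-1)

    # Image-to-text logits over real captions plus pseudo negatives: (B, B+M).
    all_txt = torch.cat([txt_emb, neg_txt_emb], dim=0)
    logits_i2t = img_emb @ all_txt.t() / temperature
    # Text-to-image logits over the in-batch images: (B, B).
    logits_t2i = txt_emb @ img_emb.t() / temperature

    # The positive for image/text i sits at index i in both directions.
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(logits_i2t, targets)
                  + F.cross_entropy(logits_t2i, targets))


def cross_modal_distillation(img_new, txt_new, img_old, txt_old, tau=2.0):
    """Instance-wise distillation: match the new model's image-to-text
    similarity distribution to that of the frozen old model."""
    sim_new = F.normalize(img_new, dim=-1) @ F.normalize(txt_new, dim=-1).t()
    sim_old = F.normalize(img_old, dim=-1) @ F.normalize(txt_old, dim=-1).t()
    return F.kl_div(F.log_softmax(sim_new / tau, dim=-1),
                    F.softmax(sim_old / tau, dim=-1),
                    reduction="batchmean") * tau * tau
```

In a training step the two terms would be summed, with the old model's embeddings computed under torch.no_grad(); their relative weighting is a hyperparameter the sketch leaves open.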
Keywords | Classification (of information) ; Image classification ; Image enhancement ; Large dataset ; Text processing ; Zero-shot learning ; Continual learning ; Down-stream ; Image texts ; Images classification ; Large amounts ; Multi-modal ; Performance ; Pre-training ; Training model ; Vision-language pretraining |
Conference Name | 17th European Conference on Computer Vision, ECCV 2022 |
Place of Publication | Gewerbestrasse 11, Cham, CH-6330, Switzerland |
Conference Location | Tel Aviv, Israel |
Conference Dates | October 23, 2022 - October 27, 2022 |
Indexed By | EI ; CPCI-S
Language | English
Funding Project | Shanghai Science and Technology Program [21010502700]
WOS Research Areas | Computer Science ; Imaging Science & Photographic Technology
WOS Categories | Computer Science, Artificial Intelligence ; Imaging Science & Photographic Technology
WOS Accession Number | WOS:000903751800002
Publisher | Springer Science and Business Media Deutschland GmbH
EI Accession Number | 20224813182949
EI Controlled Terms | Distillation
EISSN | 1611-3349 |
EI Classification Codes | 716.1 Information Theory and Signal Processing ; 723.2 Data Processing and Image Processing ; 802.3 Chemical Operations ; 903.1 Information Sources and Analysis ; 903.3 Information Retrieval and Use
Original Document Type | Conference article (CA)
Document Type | Conference Paper
Item Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/272837
Collection | School of Information Science and Technology_PI Research Groups_Xuming He Group ; School of Information Science and Technology_PhD Students
Corresponding Authors | Xu, Hang; He, Xuming
Author Affiliations | 1.ShanghaiTech Univ, Shanghai, Peoples R China 2.Chinese Acad Sci, Shanghai Inst Microsyst & Informat Technol, Beijing, Peoples R China 3.Univ Chinese Acad Sci, Beijing, Peoples R China 4.Huawei Noahs Ark Lab, Hong Kong, Peoples R China 5.Katholieke Univ Leuven, Leuven, Belgium 6.Shanghai Engn Res Ctr Intelligent Vis & Imaging, Shanghai, Peoples R China
First Author's Affiliation | ShanghaiTech University
Corresponding Author's Affiliation | ShanghaiTech University
First Author's First Affiliation | ShanghaiTech University
Recommended Citation (GB/T 7714) | Yan, Shipeng, Hong, Lanqing, Xu, Hang, et al. Generative Negative Text Replay for Continual Vision-Language Pretraining[C]. Gewerbestrasse 11, Cham, CH-6330, Switzerland: Springer Science and Business Media Deutschland GmbH, 2022: 22-38.