ShanghaiTech University Knowledge Management System
Title | LinkPrompt: Natural and Universal Adversarial Attacks on Prompt-based Language Models
Date | 2024-03
Proceedings Title | 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Volume | 1
Pages | 6473-6486
Publication Status | Published
Abstract | Prompt-based learning is a new language model training paradigm that adapts Pre-trained Language Models (PLMs) to downstream tasks and has revitalized performance benchmarks across various natural language processing (NLP) tasks. Instead of fine-tuning the model with a fixed prompt template, some research demonstrates the effectiveness of searching for the prompt via optimization. Such a prompt optimization process also gives insight into generating adversarial prompts that mislead the model, raising concerns about the adversarial vulnerability of this paradigm. Recent studies have shown that universal adversarial triggers (UATs) can be generated to alter not only the predictions of the target PLMs but also the predictions of the corresponding Prompt-based Fine-tuning Models (PFMs) under the prompt-based learning paradigm. However, UATs found in previous works are often unreadable tokens or characters and can be easily distinguished from natural text by adaptive defenses. In this work, we consider the naturalness of UATs and develop LinkPrompt, an adversarial attack algorithm that generates UATs via gradient-based beam search, not only effectively attacking the target PLMs and PFMs but also maintaining naturalness among the trigger tokens. Extensive results demonstrate the effectiveness of LinkPrompt, as well as the transferability of UATs generated by LinkPrompt to the open-source Large Language Model (LLM) Llama2 and the API-accessed LLM GPT-3.5-turbo. The resource is available at https://github.com/SavannahXu79/LinkPrompt. © 2024 Association for Computational Linguistics.
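The core mechanism named in the abstract, a gradient-guided beam search that trades attack effectiveness against trigger naturalness, can be illustrated with a short sketch. The snippet below is a hypothetical, minimal reconstruction and not the authors' released implementation (that lives at the GitHub link above); the toy probe objective, the pseudo-log-likelihood naturalness score, and all hyperparameters (trigger length, beam width, candidate count, weighting `lam`) are illustrative assumptions.

```python
# Hedged sketch of gradient-guided beam search for "natural" universal
# adversarial triggers (UATs), in the spirit of the abstract above.
# NOT the authors' implementation: the probe objective, the naturalness
# score, and every hyperparameter here are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()
emb = mlm.get_input_embeddings().weight                 # (vocab_size, dim)

# Toy stand-in for the attack objective: once the trigger is prepended,
# push the MLM toward predicting "terrible" at the probe's [MASK] slot.
probe = tok(f"the movie was {tok.mask_token} .",
            return_tensors="pt")["input_ids"][0]        # [CLS] ... [SEP]
mask_off = (probe == tok.mask_token_id).nonzero()[0, 0].item()
target = torch.tensor([tok.convert_tokens_to_ids("terrible")])

def adv_loss(trig_embeds):
    """Differentiable attack loss, computed from trigger *embeddings*."""
    inputs = torch.cat([emb[probe[:1]], trig_embeds, emb[probe[1:]]])
    logits = mlm(inputs_embeds=inputs.unsqueeze(0)).logits
    return F.cross_entropy(logits[:, len(trig_embeds) + mask_off], target)

@torch.no_grad()
def naturalness(trig_ids):
    """Mean pseudo-log-likelihood of the trigger under the MLM; a crude
    stand-in for the paper's naturalness objective (higher = more fluent)."""
    score = 0.0
    for i in range(len(trig_ids)):
        ids = torch.cat([probe[:1], trig_ids, probe[1:]])
        ids[1 + i] = tok.mask_token_id                  # mask token i
        logits = mlm(input_ids=ids.unsqueeze(0)).logits[0, 1 + i]
        score += logits.log_softmax(-1)[trig_ids[i]].item()
    return score / len(trig_ids)

def candidates(trig_ids, pos, k=10):
    """HotFlip-style first-order ranking: estimate the attack-loss change
    of swapping position `pos` to each vocab token v as (e_v - e_old).grad."""
    e = emb[trig_ids].detach().clone().requires_grad_(True)
    adv_loss(e).backward()
    delta = (emb.detach() - e[pos].detach()) @ e.grad[pos]
    return delta.topk(k, largest=False).indices         # biggest loss drops

def beam_search(length=5, steps=10, beam=3, lam=1.0):
    """Sweep random positions; keep the `beam` triggers with the best
    combined score: attack loss minus lam * naturalness."""
    beams = [torch.full((length,), tok.convert_tokens_to_ids("the"))]
    for _ in range(steps):
        pos = torch.randint(length, ()).item()
        pool = []
        for trig in beams:
            for cand in candidates(trig, pos):
                new = trig.clone()
                new[pos] = cand
                with torch.no_grad():
                    s = adv_loss(emb[new]).item() - lam * naturalness(new)
                pool.append((s, new))
        pool.sort(key=lambda t: t[0])
        beams = [t[1] for t in pool[:beam]]
    return tok.decode(beams[0])

print(beam_search())  # a 5-token trigger trading attack strength for fluency
```

The key design point this sketch captures is the joint score: a pure gradient attack would pick whichever tokens minimize the attack loss, yielding the unreadable triggers the abstract criticizes, while the added naturalness term steers the beam toward token sequences the language model itself considers fluent.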
Proceedings Editors / Conference Sponsors | Baidu; Capital One; et al.; Grammarly; Megagon Labs; Otter.ai
Keywords | Computational linguistics; Learning algorithms; Learning systems; Natural language processing systems; Down-stream; Fine tuning; Gradient based; Language model; Language processing; Learning paradigms; Model training; Natural languages; Optimisations; Performance
Conference Name | 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024
Conference Location | Hybrid; Mexico City, Mexico
Conference Dates | June 16, 2024 - June 21, 2024
Indexed By | EI
Language | English
Publisher | Association for Computational Linguistics (ACL)
EI Accession Number | 20243116770470
EI Controlled Terms | Benchmarking
EI Classification Codes | 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory; 723.2 Data Processing and Image Processing; 723.4.2 Machine Learning
Original Document Type | Conference article (CA)
Document Type | Conference paper
Item Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/352511
Collection | School of Information Science and Technology; School of Information Science and Technology_Master's Students; School of Information Science and Technology_PhD Students; School of Information Science and Technology_PI Research Group_Wang Wenjie's Group
Corresponding Author | Wang WJ (王雯婕)
Author Affiliation | ShanghaiTech University, School of Information Science and Technology
First Author Affiliation | ShanghaiTech University
Corresponding Author Affiliation | ShanghaiTech University
First Author's First Affiliation | ShanghaiTech University
Recommended Citation (GB/T 7714) | Xu Y, Wang WJ. LinkPrompt: Natural and Universal Adversarial Attacks on Prompt-based Language Models[C]//Baidu, Capital One, et al., Grammarly, Megagon Labs, Otter.ai: Association for Computational Linguistics (ACL), 2024: 6473-6486.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.