ShanghaiTech University Knowledge Management System
DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation
2024-09-11
Proceedings Title | ISSTA 2024 - PROCEEDINGS OF THE 33RD ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS
Pages | 578-589
DOI | 10.1145/3650212.3680304 |
Abstract | Large Language Models (LLMs) have showcased their remarkable capabilities in diverse domains, encompassing natural language understanding, translation, and even code generation. The potential for LLMs to generate harmful content is a significant concern. This risk necessitates rigorous testing and comprehensive evaluation of LLMs to ensure safe and responsible use. However, extensive testing of LLMs requires substantial computational resources, making it an expensive endeavor. Therefore, exploring cost-saving strategies during the testing phase is crucial to balance the need for thorough evaluation with the constraints of resource availability. To address this, our approach begins by transferring the moderation knowledge from an LLM to a small model. Subsequently, we deploy two distinct strategies for generating malicious queries: one based on a syntax tree approach, and the other leveraging an LLM-based method. Finally, our approach incorporates a sequential filter-test process designed to identify test cases that are prone to eliciting toxic responses. By doing so, we significantly curtail unnecessary or unproductive interactions with LLMs, thereby streamlining the testing process. Our research evaluated the efficacy of DistillSeq across four LLMs: GPT-3.5, GPT-4.0, Vicuna-13B, and Llama-13B. In the absence of DistillSeq, the observed attack success rates on these LLMs stood at 31.5% for GPT-3.5, 21.4% for GPT-4.0, 28.3% for Vicuna-13B, and 30.9% for Llama-13B. However, upon the application of DistillSeq, these success rates notably increased to 58.5%, 50.7%, 52.5%, and 54.4%, respectively. This corresponds to an average relative increase of 93.0% in attack success rate compared to scenarios without DistillSeq. Such findings highlight the significant enhancement DistillSeq offers in terms of reducing the time and resource investment required for effectively testing LLMs. © 2024 Owner/Author.
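Illustrative note (not part of the original record): the sequential filter-test process described in the abstract pairs a cheap distilled moderation model with the expensive target LLM, sending only promising malicious queries to the LLM under test. The Python sketch below is a minimal illustration under assumed names; `score_with_distilled_model`, `query_target_llm`, `is_toxic`, and the threshold value are hypothetical placeholders, not the authors' actual API.

```python
# Minimal sketch of a filter-then-test loop in the spirit of the abstract.
# All function names and the threshold are hypothetical placeholders,
# not DistillSeq's published interface.
from typing import Callable, List, Tuple


def filter_then_test(
    candidate_queries: List[str],
    score_with_distilled_model: Callable[[str], float],  # small distilled moderation model
    query_target_llm: Callable[[str], str],              # expensive call to the LLM under test
    is_toxic: Callable[[str], bool],                      # oracle judging the LLM's response
    threshold: float = 0.8,
) -> Tuple[List[str], int]:
    """Send only queries the distilled model rates as likely to elicit toxic
    output; return successful attack queries and the number of LLM calls spent."""
    successes: List[str] = []
    llm_calls = 0
    for query in candidate_queries:
        # Cheap pre-filter: skip queries the distilled model finds unpromising,
        # avoiding an unproductive interaction with the target LLM.
        if score_with_distilled_model(query) < threshold:
            continue
        llm_calls += 1
        response = query_target_llm(query)
        if is_toxic(response):
            successes.append(query)
    return successes, llm_calls
```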
Proceedings Editor / Conference Sponsor | ACM SIGSOFT; AITO
Keywords | Digital elevation model; Model checking; Risk assessment; Risk perception; Automated testing; Code generation; Comprehensive evaluation; Diverse domains; Extensive testing; Knowledge distillation; Language model; Large language model; Natural language understanding; Testing/Evaluation
Conference Name | 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2024
Conference Location | Vienna, Austria
Conference Dates | September 16, 2024 - September 20, 2024
URL | View Full Text
Indexed By | EI
Language | English
Publisher | Association for Computing Machinery, Inc
EI Accession Number | 20244117161099
EI Classification Codes | 1102.1 ; 1106.2 ; 1106.3.1 ; 1108 ; 914.1 Accidents and Accident Prevention
Original Document Type | Conference article (CA)
Document Type | Conference paper
Item Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/436541
Collections | School of Information Science and Technology_PI Research Group_Chen Yuqi; School of Information Science and Technology_Master's Students
Corresponding Author | Chen, Yuqi
Author Affiliations | 1.ShanghaiTech University, Shanghai, China; 2.Nanyang Technological University, Singapore, Singapore
First Author Affiliation | ShanghaiTech University
Corresponding Author Affiliation | ShanghaiTech University
First Author's First Affiliation | ShanghaiTech University
Recommended Citation (GB/T 7714) | Yang, Mingke, Chen, Yuqi, Liu, Yi, et al. DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation[C]//ISSTA 2024 - Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis. Association for Computing Machinery, Inc, 2024: 578-589.