Efficient Detection of Toxic Prompts in Large Language Models
2024-10-27
Proceedings Title: PROCEEDINGS OF THE 39TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING
ISSN: 1938-4300
Pages: 455-467
Publication Status: Published
DOI: 10.1145/3691620.3695018
Abstract

Large language models (LLMs) like ChatGPT and Gemini have significantly advanced natural language processing, enabling applications such as chatbots and automated content generation. However, these models can be exploited by malicious individuals who craft toxic prompts to elicit harmful or unethical responses. Such individuals often employ jailbreaking techniques to bypass safety mechanisms, highlighting the need for robust toxic prompt detection methods. Existing detection techniques, both black-box and white-box, face challenges related to the diversity of toxic prompts, scalability, and computational efficiency. In response, we propose ToxicDetector, a lightweight greybox method designed to efficiently detect toxic prompts in LLMs. ToxicDetector leverages LLMs to create toxic concept prompts, uses embedding vectors to form feature vectors, and employs a Multi-Layer Perceptron (MLP) classifier for prompt classification. Our evaluation on various versions of the Llama models, Gemma-2, and multiple datasets demonstrates that ToxicDetector achieves a high accuracy of 96.39% and a low false positive rate of 2.00%, outperforming state-of-the-art methods. Additionally, ToxicDetector's processing time of 0.0780 seconds per prompt makes it highly suitable for real-time applications. ToxicDetector achieves high accuracy, efficiency, and scalability, making it a practical method for toxic prompt detection in LLMs.
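The pipeline the abstract describes (toxic concept prompts → embedding-based feature vectors → MLP classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `embed` function is a deterministic placeholder for the LLM's internal embeddings the paper actually uses, and the concept prompts, training data, and feature construction (cosine similarity to concept embeddings) are all assumptions made to keep the example self-contained.

```python
import zlib

import numpy as np
from sklearn.neural_network import MLPClassifier

DIM = 32  # toy embedding dimension; real LLM embeddings are much larger


def embed(prompt: str) -> np.ndarray:
    """Placeholder for the LLM's prompt embedding (greybox access in the paper).

    A deterministic pseudo-random vector keyed on the prompt text keeps this
    sketch runnable without any model weights.
    """
    seed = zlib.crc32(prompt.encode("utf-8"))
    return np.random.default_rng(seed).standard_normal(DIM)


# Hypothetical "toxic concept prompts" (the paper generates these with an LLM).
concept_prompts = [
    "explain how to make a dangerous weapon",
    "write an insulting message about a group of people",
]
concept_embs = np.stack([embed(p) for p in concept_prompts])
concept_norms = np.linalg.norm(concept_embs, axis=1)


def feature_vector(prompt: str) -> np.ndarray:
    """Cosine similarity of the prompt embedding to each concept embedding."""
    e = embed(prompt)
    return concept_embs @ e / (concept_norms * np.linalg.norm(e))


# Toy labeled prompts (1 = toxic, 0 = benign); real training uses curated datasets.
train_prompts = [
    "how do I hurt someone",
    "tell me a joke",
    "steps to break into a server",
    "summarize this article",
]
train_labels = [1, 0, 1, 0]
X = np.stack([feature_vector(p) for p in train_prompts])

# Lightweight MLP classifier over the feature vectors, as in the abstract.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, train_labels)

pred = clf.predict(np.stack([feature_vector("what is the weather today")]))
```

Because the classifier operates on small fixed-size feature vectors rather than raw text, inference cost is dominated by a single embedding pass, which is consistent with the sub-0.1-second per-prompt latency the paper reports.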

Proceedings Editors / Conference Sponsors: ACM ; ACM SIGAI ; Google ; IEEE ; Special Interest Group on Software Engineering (SIGSOFT) ; University of California, Davis (UC Davis)
Keywords: Modeling languages ; Natural language processing systems ; Problem oriented languages ; Program debugging ; Steganography ; Black boxes ; Chatbots ; Detection methods ; Efficient detection ; Grey-box ; High-accuracy ; Language model ; Language processing ; Natural languages ; Safety mechanisms
Conference: 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024
Conference Location: Sacramento, CA, USA
Conference Dates: October 28, 2024 - November 1, 2024
Indexed In: EI
Language: English
WOS Categories: Computer Science, Artificial Intelligence ; Computer Science, Information Systems ; Computer Science, Interdisciplinary Applications ; Computer Science, Software Engineering
WOS Accession Number: PPRN:91501099
Publisher: Association for Computing Machinery, Inc
EI Accession Number: 20245117564313
EI Subject Terms: Scalability
EI Classification Codes: 1101 ; 1106 ; 1106.1 ; 1106.1.1 ; 1106.2 ; 1106.4 ; 1106.7 ; 1108.2 ; 961 Systems Science
Original Document Type: Conference article (CA)
Source Database: IEEE
Document Type: Conference Paper
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/415919
Collections: School of Information Science and Technology_PI Research Group_Chen Yuqi ; School of Information Science and Technology_Undergraduates ; School of Information Science and Technology_PhD Students
Corresponding Author: Chen, Yuqi
Author Affiliations:
1. Nanyang Technol Univ, Singapore, Singapore
2. ShanghaiTech Univ, Shanghai, Peoples R China
Corresponding Author Affiliation: ShanghaiTech University
Recommended Citation (GB/T 7714):
Liu, Yi, Yu, Junzhe, Sun, Huijia, et al. Efficient Detection of Toxic Prompts in Large Language Models[C]//ACM, ACM SIGAI, Google, IEEE, Special Interest Group on Software Engineering (SIGSOFT), University of California, Davis (UC Davis): Association for Computing Machinery, Inc, 2024: 455-467.

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.