ShanghaiTech University Knowledge Management System
Title | Adversarial attack and defense of structured prediction models |
Year | 2020 |
Proceedings Title | EMNLP 2020 - 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, PROCEEDINGS OF THE CONFERENCE |
Pages | 2327-2338 |
Publication Status | Published |
Abstract | Building an effective adversarial attacker and elaborating on countermeasures for adversarial attacks for natural language processing (NLP) have attracted a lot of research in recent years. However, most of the existing approaches focus on classification problems. In this paper, we investigate attacks and defenses for structured prediction tasks in NLP. Besides the difficulty of perturbing discrete words and the sentence fluency problem faced by attackers in any NLP task, there is a specific challenge to attackers of structured prediction models: the structured output of structured prediction models is sensitive to small perturbations in the input. To address these problems, we propose a novel and unified framework that learns to attack a structured prediction model using a sequence-to-sequence model with feedback from multiple reference models of the same structured prediction task. Based on the proposed attack, we further reinforce the victim model with adversarial training, making its prediction more robust and accurate. We evaluate the proposed framework in dependency parsing and part-of-speech tagging. Automatic and human evaluations show that our proposed framework succeeds in both attacking state-of-the-art structured prediction models and boosting them with adversarial training. © 2020 Association for Computational Linguistics |
Proceedings Editors / Conference Sponsors | Amazon Science ; Apple ; Baidu ; Bloomberg Engineering ; et al. ; Google Research |
Keywords | Classification (of information) ; Computational linguistics ; Forecasting ; Natural language processing systems ; Dependency parsing ; Learn+ ; Multiple references ; Prediction modelling ; Prediction tasks ; Reference modeling ; Sequence models ; Small perturbations ; Structured prediction ; Unified framework |
Conference Name | 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020 |
Conference Location | Virtual, Online |
Conference Dates | November 16, 2020 - November 20, 2020 |
Indexed By | EI |
Language | English |
Publisher | Association for Computational Linguistics (ACL) |
EI Accession Number | 20214511120662 |
EI Controlled Terms | Syntactics |
EI Classification Codes | 716.1 Information Theory and Signal Processing ; 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory ; 723.2 Data Processing and Image Processing ; 903.1 Information Sources and Analysis |
Original Document Type | Conference article (CA) |
Document Type | Conference paper |
Item Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/251880 |
Collections | School of Information Science and Technology_PhD Students ; School of Information Science and Technology_PI Research Groups_Kewei Tu Group |
Corresponding Author | Tu, Kewei |
Author Affiliations | 1.School of Computing, National University of Singapore, Singapore; 2.School of Information Science and Technology, ShanghaiTech University, China; 3.Shanghai Engineering Research Center of Intelligent Vision and Imaging, China; 4.Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, China; 5.University of Chinese Academy of Sciences, China; 6.Alibaba DAMO Academy, Alibaba Group, China |
Corresponding Author Affiliation | School of Information Science and Technology |
Recommended Citation (GB/T 7714) | Han, Wenjuan, Zhang, Liwen, Jiang, Yong, et al. Adversarial attack and defense of structured prediction models[C]//Amazon Science, Apple, Baidu, Bloomberg Engineering, et al., Google Research: Association for Computational Linguistics (ACL), 2020: 2327-2338. |
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.