Adversarial attack and defense of structured prediction models
2020
Proceedings Title: EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
Pages: 2327-2338
Publication Status: Published
Abstract

Building an effective adversarial attacker and elaborating on countermeasures for adversarial attacks in natural language processing (NLP) have attracted a lot of research in recent years. However, most of the existing approaches focus on classification problems. In this paper, we investigate attacks and defenses for structured prediction tasks in NLP. Besides the difficulty of perturbing discrete words and the sentence fluency problem faced by attackers in any NLP task, there is a specific challenge to attackers of structured prediction models: the structured output of structured prediction models is sensitive to small perturbations in the input. To address these problems, we propose a novel and unified framework that learns to attack a structured prediction model using a sequence-to-sequence model with feedback from multiple reference models of the same structured prediction task. Based on the proposed attack, we further reinforce the victim model with adversarial training, making its prediction more robust and accurate. We evaluate the proposed framework in dependency parsing and part-of-speech tagging. Automatic and human evaluations show that our proposed framework succeeds in both attacking state-of-the-art structured prediction models and boosting them with adversarial training. © 2020 Association for Computational Linguistics

Proceedings Editors / Conference Sponsors: Amazon Science; Apple; Baidu; Bloomberg Engineering; et al.; Google Research
Keywords: Classification (of information); Computational linguistics; Forecasting; Natural language processing systems; Dependency parsing; Learn+; Multiple references; Prediction modelling; Prediction tasks; Reference modeling; Sequence models; Small perturbations; Structured prediction; Unified framework
Conference Name: 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020
Conference Location: Virtual, Online
Conference Dates: November 16, 2020 - November 20, 2020
Indexed By: EI
Language: English
Publisher: Association for Computational Linguistics (ACL)
EI Accession Number: 20214511120662
EI Subject Terms: Syntactics
EI Classification Codes: 716.1 Information Theory and Signal Processing; 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory; 723.2 Data Processing and Image Processing; 903.1 Information Sources and Analysis
Original Document Type: Conference article (CA)
Document Type: Conference Paper
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/251880
Collections: School of Information Science and Technology_PhD Students
School of Information Science and Technology_PI Research Groups_Kewei Tu Group
Corresponding Author: Tu, Kewei
Author Affiliations
1.School of Computing, National University of Singapore, Singapore;
2.School of Information Science and Technology, ShanghaiTech University, China;
3.Shanghai Engineering Research Center of Intelligent Vision and Imaging, China;
4.Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, China;
5.University of Chinese Academy of Sciences, China;
6.Alibaba DAMO Academy, Alibaba Group, China
Corresponding Author Affiliation: School of Information Science and Technology
Recommended Citation
GB/T 7714
Han, Wenjuan,Zhang, Liwen,Jiang, Yong,et al. Adversarial attack and defense of structured prediction models[C]//Amazon Science, Apple, Baidu, Bloomberg Engineering, et al., Google Research:Association for Computational Linguistics (ACL),2020:2327-2338.

Unless otherwise specified, all content in this system is protected by copyright, with all rights reserved.