Abstract
Adversarial texts are malicious samples that cause deep learning classifiers to make incorrect predictions. An adversary crafts an adversarial text that deceives the target model by adding small perturbations, imperceptible to humans, to the original text. Studying adversarial text generation methods makes it possible to evaluate the robustness of deep neural networks and supports subsequent work on improving model robustness. Among existing adversarial text generation methods designed for Chinese text, few attack the highly robust Chinese BERT model as the target. For Chinese text classification tasks, this study proposes an attack method against Chinese BERT, named Chinese BERT Tricker. The method adopts a character-level word importance scoring scheme, important Chinese character positioning; meanwhile, a word-level perturbation method for Chinese, based on the masked language model and comprising two types of strategies, is designed to replace important words. Experiments show that, for text classification tasks, the proposed method reduces the classification accuracy of the Chinese BERT model to below 40% on two real-world datasets and clearly outperforms other baseline methods on multiple attack metrics.
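The attack pipeline the abstract describes, scoring word importance by masking and then substituting the most important words with masked-language-model candidates, can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: `toy_classify` stands in for the victim Chinese BERT classifier and `toy_propose` stands in for MLM-generated replacement candidates, both of which are hypothetical stand-ins.

```python
# Illustrative sketch of an MLM-substitution attack loop (not the paper's code).
# classify(words) -> confidence in the original label; propose(words, i) -> candidates.

def word_importance(words, classify):
    """Score each word by the confidence drop when it is masked out."""
    base = classify(words)
    return [base - classify(words[:i] + ["[MASK]"] + words[i + 1:])
            for i in range(len(words))]

def greedy_attack(words, classify, propose, threshold=0.5):
    """Replace words in descending importance order until confidence < threshold."""
    words = list(words)
    scores = word_importance(words, classify)
    for i in sorted(range(len(words)), key=lambda j: -scores[j]):
        best, best_conf = None, classify(words)
        for cand in propose(words, i):  # a real attack would query an MLM here
            trial = words[:i] + [cand] + words[i + 1:]
            conf = classify(trial)
            if conf < best_conf:
                best, best_conf = cand, conf
        if best is not None:
            words[i] = best
        if best_conf < threshold:
            break  # attack succeeded: confidence pushed below the threshold
    return words

# Toy stand-ins for demonstration (hypothetical, not from the paper):
POSITIVE = {"great", "excellent"}

def toy_classify(words):
    # stand-in for the victim model: confidence = fraction of positive words
    return sum(w in POSITIVE for w in words) / max(len(words), 1)

def toy_propose(words, i):
    # stand-in for MLM fill-mask candidates
    return ["mediocre", "plain"]

adv = greedy_attack(["great", "movie", "excellent", "acting"],
                    toy_classify, toy_propose, threshold=0.3)
# adv -> ["mediocre", "movie", "excellent", "acting"], confidence 0.25 < 0.3
```

In practice the candidate generator would be a Chinese masked language model (e.g. querying `BertForMaskedLM` on a masked position) and the classifier a fine-tuned Chinese BERT; the greedy loop itself is the common skeleton of importance-ordered substitution attacks.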
Authors
ZHANG Yun-Ting;YE Lin;TANG Hao-Lin;ZHANG Hong-Li;LI Shang(School of Cyberspace Science,Harbin Institute of Technology,Harbin 150001,China)
Source
Journal of Software (《软件学报》)
EI
CSCD
Peking University Core Journal (北大核心)
2024, Issue 7, pp. 3392-3409 (18 pages)
Funding
National Natural Science Foundation of China (61872111).
Keywords
deep neural network(DNN)
adversarial example
textual adversarial attack
Chinese BERT
masked language model(MLM)