
Generating Adversarial Examples Based on Attention Mechanism
Abstract: In recent years, research on adversarial attacks and adversarial defenses has received extensive attention and found numerous applications. Because small perturbations of an input can change the recognition result, neural networks lack robustness. This paper studies adversarial attack methods using a projected gradient algorithm based on the attention mechanism: gradient-weighted class activation maps (Grad-CAM) are used to locate salient regions, and noise perturbations are added to those regions to carry out the attack. Experiments use the MNIST, CIFAR-10, and ImageNet datasets, with VGG19, VGG16, ResNet50, ResNet18, Inception_v3, and DenseNet as target models. The attack success rate on the mini-ImageNet dataset reaches 96.3%, 23.4% higher than that of the FGSM attack algorithm, while the perturbed region is smaller and harder to notice with the naked eye, giving a better attack effect.
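The abstract describes the method only at a high level. As a reading aid, here is a minimal sketch, in PyTorch, of an attention-guided projected gradient attack that weights the perturbation with a Grad-CAM saliency mask. The function names, the choice of feature layer, and the hyperparameters (eps, alpha, steps) are illustrative assumptions, not taken from the paper; the authors' exact algorithm may differ.

```python
# Illustrative sketch (not the paper's code): attention-guided PGD with a Grad-CAM mask.
import torch
import torch.nn.functional as F


def grad_cam_mask(model, feature_layer, x, labels):
    """Grad-CAM saliency mask in [0, 1] for the labelled class of each image."""
    activations, gradients = [], []
    h1 = feature_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = feature_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    logits = model(x)
    model.zero_grad()
    logits.gather(1, labels.view(-1, 1)).sum().backward()
    h1.remove()
    h2.remove()

    acts, grads = activations[0], gradients[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)           # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # weighted activation map
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    cam = cam - cam.amin(dim=(2, 3), keepdim=True)
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)  # normalise to [0, 1]
    return cam.detach()


def attention_guided_pgd(model, feature_layer, x, labels,
                         eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD whose perturbation is restricted to the Grad-CAM-salient region (sketch)."""
    mask = grad_cam_mask(model, feature_layer, x, labels)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # concentrate the perturbation on the attention (salient) region
        x_adv = x_adv.detach() + alpha * mask * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)              # project onto the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```

A typical call, assuming a torchvision ResNet18 with inputs scaled to [0, 1], would be attention_guided_pgd(model, model.layer4, images, labels), where model.layer4 is the last convolutional block used for the Grad-CAM mask.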
Authors: ZHAO Binsu (赵彬粟), LI Linfang (李灵芳), LUO Mingxing (罗明星) (School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, Sichuan)
Source: Journal of Sichuan Normal University (Natural Science) (《四川师范大学学报(自然科学版)》), CAS, 2023, No. 2, pp. 275-284 (10 pages)
Funding: National Natural Science Foundation of China (61303039 and 62172341).
Keywords: adversarial examples; attention mechanism; deep neural networks; adversarial attack