S-JSMA: A fast JSMA adversarial example generation method with low perturbation redundancy
Abstract: Techniques based on deep learning neural network models are widely used in computer vision, natural language processing, and other fields. However, researchers have found that neural network models carry significant security risks; for example, they are vulnerable to adversarial example attacks. Studying adversarial example techniques for image classification helps reveal the vulnerability of neural network models and, in turn, advances research on security hardening mechanisms for such models. To overcome the high time overhead and perturbation redundancy of the JSMA method, a fast JSMA adversarial example generation method with low perturbation redundancy, called S-JSMA, is proposed. S-JSMA replaces JSMA's iterative procedure with a single-step operation to simplify the algorithm's workflow, and adopts a simple perturbation rule in place of the saliency-map-based perturbation used in JSMA. Consequently, S-JSMA greatly reduces both the time overhead and the perturbation redundancy of adversarial example generation. Experimental results on the MNIST dataset demonstrate that, compared with the JSMA and FGSM methods, S-JSMA achieves comparable attack effectiveness in significantly less time.
Authors: LIU Qiang, LI Mu-chun, WU Xiao-jie, WANG Yu-heng (College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China)
Source: Computer Engineering & Science (《计算机工程与科学》), CSCD / Peking University Core Journal, 2024, No. 8, pp. 1395-1402 (8 pages)
Funding: Natural Science Foundation of Hunan Province (2021JJ30779).
Keywords: deep learning; neural network; image classification; adversarial example
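
The abstract describes two changes to JSMA: a single gradient/saliency computation in place of JSMA's per-iteration recomputation, and a simple fixed perturbation rule in place of JSMA's saliency-map pixel-pair search. The sketch below illustrates that single-pass idea in PyTorch. It is a minimal illustration based only on the abstract, not the authors' implementation; the model interface, the function name, and the parameters max_pixels and theta are assumptions.

import torch

def s_jsma_sketch(model, x, target, max_pixels=40, theta=1.0):
    """Illustrative single-pass variant of JSMA (not the authors' code).

    JSMA recomputes a Jacobian-based saliency map at every iteration;
    following the abstract, this sketch computes the gradient once,
    ranks pixels by it, and applies a simple fixed perturbation to the
    top-ranked pixels until the model predicts the target class.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)                      # x is a (1, C, H, W) batch
    logits[0, target].backward()               # one gradient computation
    grad = x_adv.grad.detach().flatten()

    x_adv = x_adv.detach().flatten()
    order = torch.argsort(grad.abs(), descending=True)
    for idx in order[:max_pixels]:
        # Simple perturbation rule: push the pixel a fixed step in the
        # direction that raises the target-class logit, clipped to [0, 1].
        x_adv[idx] = torch.clamp(x_adv[idx] + theta * grad[idx].sign(), 0.0, 1.0)
        with torch.no_grad():
            if model(x_adv.view_as(x)).argmax(dim=1).item() == target:
                break                          # stop early: fewer perturbed pixels
    return x_adv.view_as(x)

With an MNIST classifier net and a normalized digit x in [0, 1], s_jsma_sketch(net, x, target=3) would return a candidate adversarial image. Whether theta = 1.0 (pushing pixels to their extremes in one step) matches the paper's exact perturbation rule is an assumption; the paper itself should be consulted for the precise algorithm.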