Abstract
Adversarial attacks interfere with the operation of a deep neural network by adding carefully designed, hard-to-perceive attack data to the model's inputs, causing it to produce erroneous outputs. Such attacks can cause serious problems, so defending against them effectively is of great importance. In this paper, Perlin noise with different spatial frequency characteristics was added to the model inputs and training samples, masking the attack data with spatially structured noise, and its defense effect was investigated against three representative adversarial attack methods: the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Sparse L1 Descent (SLD). The results showed that: (1) Perlin noise improved the accuracy and robustness of the model; (2) Perlin noise at different spatial frequencies differed in its defense effect; (3) against the SLD attack, Perlin noise defended better than spatially unstructured noise. These results indicate that Perlin noise improves the accuracy and robustness of the model and provides a good defense against SLD attacks.
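The paper's implementation is not reproduced in this record, but the preprocessing step the abstract describes can be sketched. Below is a minimal NumPy sketch under the assumption that the defense adds amplitude-bounded 2D Perlin noise to a normalized grayscale image before classification; the names perlin_noise and defend and the parameters freq (gradient-grid cells per axis, i.e., the spatial frequency) and eps (noise amplitude) are illustrative assumptions, not taken from the paper.

    import numpy as np

    def perlin_noise(shape, freq, seed=0):
        # 2D Perlin noise on an (H, W) grid. freq = number of gradient-grid
        # cells per axis; higher freq means finer spatial structure.
        rng = np.random.default_rng(seed)
        angles = rng.uniform(0.0, 2.0 * np.pi, (freq + 1, freq + 1))
        grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)  # unit gradients

        ys = np.linspace(0.0, freq, shape[0], endpoint=False)
        xs = np.linspace(0.0, freq, shape[1], endpoint=False)
        yy, xx = np.meshgrid(ys, xs, indexing="ij")
        y0, x0 = yy.astype(int), xx.astype(int)   # cell indices per pixel
        fy, fx = yy - y0, xx - x0                 # position inside each cell

        def corner(iy, ix, dy, dx):
            # Dot product of the corner gradient with the offset to the pixel.
            g = grads[iy, ix]
            return g[..., 0] * dx + g[..., 1] * dy

        n00 = corner(y0, x0, fy, fx)
        n01 = corner(y0, x0 + 1, fy, fx - 1.0)
        n10 = corner(y0 + 1, x0, fy - 1.0, fx)
        n11 = corner(y0 + 1, x0 + 1, fy - 1.0, fx - 1.0)

        def fade(t):
            # Quintic smoothstep used by classic Perlin noise.
            return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)

        u, v = fade(fx), fade(fy)
        n0 = n00 * (1.0 - u) + n01 * u
        n1 = n10 * (1.0 - u) + n11 * u
        return n0 * (1.0 - v) + n1 * v

    def defend(x, eps=0.1, freq=8, seed=0):
        # Mask potential adversarial perturbations by adding Perlin noise of
        # amplitude eps to a normalized (H, W) input before classification.
        noise = perlin_noise(x.shape, freq, seed)
        noise = eps * noise / (np.abs(noise).max() + 1e-12)  # scale to [-eps, eps]
        return np.clip(x + noise, 0.0, 1.0)

    x = np.random.rand(32, 32)               # stand-in for a normalized input image
    x_defended = defend(x, eps=0.1, freq=8)

Varying freq corresponds to the paper's comparison of noise at different spatial frequencies; replacing perlin_noise with rng.uniform(-1, 1, x.shape) would give the spatially unstructured baseline mentioned in result (3). For reference, the simplest of the three attacks, FGSM, crafts x_adv = x + eps * sign(grad_x L(x, y)).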
Authors
施霖
邓浩东
贺建峰
SHI Lin; DENG Haodong; HE Jianfeng (Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China)
Source
《昆明理工大学学报(自然科学版)》
PKU Core Journal (北大核心)
2024, No. 4, pp. 128-137 (10 pages)
Journal of Kunming University of Science and Technology(Natural Science)
Funding
National Natural Science Foundation of China (62162033).