Abstract
Recent research has shown that some adversarial attacks alter the underlying characteristics of neural networks, misleading them and reducing the accuracy of deep learning models. To improve the defense capability of neural network models against such attacks, adversarial examples were designed with the DeepFool, BIM, and I-FGSM algorithms and used for model training. Experiments show that the DeepFool-based adversarial examples decreased accuracy from 91% to 88%, the BIM-based examples from 80% to 3%, and the I-FGSM-based examples from 94% to 40.78% and 58.58%, demonstrating that adversarial examples designed with all three algorithms can mount effective attacks.
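The abstract names DeepFool, BIM, and I-FGSM but does not spell out their update rules. As a point of reference only, the sketch below shows the iterative signed-gradient step shared by BIM and I-FGSM; the function name ifgsm_attack, the default eps, alpha, and steps values, and the use of PyTorch are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

def ifgsm_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Iterative FGSM (I-FGSM / BIM-style): repeatedly step in the direction
    # of the signed loss gradient, keeping the perturbation inside an
    # L-infinity ball of radius eps around the clean input x.
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                           # keep valid pixel range
    return x_adv.detach()

Feeding x_adv back into training batches (adversarial training) is one common way to pursue the defense goal stated in the abstract; DeepFool instead searches for the smallest perturbation that crosses the decision boundary and is not covered by this sketch.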
Author
XU Han (许晗), College of Information and Electronic Engineering, Liming Vocational University, Quanzhou 362000, China
Source
《黎明职业大学学报》
2024, No. 2, pp. 93-102 (10 pages)
Journal of LiMing Vocational University