
Convolutional Neural Network Visualization in Adversarial Example Attack

Abstract: In deep learning, repeated convolution and pooling operations help a model learn image features, but the complex nonlinear operations involved make deep learning models difficult for users to understand. The adversarial example attack is a form of attack unique to deep learning: the attacker applies imperceptible changes to an image so as to alter the model's prediction. This paper studies adversarial example attacks together with neural network interpretability, which is believed to have considerable potential for resisting adversarial examples. Interpretability helped reveal how adversarial examples induce a neural network to make wrong judgments and helped identify adversarial examples in the test set. An image recognition model was built on the ImageNet training set; an adversarial-example generation algorithm and a neural network visualization algorithm were then designed to produce the model's learning heat maps for both the original examples and the adversarial examples. The results extend the application of neural network interpretability to the field of resisting adversarial-example attacks.
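The abstract does not specify which adversarial-example generation algorithm was used. As an illustrative stand-in, the sketch below applies the well-known fast gradient sign method (FGSM) to a simple logistic-regression classifier, where the loss gradient with respect to the input can be written in closed form; the paper's actual setting (a CNN on ImageNet) would use automatic differentiation instead, and all names and values here are hypothetical.

```python
import numpy as np

def fgsm_linear(w, b, x, y, epsilon=0.1):
    """FGSM-style perturbation of input x for a logistic classifier.

    Moves x a step of size epsilon along the sign of the gradient of
    the logistic loss with respect to the input: x' = x + eps*sign(dL/dx).
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted probability of class 1
    grad_x = (p - y) * w                    # dL/dx for the logistic loss
    return x + epsilon * np.sign(grad_x)

# Toy classifier and input (hypothetical values, for illustration only).
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, 0.1, 0.4])
y = 1.0  # true label

x_adv = fgsm_linear(w, b, x, y)

# Each coordinate moves by exactly epsilon, so the perturbation stays
# small and uniform, yet the loss on the true label increases.
print(np.max(np.abs(x_adv - x)))
```

Because the perturbation is bounded coordinate-wise by epsilon, it can remain visually imperceptible on images while still shifting the model's decision, which is the attack behavior the abstract describes.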
Affiliation: School of Information
Source: Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators (ICPCSEE), 2020, No. 1, pp. 247-258 (12 pages).
Funding: National Defense Science and Technology Innovation Special Zone Project (No. 18-163-11-ZT-002-045-04).