Abstract
Gradient-based attacks that modify input images can reduce the accuracy of neural-network classifiers by roughly 10%. To address this problem, the idea of moving target defense from the cyberspace security domain is applied to increase the robustness of neural networks against such attacks. The concept of "differential immunity" of the whole network ensemble is defined, and the interaction between the defender and the users is modeled as a repeated Bayesian Stackelberg game. Based on this game, one trained network is selected from the ensemble to classify each input image. The proposed defense reduces the classification error on perturbed MNIST images while maintaining high accuracy on clean test images. It can also be combined with existing defense mechanisms to further secure neural networks.
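The core mechanism summarized above, switching at query time among a pool of differently trained networks according to a defender's mixed strategy, can be illustrated with a minimal sketch. The class name MovingTargetDefense, the .predict() interface, and the assumption that the mixed strategy has already been computed (for example by solving the repeated Bayesian Stackelberg game mentioned in the abstract) are illustrative assumptions, not code from the paper:

import numpy as np

class MovingTargetDefense:
    """Sketch of MTD-style classification: for every query, one network is
    sampled from a pool of independently trained models according to a
    mixed strategy assumed to be precomputed by the defender."""

    def __init__(self, models, strategy):
        # models: list of trained classifiers, each exposing .predict(image)
        # strategy: defender's mixed strategy, one probability per model
        assert len(models) == len(strategy)
        self.models = models
        self.strategy = np.asarray(strategy, dtype=float)
        self.strategy /= self.strategy.sum()  # normalize to a distribution

    def classify(self, image, rng=np.random):
        # Sample one network per query, so a gradient-based perturbation
        # crafted against any single fixed model is less likely to transfer.
        idx = rng.choice(len(self.models), p=self.strategy)
        return self.models[idx].predict(image)

In this sketch the defense does not alter the individual networks; robustness comes from the attacker's uncertainty about which member of the ensemble will handle a given image, which is why the "differential immunity" of the ensemble (how differently its members fail under the same perturbation) matters.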
Authors
Wang Fang; Zhou Xiangzhen (Chongqing Business Vocational College, Chongqing 401331, China; School of Computer Science and Engineering, Beihang University, Beijing 100191, China; Department of Information Engineering, Zhengzhou Shengda University of Economics, Business and Management, Zhengzhou 451191, Henan, China)
Source
Computer Applications and Software (《计算机应用与软件》), a Peking University Core (北大核心) journal
2021, No. 3, pp. 142-146 (5 pages)
Funding
General Program of the National Natural Science Foundation of China (61672077)
2018 Henan Province Key R&D and Promotion Special Project (182102110277)