Abstract
Deep neural networks are vulnerable to carefully crafted adversarial examples. Although adversarial training based on min-max optimization can improve network robustness, it requires models with larger capacity and more parameters than natural training does. To obtain a network model with both high robustness and high sparsity, this paper approaches the problem from a model-compression perspective: it experimentally analyzes the relationship between model accuracy, robustness, and sparsity, and, based on the sparsity-sensitivity characteristics of robust networks, proposes an unstructured pruning algorithm for robust networks guided by sparse sensitivity. White-box attack experiments on the MNIST and CIFAR-10 datasets show that the algorithm maintains high model accuracy and high robustness even at large pruning rates. Under black-box attacks, the robust accuracy of sparse models produced by the algorithm can even exceed that of the unpruned models.
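The abstract describes unstructured pruning of adversarially trained networks, but the paper's sparse-sensitivity criterion is not detailed in this record. The following is therefore only a minimal sketch of generic magnitude-based unstructured pruning at a given pruning rate; the function name `magnitude_prune` and its list-of-floats interface are illustrative assumptions, not the authors' implementation:

```python
def magnitude_prune(weights, prune_rate):
    """Unstructured pruning sketch: zero out the smallest-magnitude
    fraction `prune_rate` of individual weights.

    This uses a plain global magnitude criterion, NOT the paper's
    sparsity-sensitivity criterion. Ties at the threshold may prune
    slightly more than the requested fraction.
    """
    flat = sorted(abs(w) for w in weights)       # magnitudes, ascending
    k = int(len(flat) * prune_rate)              # number of weights to drop
    threshold = flat[k - 1] if k > 0 else -1.0   # largest magnitude to remove
    # Unstructured: individual weights are zeroed, so the layer shape
    # is unchanged and the resulting sparsity pattern is irregular.
    return [0.0 if abs(w) <= threshold else w for w in weights]


# Example: pruning half of a 4-weight tensor removes the two
# smallest-magnitude entries.
pruned = magnitude_prune([0.5, -0.1, 0.3, -0.05], prune_rate=0.5)
```

In practice such a mask would be applied per layer (or globally) to a trained robust model, followed by fine-tuning with adversarial training to recover robust accuracy.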
Authors
Li Ping; Yuan Xiaotong (School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, Jiangsu, China; Jiangsu Key Laboratory of Big Data Analysis Technology, Collaborative Innovation Center of Atmospheric Environment and Equipment Technology, Nanjing 210044, Jiangsu, China)
Source
Computer Applications and Software (《计算机应用与软件》)
Peking University Core Journal (北大核心)
2023, No. 5, pp. 200-206 (7 pages)
Funding
National Major Project for New Generation Artificial Intelligence (2018AAA0100400)
National Natural Science Foundation of China (61876090, 61936005).
Keywords
Robustness
Adversarial training
Unstructured pruning
Sparse sensitivity