
A Sparse Adversarial Attack Example Generation Method for Privacy Protection
Abstract  To counter the over-mining of image information by deep neural networks in real-world scenarios such as video surveillance and social-network sharing, a sparse adversarial attack example generation method is proposed. The method attacks a deep neural network so that it misclassifies the image and cannot complete subsequent unauthorized tasks. Multiple objectives, including the number of perturbed pixels, the perturbation amplitude, and the perturbation positions, are jointly optimized, and adversarial examples are generated simply and efficiently by a sampling scheme. The adversarial success rate, number of perturbed pixels, perturbation amplitude, perturbation positions, and optimization effect are compared against five related methods, and the classification-space characteristics of the target model are analyzed from the distribution of the perturbed pixels. The generalization ability and practicality of the algorithm are evaluated through transfer tests and an application to object detection. Experimental results show that the algorithm still attacks deep neural networks effectively with a perturbation rate of no more than 1%, and it significantly optimizes the amplitude and positions of the perturbed pixels, so it damages the original image less and the perturbation is harder to perceive. The algorithm shows good generalization and practicality.
Authors  WANG Tao; MA Chuan; CHEN Shuping; YOU Dianlong (School of Business Administration, Hebei Normal University of Science and Technology, Qinhuangdao, Hebei 066004, China; Engineering Training Center, Yanshan University, Qinhuangdao, Hebei 066004, China; Library, Yanshan University, Qinhuangdao, Hebei 066004, China; School of Information Science and Engineering, Yanshan University, Qinhuangdao, Hebei 066004, China)
Source  Journal of Yanshan University (CAS-indexed; PKU Core), 2023, No. 6, pp. 538-549 (12 pages)
Funding  National Natural Science Foundation of China (62276226); Natural Science Foundation of Hebei Province (F2021203038)
Keywords  deep neural network; sparse adversarial attack; adversarial example; sampling; privacy protection