Robust Physical Adversarial Camouflages for Image Classifiers
Abstract: Deep learning models are vulnerable to adversarial examples. As a more threatening form of attack against real-world deep learning systems, physical adversarial examples have received extensive research attention in recent years. Most existing methods use local adversarial patch noise to attack image classification models in the physical world; however, the attack effect of 2D patches in 3D space inevitably declines as the viewing angle changes. To address this issue, the proposed Adv-Camou method uses spatial combination transformations to generate, in real time, training examples with arbitrary viewpoints and transformed backgrounds, and minimizes the cross-entropy loss between the predicted class and the target class so that the model outputs the specified incorrect class. In addition, the established 3D simulation scene allows different attacks to be evaluated fairly and reproducibly. Experimental results show that the coated adversarial camouflage generated by Adv-Camou can fool intelligent image classifiers from all viewing angles: in the 3D simulation scene its average targeted attack success rate is more than 25% higher than that of textures pieced together from multiple patches, its black-box targeted attack success rate against the Clarifai commercial classification system reaches 42%, and 3D-printed model experiments in the real world achieve an average attack success rate of about 66%, demonstrating state-of-the-art attack performance.
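The core objective described in the abstract, minimizing the cross-entropy between the model's prediction and an attacker-chosen target class over randomly transformed renderings, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: render, sample_viewpoint, and sample_background are hypothetical stand-ins for the paper's spatial combination transformations, and classifier is any pretrained image classifier.

import torch
import torch.nn.functional as F

def attack_step(texture, classifier, render, sample_viewpoint,
                sample_background, target_class, lr=0.01, batch_size=8):
    """One optimization step: minimize the cross-entropy between the
    classifier's prediction and the attacker-chosen target class,
    averaged over randomly sampled viewpoints and backgrounds.
    `render`, `sample_viewpoint`, and `sample_background` are assumed
    hypothetical callables; `render` must be differentiable in `texture`."""
    texture = texture.detach().requires_grad_(True)
    # Render a batch of views of the camouflaged object under random
    # viewpoints and backgrounds (an expectation-over-transformation batch).
    images = torch.stack([
        render(texture, sample_viewpoint(), sample_background())
        for _ in range(batch_size)
    ])                                           # (B, C, H, W)
    logits = classifier(images)                  # (B, num_classes)
    target = torch.full((batch_size,), target_class, dtype=torch.long)
    loss = F.cross_entropy(logits, target)       # targeted attack loss
    loss.backward()
    with torch.no_grad():
        # Signed gradient descent on the texture, clamped to valid pixels.
        texture = (texture - lr * texture.grad.sign()).clamp(0.0, 1.0)
    return texture, loss.item()

Iterating this step many times approximates an expectation-over-transformation optimization, which is what makes the resulting texture robust to viewpoint and background changes rather than overfitting to a single view.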
Authors: DUAN Ye-xin (段晔鑫), HE Zheng-yun (贺正芸), ZHANG Song (张颂), ZHAN Da-zhi (詹达之), WANG Tian-feng (王田丰), LIN Geng-you (林庚右), ZHANG Jin (张锦), PAN Zhi-song (潘志松); Zhenjiang Campus, Army Military Transportation University, Zhenjiang, Jiangsu 212003, China; Railway Transportation College, Hunan University of Technology, Zhuzhou, Hunan 412007, China; Department of Cyberspace Security, Beijing Electronic Science and Technology Institute, Beijing 100071, China; Command and Control Engineering College, Army Engineering University, Nanjing, Jiangsu 210007, China
Source: Acta Electronica Sinica (《电子学报》), indexed in EI, CAS, CSCD, and the Peking University Core list, 2024, No. 3, pp. 863-871 (9 pages)
Funding: National Natural Science Foundation of China (No. 62076251).
Keywords: adversarial example; adversarial camouflage; adversarial attack; image classification; deep neural network