Abstract
Deep neural networks exhibit superior feature extraction capabilities, which enable them to extract and learn features from samples that are difficult for humans to perceive and comprehend. This ability has been a driving force behind their rapid development and widespread deployment, and in recent years they have delivered exceptional performance in fields such as medical imaging, autonomous driving, remote sensing, and face recognition. However, this powerful feature extraction ability also introduces potential security risks. Researchers have discovered that deep neural network models are susceptible to adversarial examples: even a small amount of carefully crafted perturbation can cause a model to produce erroneous results. To explore the security threats to deep neural networks and the robustness of models, adversarial attack research has become an important line of work. This study proposes an "in-a-blink" laser physical adversarial attack based on Bayesian optimization, which exploits the effective exposure time and the speed of a laser to launch, at the right moment, an instantaneous attack that is genuinely imperceptible to the human eye. The effectiveness of the laser adversarial attack is verified in both the digital and physical domains, and the threats this attack method may pose in autonomous driving scenarios are discussed. In addition, physical experiments confirm the existence of the "in-a-blink" attack, analyze it, and verify that a suitable attack window exists for laser attacks against autonomous driving systems.
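The core loop described above, Bayesian optimization searching for laser parameters that minimize the victim model's confidence in the true class, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the objective function is a synthetic stand-in for querying a real DNN on a laser-perturbed image, and the parameter names (`angle`, `offset`) are hypothetical choices of laser-stripe parameters.

```python
import numpy as np
from math import erf, sqrt, pi, exp

rng = np.random.default_rng(0)

# Hypothetical stand-in for the victim model: returns the classifier's
# confidence in the true class after a simulated laser stripe is overlaid
# on the input image. The real objective would query a DNN.
def true_class_confidence(params):
    angle, offset = params  # laser stripe angle (rad) and horizontal offset
    # synthetic landscape with a "vulnerable" region near (1.0, 0.3)
    return 0.9 - 0.6 * exp(-((angle - 1.0) ** 2 + (offset - 0.3) ** 2) / 0.05)

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """Gaussian-process posterior mean and std at candidate points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.maximum(1.0 - (v ** 2).sum(0), 1e-12)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # we MINIMIZE confidence, so improvement = best - mu
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mu) * Phi + sigma * phi

# search space: angle in [0, pi/2], offset in [0, 1]
X = rng.uniform([0, 0], [pi / 2, 1], size=(5, 2))   # initial random probes
y = np.array([true_class_confidence(p) for p in X])

for _ in range(25):                                 # Bayesian optimization loop
    cand = rng.uniform([0, 0], [pi / 2, 1], size=(256, 2))
    mu, sigma = gp_posterior(X, y, cand)
    nxt = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, nxt])
    y = np.append(y, true_class_confidence(nxt))

best = X[np.argmin(y)]
print(f"lowest confidence {y.min():.3f} at angle={best[0]:.2f}, offset={best[1]:.2f}")
```

Because the attack is black-box, each evaluation here corresponds to one query of the victim model; the Gaussian-process surrogate with expected improvement keeps the number of such queries small, which matches the need to find a working laser configuration within a short attack window.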
Authors
WU Hanyu; YANG Liyun; WU Hao; XU Peng; TIAN Ling (University of Electronic Science and Technology of China, Chengdu 611731, China; China Electronics Standardization Institute, Beijing 100007, China)
Keywords
deep neural networks
computer vision
adversarial attack
physical adversarial attack