Abstract
With the continued adoption of deep learning in computer vision, network security, natural language processing, and related fields, the technology has gradually exposed certain security risks. Existing deep learning algorithms cannot effectively describe the essential characteristics of data, so an algorithm may fail to produce correct results when faced with malicious input. Starting from the security threats currently facing deep learning, this paper introduces the adversarial example problem, surveys existing hypotheses on the existence of adversarial examples, reviews and classifies classic adversarial example construction methods, outlines recent applications of adversarial examples in different scenarios, compares several defense techniques, and finally summarizes open problems in adversarial example research and forecasts the field's development trend.
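Among the classic construction methods the survey reviews, the fast gradient sign method (FGSM) is perhaps the best known: it perturbs an input by a small step along the sign of the loss gradient. A minimal illustrative sketch, not taken from the paper and using a toy linear model as a stand-in for a trained network:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: add an epsilon-bounded
    perturbation along the sign of the loss gradient w.r.t. x."""
    return x + epsilon * np.sign(grad)

# Toy linear "model": score = w . x, so the gradient of the score
# with respect to the input x is simply w (hypothetical values).
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])

# Each coordinate of x is shifted by exactly +/- epsilon,
# keeping the perturbation imperceptibly small in the L-inf norm.
x_adv = fgsm_perturb(x, grad=w, epsilon=0.1)
```

In a real attack, `grad` would come from backpropagating the classification loss through the network, which is why small, structured perturbations can flip the model's prediction while leaving the input visually unchanged.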
Authors
DUAN Guanghan; MA Chunguang; SONG Lei; WU Peng (College of Computer Science and Technology, Harbin Engineering University, Harbin 150001, China; College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China)
Source
Chinese Journal of Network and Information Security (《网络与信息安全学报》), 2020, No. 2, pp. 1-11 (11 pages)
Funding
National Natural Science Foundation of China (No. 61472097, No. 61932005, No. U1936112)
Natural Science Foundation of Heilongjiang Province (No. JJ2019LH1770)
Keywords
adversarial example
deep learning
security threat
defense technology