
Batch Zeroth-Order Gradient Sign Method Based on Surrogate Models
Abstract  In the field of adversarial attacks on neural networks, a key open problem for universal attacks against black-box models is how to generate a universal perturbation that causes most samples to be misclassified. Existing black-box algorithms for generating universal perturbations, however, attack poorly, and the perturbations they produce are easily noticed by the naked eye. To address this problem, this work takes typical convolutional neural networks as the research object and proposes a batch zeroth-order gradient sign method based on surrogate models. The method initializes the universal perturbation with white-box attacks on a set of surrogate models, and then updates the perturbation stably and efficiently by querying the target model under black-box conditions. Experimental results on two datasets, CIFAR-10 and SVHN, show that, compared with baseline algorithms, the attack capability of the proposed method is significantly improved, and its performance in generating universal perturbations increases by nearly a factor of three.
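
The abstract describes a two-stage procedure: white-box sign-gradient attacks on a set of surrogate models to initialize the universal perturbation, followed by black-box refinement of that perturbation using zeroth-order gradient estimates obtained from batched queries to the target model. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation; the surrogate models, the target_query loss oracle, and all hyperparameter names are assumptions.

# A minimal sketch (assumed, not the authors' code) of the two-stage idea in the
# abstract: (1) initialize a universal perturbation with white-box sign-gradient
# steps on a set of surrogate models, (2) refine it with batched zeroth-order
# gradient-sign estimates obtained by querying the black-box target model.
# `surrogates`, `target_query`, and all hyperparameters are hypothetical.
import torch
import torch.nn.functional as F

def init_universal_perturbation(surrogates, loader, eps, alpha, steps):
    # White-box stage: FGSM-style sign steps accumulated over the surrogate set.
    delta = torch.zeros(3, 32, 32)  # CIFAR-10 / SVHN image shape (assumed)
    for _ in range(steps):
        for x, y in loader:
            grad_sum = torch.zeros_like(delta)
            for model in surrogates:
                x_adv = (x + delta).clamp(0, 1).requires_grad_(True)
                loss = F.cross_entropy(model(x_adv), y)
                grad_sum += torch.autograd.grad(loss, x_adv)[0].sum(dim=0)
            delta = (delta + alpha * grad_sum.sign()).clamp(-eps, eps)
    return delta

def zeroth_order_refine(target_query, loader, delta, eps, alpha, mu, q, steps):
    # Black-box stage: estimate the gradient of the target's loss with respect
    # to the universal perturbation via q two-sided random-direction finite
    # differences per batch, then take a sign step inside the eps-ball.
    for _ in range(steps):
        for x, y in loader:
            g_hat = torch.zeros_like(delta)
            for _ in range(q):
                u = torch.randn_like(delta)
                u = u / u.norm()
                # target_query(images, labels) -> scalar loss from the black box
                l_plus = target_query((x + delta + mu * u).clamp(0, 1), y)
                l_minus = target_query((x + delta - mu * u).clamp(0, 1), y)
                g_hat += (l_plus - l_minus) / (2 * mu) * u
            delta = (delta + alpha * g_hat.sign()).clamp(-eps, eps)
    return delta

In this sketch the black-box update only uses loss values returned by target_query, matching the query-only setting described in the abstract; the per-batch averaging of several random directions stands in for the "batch" zeroth-order estimation that keeps the update stable.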
Authors  LI Yanda (李炎达); FAN Chunlong (范纯龙); TENG Yiping (滕一平); YU Kaibo (于铠博) — School of Computer Science, Shenyang Aerospace University, Shenyang 110136, China
Source  Computer Science (《计算机科学》), CSCD, Peking University Core Journal, 2023, No. S02, pp. 851-856 (6 pages)
Funding  National Natural Science Foundation of China Youth Fund (61902260); Scientific Research Project of the Department of Education of Liaoning Province (JYT2020026).
Keywords  Convolutional neural network; Universal perturbation; Adversarial attack; Black-box attack; Surrogate model