
Joint adversarial training method based on learnable attack step size
Abstract  Adversarial training (AT) is a powerful means of defending against adversarial attacks. However, existing methods often struggle to balance training efficiency and adversarial robustness: some improve training efficiency at the cost of robustness, while others do the opposite. To achieve a better trade-off, this paper proposes a joint adversarial training method based on a learnable attack step size (FGSM-LASS). The method consists of a prediction model and a target model. The prediction model predicts an attack step size for each example, replacing the fixed step size used in the FGSM algorithm. The improved FGSM algorithm then takes the target model parameters and the original examples as input to generate adversarial examples. Finally, the prediction model and the target model are trained jointly on these adversarial examples. Compared with five recent methods, FGSM-LASS is six times faster than LAS-AT, the most robust of them, with only a 1% drop in robustness, and is 3% more robust than ATAS, which is comparable in speed. Extensive experimental results demonstrate that FGSM-LASS outperforms existing methods in the trade-off between training speed and adversarial robustness.
Authors  Yang Shikang; Liu Yi (School of Computer Science & Technology, Guangdong University of Technology, Guangzhou 510006, China)
Source  Application Research of Computers (《计算机应用研究》), CSCD, Peking University Core Journal, 2024, Issue 6, pp. 1845-1850 (6 pages)
Funding  Guangdong Provincial Key R&D Program (2021B0101200002).
Keywords  adversarial training (AT); adversarial example; adversarial attack; prediction model; learnable attack step size
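
The abstract describes the FGSM-LASS pipeline at a high level: a prediction model outputs a per-example attack step size, a modified single-step FGSM attack uses that step to craft adversarial examples, and the prediction and target models are then trained jointly. The PyTorch sketch below illustrates that idea only; the StepPredictor architecture, the 8/255 perturbation budget, the optimizers, and the predictor's training objective (making the crafted examples as strong as possible) are assumptions for illustration, not the paper's actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StepPredictor(nn.Module):
        """Tiny CNN that predicts one attack step size per example (illustrative)."""
        def __init__(self, in_ch=3, eps=8 / 255):
            super().__init__()
            self.eps = eps
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
            )
        def forward(self, x):
            # Sigmoid keeps each predicted step inside (0, eps).
            return self.eps * torch.sigmoid(self.net(x)).view(-1, 1, 1, 1)

    def joint_training_step(target, predictor, opt_t, opt_p, x, y, eps=8 / 255):
        """One joint update of the target model and the step-size predictor."""
        # 1) Predict per-example step sizes and craft FGSM-style adversarial examples.
        alpha = predictor(x)
        x_nat = x.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(target(x_nat), y), x_nat)[0]
        x_adv = torch.clamp(x + alpha * grad.sign(), x - eps, x + eps)  # project into eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)

        # 2) Update the predictor. Assumed objective: predict steps that maximize
        #    the target's loss on the crafted examples; the paper's objective may differ.
        loss_p = -F.cross_entropy(target(x_adv), y)
        opt_p.zero_grad()
        loss_p.backward()
        opt_p.step()

        # 3) Update the target model on the (now fixed) adversarial examples.
        loss_t = F.cross_entropy(target(x_adv.detach()), y)
        opt_t.zero_grad()
        loss_t.backward()
        opt_t.step()
        return loss_t.item()

    # Usage with random data and a toy target model (shapes assume 32x32 RGB input):
    target = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )
    predictor = StepPredictor()
    opt_t = torch.optim.SGD(target.parameters(), lr=0.1, momentum=0.9)
    opt_p = torch.optim.SGD(predictor.parameters(), lr=0.01)
    x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
    print(joint_training_step(target, predictor, opt_t, opt_p, x, y))

Sigmoid-bounding the predicted step keeps each per-example step inside the perturbation budget, so the attack remains a valid single-step FGSM variant while the step size adapts to each input.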