Abstract
Adversarial examples reveal the vulnerability of neural network models and also serve as an important tool for evaluating their robustness. Their transferability, that is, the ability to successfully attack unknown network models, makes them especially applicable in real-world scenarios. Traditional ensemble methods operate at a coarse granularity, which limits the transferability of the adversarial examples they produce. This paper proposes a new approach from a modular perspective. First, the basic steps of traditional ensemble methods are finely restructured to adjust their granularity, and each step is further abstracted into a separate basic module. These modules are then divided into two distinct categories, with each category assigned a specific responsibility and focused on a single, well-defined task. Finally, a momentum mechanism is incorporated to further strengthen the transferability of the adversarial examples. Experimental results show that the proposed method yields substantial improvements across different ensemble strategies, confirming its effectiveness.
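Since the abstract only summarizes the approach, the following minimal Python (PyTorch) sketch illustrates the kind of momentum-based ensemble attack the method builds on, in the spirit of MI-FGSM with averaged logits. The function name, the logit-averaging fusion, and all hyperparameters are illustrative assumptions, not the paper's exact modular formulation.

# Minimal sketch (not the paper's exact method): a momentum-based iterative
# attack over an ensemble of surrogate models whose logits are averaged.
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn.functional as F

def momentum_ensemble_attack(models, x, y, eps=16/255, steps=10, decay=1.0):
    """Craft adversarial examples against an averaged-logit ensemble."""
    alpha = eps / steps                      # per-step perturbation budget
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                  # accumulated momentum
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # fuse the ensemble by averaging logits (one common fusion choice)
        logits = torch.stack([m(x_adv) for m in models]).mean(dim=0)
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # momentum update: normalize the gradient, then accumulate it
        g = decay * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # project back into the L-infinity ball and the valid pixel range
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()

In the paper's modular terms, the logit-fusion step and the gradient/momentum update would presumably correspond to separate modules, each with a single responsibility; the sketch above keeps them inline only for brevity.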
Authors
PU Hang (蒲航); FAN Yongsheng (范永胜), College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
Source
Journal of Changshu Institute of Technology (《常熟理工学院学报》), 2024, No. 5, pp. 60-66 (7 pages)
Keywords
adversarial examples
ensemble method
modularity