Journal articles: 1 result found
Towards sustainable adversarial training with successive perturbation generation
Authors: Wei LIN, Lichuan LIAO. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, Issue 4, pp. 527-539 (13 pages).
Adversarial training with online-generated adversarial examples has achieved promising performance in defending against adversarial attacks and improving the robustness of convolutional neural network models. However, most existing adversarial training methods are dedicated to finding strong adversarial examples that force the model to learn the adversarial data distribution, which inevitably imposes a large computational overhead and degrades generalization performance on clean data. In this paper, we show that progressively enhancing the adversarial strength of adversarial examples across training epochs can effectively improve model robustness, and that appropriate model shifting can preserve the generalization performance of models at negligible computational cost. To this end, we propose a successive perturbation generation scheme for adversarial training (SPGAT), which progressively strengthens adversarial examples by adding perturbations to the adversarial examples transferred from the previous epoch, and shifts models across epochs to improve the efficiency of adversarial training. The proposed SPGAT is both efficient and effective; e.g., our method takes 900 min of computation versus 4100 min for standard adversarial training, with performance boosts of more than 7% and 3% in adversarial accuracy and clean accuracy, respectively. We extensively evaluate SPGAT on various datasets, including small-scale MNIST, middle-scale CIFAR-10, and large-scale CIFAR-100. The experimental results show that our method is more efficient while performing favorably against state-of-the-art methods.
Keywords: Adversarial training; Adversarial attack; Stochastic weight average; Machine learning; Model generalization
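Based only on the abstract above, a minimal PyTorch-style sketch of the successive-perturbation idea might look as follows. The function name spgat_epoch, the delta_bank buffer, and all hyperparameters are assumptions for illustration; this is not the authors' released implementation.

```python
# Hypothetical sketch: each epoch reuses the perturbation carried over from the
# previous epoch and strengthens it with one gradient-sign step before the
# model update. Assumes a non-shuffled loader so batch index idx maps to the
# same samples every epoch.
import torch
import torch.nn.functional as F

def spgat_epoch(model, loader, optimizer, delta_bank,
                eps=8/255, step_size=2/255, device="cpu"):
    """Run one adversarial-training epoch with successive perturbation reuse."""
    model.train()
    for idx, (x, y) in enumerate(loader):
        x, y = x.to(device), y.to(device)
        # Perturbation transferred from the previous epoch (zeros at epoch 0).
        delta = delta_bank.get(idx, torch.zeros_like(x)).to(device).requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Successive strengthening: one ascent step, projected back into the eps-ball.
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach()
        delta_bank[idx] = delta.cpu()  # carry over to the next epoch
        # Standard adversarial-training update on the strengthened example.
        optimizer.zero_grad()
        F.cross_entropy(model(x + delta), y).backward()
        optimizer.step()
    return delta_bank
```

The "model shifting" and stochastic weight averaging mentioned in the keywords are not reflected in this sketch; an averaged model such as torch.optim.swa_utils.AveragedModel could be maintained alongside the optimizer, but the abstract does not specify the exact schedule.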