Boosting Adversarial Training with Learnable Distribution
Authors: Kai Chen, Jinwei Wang, James Msughter Adeke, Guangjie Liu, Yuewei Dai
Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 3247–3265 (19 pages)
In recent years, various adversarial defense methods have been proposed to improve the robustness of deep neural networks. Adversarial training is one of the most potent methods for defending against adversarial attacks. However, the difference in feature space between natural and adversarial examples hinders the accuracy and robustness of the model during adversarial training. This paper proposes a learnable distribution adversarial training method that constructs a shared distribution for the training data using a Gaussian mixture model. A distribution centroid is built to classify samples and constrain the distribution of sample features. Natural and adversarial examples are pushed toward the same distribution centroid to improve both the accuracy and the robustness of the model. The proposed method generates adversarial examples that close the distribution gap between natural and adversarial examples through an attack algorithm explicitly designed for adversarial training; this algorithm gradually improves the accuracy and robustness of the model by scaling the perturbation. Finally, the method outputs the predicted labels and the distance between each sample and the distribution centroid. These distribution characteristics can be used to detect adversarial examples that might otherwise evade the model's defense. The effectiveness of the proposed method is demonstrated through comprehensive experiments.
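The abstract's core idea, pulling natural and adversarial features toward a shared per-class centroid and flagging inputs that land far from every centroid, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names (`centroid_distances`, `centroid_loss`, `is_suspicious`), the plain Euclidean distance, and the fixed (rather than learned) centroids are all simplifying assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch: one distribution centroid per class in feature space.
# Natural AND adversarial features of a class are both pulled toward the
# shared centroid; a large nearest-centroid distance flags a suspect input.

rng = np.random.default_rng(0)
num_classes, feat_dim = 3, 4
centroids = rng.normal(size=(num_classes, feat_dim))  # learnable in the paper

def centroid_distances(features, centroids):
    """Euclidean distance from each feature vector to every class centroid."""
    diff = features[:, None, :] - centroids[None, :, :]
    return np.linalg.norm(diff, axis=-1)  # shape (batch, num_classes)

def centroid_loss(nat_feats, adv_feats, labels, centroids):
    """Mean distance of natural and adversarial features to their class centroid."""
    d_nat = centroid_distances(nat_feats, centroids)
    d_adv = centroid_distances(adv_feats, centroids)
    idx = np.arange(len(labels))
    return float(np.mean(d_nat[idx, labels] + d_adv[idx, labels]))

def is_suspicious(features, centroids, threshold):
    """Flag inputs whose nearest-centroid distance exceeds a threshold."""
    return centroid_distances(features, centroids).min(axis=1) > threshold
```

In training, minimizing `centroid_loss` on both the natural batch and its adversarial counterpart is what pushes the two distributions together; at inference, the same centroid distances double as the detection signal described in the abstract.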
Keywords: adversarial training; feature space; learnable distribution; distribution centroid