Journal Articles
3 articles found
An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments
Authors: Weizheng Wang, Xiangqi Wang, Xianmin Pan, Xingxing Gong, Jian Liang, Pradip Kumar Sharma, Osama Alfarraj, Wael Said. Computers, Materials & Continua (SCIE, EI), 2023, Issue 9, pp. 3859-3876 (18 pages).
Abstract: Image-denoising techniques are widely used to defend against adversarial examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations. The remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique. The proposed technique combines multiple traditional image-denoising algorithms and Convolutional Neural Network (CNN) structures. The detector integrates the classification results of different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the model's predictions on original examples and on denoised examples, AEs are detected effectively. This technique reduces computational overhead without modifying the model structure or parameters, effectively avoiding the error amplification caused by denoising. Experimental results show outstanding detection performance against mainstream AE attacks, including the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while reducing the accuracy on clean examples by only 4%.
Keywords: deep neural networks; adversarial example; image denoising; adversarial example detection; machine learning; adversarial attack
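The core detection idea in the abstract above — compare a model's prediction on the original input with its predictions on denoised copies, and vote — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the median-filter denoiser and the `classify` callable are invented stand-ins, and a real system would use the paper's combination of traditional denoisers and CNN models.

```python
import numpy as np

def median_denoise(x, k=3):
    """A simple median filter standing in for one traditional denoiser."""
    pad = k // 2
    padded = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def detect_adversarial(x, classify, denoisers):
    """Flag x as adversarial when the predictions on denoised copies
    disagree with the prediction on the original, by majority vote."""
    original = classify(x)
    disagreements = sum(classify(d(x)) != original for d in denoisers)
    return disagreements > len(denoisers) / 2
```

The intuition: small adversarial perturbations are fragile, so denoising flips the predicted class of an AE far more often than that of a clean example, and the vote over several denoisers makes the decision robust to any single denoiser's failure.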
Omni-Detection of Adversarial Examples with Diverse Magnitudes
Authors: Ke Jianpeng, Wang Wenqi, Yang Kang, Wang Lina, Ye Aoshuang, Wang Run. China Communications (SCIE), 2024, Issue 12, pp. 139-151 (13 pages).
Abstract: Deep neural networks (DNNs) are potentially susceptible to adversarial examples, which are maliciously manipulated by adding imperceptible perturbations to legitimate inputs, leading to abnormal model behavior. Plenty of defenses against adversarial examples have been proposed, but most suffer from two weaknesses: 1) lack of generalization and practicality; 2) failure to deal with unknown attacks. To address these issues, we design the adversarial nature eraser (ANE) and the feature map detector (FMD) to detect fragile and high-intensity adversarial examples, respectively. We then apply ensemble learning to compose our detector, handling adversarial examples of diverse magnitudes in a divide-and-conquer manner. Experimental results show that our approach achieves average Area Under the Curve (AUC) scores of 99.30% and 99.62% when tested against various Lp-norm-based attacks on CIFAR-10 and ImageNet, respectively. Furthermore, our approach also shows potential in detecting unknown attacks.
Keywords: adversarial example detection; ensemble learning; feature maps; fragile and high-intensity adversarial examples
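The divide-and-conquer ensemble described above can be sketched schematically: two specialist detectors each score an input, and the ensemble triggers if either specialist does. The sub-detectors here are hypothetical scoring callables, not the paper's ANE and FMD models, and the max-combination rule is one plausible reading of the ensemble step, assumed for illustration.

```python
def ensemble_detect(x, fragile_detector, intense_detector, threshold=0.5):
    """Each specialist returns a score in [0, 1] estimating how likely x
    is adversarial in its own magnitude regime; taking the max lets
    either specialist alone trigger a detection."""
    score = max(fragile_detector(x), intense_detector(x))
    return score >= threshold
```

The design choice mirrors the abstract: fragile (low-magnitude) and high-intensity perturbations leave different traces, so a single detector tuned for one regime misses the other, while the ensemble covers both.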
A Cascade Model-Aware Generative Adversarial Example Detection Method
Authors: Keji Han, Yun Li, Bin Xia. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2021, Issue 6, pp. 800-812 (13 pages).
Abstract: Deep Neural Networks (DNNs) have been demonstrated to be vulnerable to adversarial examples, which are elaborately crafted to fool learning models. Since accuracy and robustness are at odds under adversarial training, adversarial example detection algorithms instead check whether a specific example is adversarial, a promising route to addressing the adversarial-example problem. However, among existing methods, model-aware detection methods do not generalize well, while generative methods achieve lower detection accuracy than model-aware ones. In this paper, we propose a cascade model-aware generative adversarial example detection method, CMAG. CMAG consists of two first-order reconstructors and a second-order reconstructor, which can illustrate what the model sees by reconstructing the logit and the feature maps of the last convolutional layer. Experimental results demonstrate that our method is effective and more interpretable than some state-of-the-art methods.
Keywords: information security; deep neural network (DNN); adversarial example detection
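A reconstruction-based cascade of the kind described above can be sketched as follows. This is a hypothetical outline in the spirit of CMAG, not the paper's method: the reconstructors are passed in as plain callables, the mean-squared reconstruction error and the threshold `tau` are assumed scoring choices, and combining the two first-order errors into the second-order stage is one simplified reading of the cascade.

```python
import numpy as np

def reconstruction_error(features, reconstruct):
    """Mean-squared error between features and their reconstruction."""
    recon = reconstruct(features)
    return float(np.mean((features - recon) ** 2))

def cascade_detect(logits, feat_map, rec_logit, rec_feat, rec_second, tau):
    """First-order reconstructors cover the logit and the last-layer
    feature map; a second-order reconstructor scores the pair of errors.
    Adversarial inputs reconstruct poorly, pushing the score above tau."""
    e1 = reconstruction_error(logits, rec_logit)
    e2 = reconstruction_error(feat_map, rec_feat)
    combined = np.array([e1, e2])
    return reconstruction_error(combined, rec_second) > tau
```

The interpretability claim in the abstract comes from the reconstructions themselves: because the reconstructors regenerate the logit and feature maps, a human can inspect "what the model sees" rather than only a detection score.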