Journal Articles
7 articles found
1. An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments
Authors: Weizheng Wang, Xiangqi Wang, Xianmin Pan, Xingxing Gong, Jian Liang, Pradip Kumar Sharma, Osama Alfarraj, Wael Said. Computers, Materials & Continua (SCIE, EI), 2023, No. 9, pp. 3859-3876 (18 pages).
Image-denoising techniques are widely used to defend against adversarial examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations; the remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms with Convolutional Neural Network (CNN) structures. The detector integrates the classification results of the different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the model's predictions on original examples and on denoised examples, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, avoiding the error amplification caused by denoising. Experimental results show outstanding detection performance against well-known AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while reducing the accuracy on clean examples by only 4%.
Keywords: deep neural networks; adversarial example; image denoising; adversarial example detection; machine learning; adversarial attack
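The detection idea in this abstract (flag an input when the classifier's decision flips after classical denoising, with several denoisers voting) can be sketched in a few lines. This is an illustrative reading, not the authors' code; the denoiser bank, the voting threshold, and the stand-in model are all assumptions:

```python
# Minimal sketch: detect a suspected adversarial example by checking whether
# a classifier's prediction changes after classical denoising, aggregating
# several denoisers by majority vote.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter, uniform_filter

def detect_adversarial(model, image, voting_threshold=2):
    # Hypothetical denoiser bank; the paper combines several traditional
    # image-denoising algorithms in a similar spirit.
    denoisers = [
        lambda x: gaussian_filter(x, sigma=1.0),
        lambda x: median_filter(x, size=3),
        lambda x: uniform_filter(x, size=3),
    ]
    original_label = model(image)
    # Each denoiser casts one vote: "adversarial" if the label flips.
    votes = sum(model(d(image)) != original_label for d in denoisers)
    return votes >= voting_threshold  # majority of denoisers disagree

# Toy usage with a stand-in "model" (mean-intensity thresholding):
toy_model = lambda img: int(img.mean() > 0.5)
image = np.random.rand(32, 32)
print(detect_adversarial(toy_model, image))
```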
2. Enhancing Healthcare Data Security and Disease Detection Using Crossover-Based Multilayer Perceptron in Smart Healthcare Systems
Authors: Mustufa Haider Abidi, Hisham Alkhalefah, Mohamed K. Aboudaif. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 977-997 (21 pages).
Healthcare data requires accurate disease detection, real-time monitoring, and continual advancements to ensure proper treatment for patients. Consequently, machine learning methods are widely utilized in Smart Healthcare Systems (SHS) to extract valuable features from heterogeneous, high-dimensional healthcare data for predicting various diseases and monitoring patient activities. These methods are deployed in domains that are susceptible to adversarial attacks, which necessitates careful consideration. Hence, this paper proposes a crossover-based Multilayer Perceptron (CMLP) model. Collected samples are pre-processed and fed into the crossover-based multilayer perceptron neural network to detect adversarial attacks on patients' medical records; once an attack is detected, healthcare professionals are promptly alerted to prevent data leakage. The paper uses two datasets, a synthetic dataset and the University of Queensland Vital Signs (UQVS) dataset, from which numerous samples are collected. Experiments evaluate the proposed CMLP model with performance measures such as recall, precision, accuracy, and F1-score for predicting patient activities. Compared with existing approaches, the proposed method achieves the highest accuracy, precision, recall, and F1-score: a precision of 93%, an accuracy of 97%, an F1-score of 92%, and a recall of 92%.
Keywords: smart healthcare systems; multilayer perceptron; cybersecurity; adversarial attack detection; Healthcare 4.0
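The abstract does not spell out the crossover mechanism, so the sketch below assumes the usual reading: genetic-algorithm-style single-point crossover applied to flattened MLP weight vectors, with detection accuracy as the fitness function. All names and hyperparameters are hypothetical, and mutation is omitted for brevity:

```python
# Sketch: evolve MLP weights via single-point crossover; fitness is accuracy
# at labeling records as tampered (1) or benign (0).
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(weights, X, hidden=8):
    # Unpack a flat weight vector into one hidden layer plus an output layer.
    n_in = X.shape[1]
    W1 = weights[: n_in * hidden].reshape(n_in, hidden)
    W2 = weights[n_in * hidden:].reshape(hidden, 1)
    return (np.tanh(X @ W1) @ W2).ravel() > 0  # binary decision

def fitness(weights, X, y):
    return np.mean(mlp_forward(weights, X) == y)

def crossover(parent_a, parent_b):
    # Child inherits a prefix from one parent, the remainder from the other.
    point = rng.integers(1, len(parent_a))
    return np.concatenate([parent_a[:point], parent_b[point:]])

# Toy data: 4 "vital sign" features, binary label (1 = tampered record).
X = rng.normal(size=(200, 4))
y = (X.sum(axis=1) > 0).astype(int)
dim = 4 * 8 + 8
population = [rng.normal(size=dim) for _ in range(20)]
for _ in range(50):  # keep the fittest half, refill via crossover
    population.sort(key=lambda w: fitness(w, X, y), reverse=True)
    parents = population[:10]
    children = [crossover(parents[rng.integers(10)], parents[rng.integers(10)])
                for _ in range(10)]
    population = parents + children
best = max(population, key=lambda w: fitness(w, X, y))
print("best detection accuracy:", fitness(best, X, y))
```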
3. VeriFace: Defending against Adversarial Attacks in Face Verification Systems
Authors: Awny Sayed, Sohair Kinlany, Alaa Zaki, Ahmed Mahfouz. Computers, Materials & Continua (SCIE, EI), 2023, No. 9, pp. 3151-3166 (16 pages).
Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, identifies whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, removes these perturbations from the input image so that the face verification system can accurately recognize the person in it. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector can identify adversarial images with a detection accuracy of 100%. Additionally, our proposed VeriFace adversarial removal method achieves a significantly lower attack success rate of 6.5% compared to state-of-the-art removal methods.
Keywords: adversarial attacks; face verification; adversarial detection; perturbation removal
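Structurally, VeriFace is a detect-then-remove pipeline placed in front of an unchanged verifier. A schematic sketch, with placeholder detector, remover, and verifier standing in for the paper's trained networks:

```python
# Schematic two-stage defense: detect adversarial input first, strip the
# perturbations second, then run the ordinary face verification step.
import numpy as np
from scipy.ndimage import gaussian_filter

def verify_with_defense(image, detector, remover, verifier):
    """detector: image -> bool; remover: image -> image; verifier: image -> id."""
    if detector(image):                 # stage 1: adversarial detection
        image = remover(image)          # stage 2: perturbation removal
    return verifier(image)              # downstream face verification

# Placeholder components for illustration only.
detector = lambda img: np.std(np.diff(img, axis=0)) > 0.3  # crude noisiness test
remover = lambda img: gaussian_filter(img, sigma=1.0)      # crude smoothing
verifier = lambda img: "person_0" if img.mean() > 0.5 else "unknown"

print(verify_with_defense(np.random.rand(112, 112), detector, remover, verifier))
```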
4. Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients
Authors: Cheng-Cheng Ma, Bao-Yuan Wu, Yan-Bo Fan, Yong Zhang, Zhi-Feng Li. Machine Intelligence Research (EI, CSCD), 2023, No. 5, pp. 666-682 (17 pages).
Adversarial examples are well known as a serious threat to deep neural networks (DNNs). In this work, we study the detection of adversarial examples based on the assumption that the output and internal responses of one DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance). GGD is a general distribution family that covers many popular distributions (e.g., Laplacian, Gaussian, or uniform), so it is more likely to approximate the intrinsic distributions of internal responses than any specific distribution. Moreover, since the shape factor is more robust across databases than the other two parameters, we propose to construct discriminative features for adversarial detection from the shape factor, using the magnitude of Benford-Fourier (MBF) coefficients, which can be easily estimated from the responses. Finally, a support vector machine is trained as an adversarial detector on the MBF features. Extensive image-classification experiments demonstrate that the proposed detector is much more effective and robust in detecting adversarial examples of different crafting methods and sources than state-of-the-art adversarial detection methods.
Keywords: adversarial defense; adversarial detection; generalized Gaussian distribution; Benford-Fourier coefficients; image classification
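The MBF feature is easy to state: the n-th generalized Benford-Fourier coefficient of a set of responses r is E[exp(i·2πn·log10|r|)], and its magnitude serves as a scale-robust feature. A minimal sketch, assuming this standard definition and a toy benign/adversarial split (Gaussian vs. uniform responses, i.e., two different GGD shape factors); the paper's exact feature construction may differ:

```python
# Sketch: MBF features from model responses, fed to an SVM detector.
import numpy as np
from sklearn.svm import SVC

def mbf_features(responses, orders=range(1, 6), eps=1e-12):
    # Magnitude of the n-th Benford-Fourier coefficient, n in `orders`.
    log_r = np.log10(np.abs(responses.ravel()) + eps)
    return np.array([np.abs(np.mean(np.exp(2j * np.pi * n * log_r)))
                     for n in orders])

# Toy setup: benign responses ~ Gaussian, "adversarial" responses ~ uniform
# (two members of the generalized Gaussian family with different shape factors).
rng = np.random.default_rng(0)
X = np.stack([mbf_features(rng.normal(size=512)) for _ in range(100)] +
             [mbf_features(rng.uniform(-1, 1, size=512)) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)

detector = SVC(kernel="rbf").fit(X, y)   # SVM detector on MBF features
print(detector.score(X, y))
```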
5. Generative adversarial network based novelty detection using minimized reconstruction error (cited 3 times)
Authors: Huan-gang Wang, Xin Li, Tao Zhang. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2018, No. 1, pp. 116-125 (10 pages).
The generative adversarial network (GAN) is among the most exciting machine learning breakthroughs of recent years; it trains the learning model by finding the Nash equilibrium of a two-player zero-sum game. A GAN is composed of a generator and a discriminator, both trained with the adversarial learning mechanism. In this paper, we introduce and investigate the use of GANs for novelty detection. In training, the GAN learns from ordinary data. Then, on previously unseen data, the generator and the discriminator, with suitably designed decision boundaries, can both be used to separate novel patterns from ordinary patterns. The proposed GAN-based novelty detection method demonstrates competitive performance on the MNIST digit database and the Tennessee Eastman (TE) benchmark process compared with PCA-based novelty detection methods using Hotelling's T^2 and squared prediction error statistics.
Keywords: generative adversarial network (GAN); novelty detection; Tennessee Eastman (TE) process
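The novelty score named in the title is the minimized reconstruction error: search the generator's latent space for the code whose output best matches the test sample, and use the residual as the score. A compact sketch with a toy linear "generator" standing in for a trained GAN generator:

```python
# Sketch: score a sample by the smallest achievable reconstruction error
# under the generator; off-manifold (novel) samples score high.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 8))             # toy "generator": G(z) = A @ z
G = lambda z: A @ z

def novelty_score(x, steps=500, lr=0.002):
    z = np.zeros(8)
    for _ in range(steps):                # gradient descent on ||G(z) - x||^2
        z -= lr * (2 * A.T @ (G(z) - x))
    return np.linalg.norm(G(z) - x)       # minimized reconstruction error

ordinary = A @ rng.normal(size=8)         # lies on the generator's manifold
novel = rng.normal(size=64)               # off-manifold sample
print(novelty_score(ordinary), "<", novelty_score(novel))
```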
6. A Cascade Model-Aware Generative Adversarial Example Detection Method
Authors: Keji Han, Yun Li, Bin Xia. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2021, No. 6, pp. 800-812 (13 pages).
Deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial examples, which are elaborately crafted to fool learning models. Since accuracy and robustness are at odds under adversarial training, adversarial example detection algorithms instead check whether a specific example is adversarial, a promising route to resolving the adversarial example problem. However, among existing methods, model-aware detection methods do not generalize well, while the detection accuracies of generative-based methods are lower than those of model-aware methods. In this paper, we propose a cascade model-aware generative adversarial example detection method, namely CMAG. CMAG consists of two first-order reconstructors and a second-order reconstructor, which can show a human what the model sees by reconstructing the logit and the feature maps of the last convolution layer. Experimental results demonstrate that our method is effective and more interpretable than some state-of-the-art methods.
Keywords: information security; deep neural network (DNN); adversarial example detection
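A rough structural sketch of the cascade idea: fit one reconstructor each for two internal signals (stand-ins for the logits and the last convolution layer's feature map) on benign data, then score an input by its combined reconstruction error. The rank-limited PCA "reconstructors" and the sum-based second stage below are simplifying assumptions, not the paper's trained generative components:

```python
# Sketch: cascade reconstruction-error scoring over two internal signals.
import numpy as np

rng = np.random.default_rng(0)

def make_reconstructor(data):
    # Fit a low-rank "autoencoder" via PCA: project onto top components.
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    basis = vt[:4]                        # keep 4 principal directions
    return lambda x: mean + (x - mean) @ basis.T @ basis

# Benign signals live on low-dimensional manifolds in this toy setup.
benign_logits = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 10))
benign_feats = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 32))
rec_logit = make_reconstructor(benign_logits)   # first-order reconstructor 1
rec_feat = make_reconstructor(benign_feats)     # first-order reconstructor 2

def detection_score(logits, feats):
    e1 = np.linalg.norm(logits - rec_logit(logits))
    e2 = np.linalg.norm(feats - rec_feat(feats))
    return e1 + e2   # second-order stage reduced to a sum for brevity

adv_logits, adv_feats = rng.normal(size=10), rng.normal(size=32)  # off-manifold
print(detection_score(benign_logits[0], benign_feats[0]), "<",
      detection_score(adv_logits, adv_feats))
```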
7. An end-to-end convolutional network for joint detecting and denoising adversarial perturbations in vehicle classification
Authors: Peng Liu, Huiyuan Fu, Huadong Ma. Computational Visual Media (EI, CSCD), 2021, No. 2, pp. 217-227 (11 pages).
Deep convolutional neural networks (DCNNs) have been widely deployed in real-world scenarios. However, DCNNs are easily tricked by adversarial examples, which presents challenges for critical applications such as vehicle classification. To address this problem, we propose a novel end-to-end convolutional network for joint detection and removal of adversarial perturbations by denoising (DDAP). It removes adversarial perturbations using the DDAP denoiser, based on adversarial examples discovered by the DDAP detector. The proposed method can be regarded as a pre-processing step: it does not require modifying the structure of the vehicle classification model and hardly affects the classification results on clean images. We consider four kinds of adversarial attack (FGSM, BIM, DeepFool, PGD) to verify DDAP's capabilities when trained on BIT-Vehicle and other public datasets. It provides better defense than other state-of-the-art defensive methods.
Keywords: adversarial defense; adversarial detection; vehicle classification; deep learning
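Because DDAP is a pre-processing step, it can be pictured as a wrapper that leaves the classifier untouched and denoises only the inputs its detector flags. A structural sketch with placeholder components (the real detector and denoiser are jointly trained convolutional networks, not the stand-ins below):

```python
# Sketch: DDAP-style wrapper; clean images reach the unmodified classifier.
import numpy as np
from scipy.ndimage import median_filter

class DDAPWrapper:
    def __init__(self, classifier, detector, denoiser):
        self.classifier = classifier   # unmodified vehicle classifier
        self.detector = detector       # image -> bool (perturbed?)
        self.denoiser = denoiser       # image -> cleaned image

    def __call__(self, image):
        if self.detector(image):               # denoise only flagged inputs,
            image = self.denoiser(image)       # leaving clean images intact
        return self.classifier(image)

# Illustration with trivial stand-ins.
wrapped = DDAPWrapper(
    classifier=lambda img: "truck" if img.mean() > 0.5 else "sedan",
    detector=lambda img: np.abs(np.diff(img, axis=1)).mean() > 0.2,
    denoiser=lambda img: median_filter(img, size=3),
)
print(wrapped(np.random.rand(64, 64)))
```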