Funding: Supported in part by the Natural Science Foundation of Hunan Province under Grant Nos. 2023JJ30316 and 2022JJ2029; in part by a project supported by the Scientific Research Fund of Hunan Provincial Education Department under Grant No. 22A0686; in part by the National Natural Science Foundation of China under Grant No. 62172058; and by the Researchers Supporting Project (No. RSP2023R102), King Saud University, Riyadh, Saudi Arabia.
Abstract: Image-denoising techniques are widely used to defend against adversarial examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations. The remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms with convolutional neural network (CNN) architectures. The detector takes the classification results of the different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the model's predictions on original examples and on denoised examples, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, thereby avoiding the error amplification caused by denoising. Experimental results show strong detection performance against well-known AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate for FGSM detection while reducing the accuracy on clean examples by only 4%.
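To make the detection idea concrete, here is a minimal sketch (not the paper's implementation) of the denoise-and-compare scheme: the input is passed through a few traditional denoisers, and it is flagged as adversarial when the classifier's predictions on the denoised copies disagree with its prediction on the raw image. The `model_predict` callable, the choice of denoisers, and the voting threshold are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def denoisers(image):
    """A simplified stand-in for the paper's ensemble of
    traditional image-denoising algorithms."""
    return [
        gaussian_filter(image, sigma=1.0),
        median_filter(image, size=3),
        gaussian_filter(image, sigma=0.5),
    ]

def detect_adversarial(model_predict, image, vote_threshold=2):
    """Flag an input as adversarial when the prediction on the raw
    image disagrees with enough predictions on denoised copies.
    `model_predict` maps an image array to a class label and is
    assumed to be supplied by the caller."""
    raw_label = model_predict(image)
    denoised_labels = [model_predict(d) for d in denoisers(image)]
    disagreements = sum(lbl != raw_label for lbl in denoised_labels)
    return disagreements >= vote_threshold  # True -> likely adversarial
```

Because the classifier itself is untouched, such a check can be bolted onto an existing model as a pure pre-prediction step, which is what lets the scheme avoid retraining or architectural changes.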
Funding: Funded by King Saud University through the Researchers Supporting Program, Number RSP2024R499.
Abstract: Healthcare data require accurate disease detection, real-time monitoring, and continual advancements to ensure proper treatment for patients. Consequently, machine learning methods are widely utilized in Smart Healthcare Systems (SHS) to extract valuable features from heterogeneous, high-dimensional healthcare data for predicting various diseases and monitoring patient activities. These methods are deployed in domains that are susceptible to adversarial attacks and therefore require careful consideration. Hence, this paper proposes a crossover-based multilayer perceptron (CMLP) model. Collected samples are pre-processed and fed into the crossover-based multilayer perceptron neural network to detect adversarial attacks on patients' medical records. Once an attack is detected, healthcare professionals are promptly alerted to prevent data leakage. The paper uses two datasets, a synthetic dataset and the University of Queensland Vital Signs (UQVS) dataset, from which numerous samples are collected. Experiments evaluate the proposed CMLP model on predicting patient activities using Recall, Precision, Accuracy, and F1-score. Compared with existing approaches, the proposed method achieves the highest scores on all four measures: a precision of 93%, an accuracy of 97%, an F1-score of 92%, and a recall of 92%.
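The abstract does not spell out how the crossover operator is applied inside CMLP. As a hedged illustration of one plausible reading of "crossover-based", the sketch below shows standard genetic-algorithm-style single-point crossover on flattened MLP weight vectors; all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossover(parent_a, parent_b):
    """Single-point crossover of two flattened MLP weight vectors:
    a cut point is chosen uniformly at random, and each child takes
    one segment from each parent."""
    assert parent_a.shape == parent_b.shape
    cut = rng.integers(1, parent_a.size)
    child_a = np.concatenate([parent_a[:cut], parent_b[cut:]])
    child_b = np.concatenate([parent_b[:cut], parent_a[cut:]])
    return child_a, child_b

# Example: recombine two candidate weight vectors of a small MLP.
w1 = rng.normal(size=64)
w2 = rng.normal(size=64)
c1, c2 = crossover(w1, w2)
```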
Funding: Funded by Institutional Fund Projects under Grant No. IFPIP:329-611-1443, with technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, identifies whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, removes these perturbations from the input image so that the face verification system can accurately recognize the person in the image. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector identifies adversarial images with a detection accuracy of 100%. Additionally, our VeriFace adversarial removal method yields a significantly lower attack success rate of 6.5% compared to state-of-the-art removal methods.
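As a small illustration of how a figure like the reported 6.5% could be measured, the hedged sketch below computes the attack success rate of adversarial face pairs after perturbation removal; `verify`, `remove`, and the pair format are placeholders for illustration, not the VeriFace API.

```python
def attack_success_rate(verify, remove, adversarial_pairs):
    """Fraction of adversarial face pairs whose verification outcome
    is still wrong after perturbation removal (lower is better).
    Each pair is (image_a, image_b, same_person), where `verify`
    returns a same-person boolean for two face images."""
    fooled = sum(verify(remove(a), b) != same_person
                 for a, b, same_person in adversarial_pairs)
    return fooled / len(adversarial_pairs)
```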
Funding: Supported by the Natural Science Foundation of China (No. 62076213); the Shenzhen Science and Technology Program, China (No. RCYX20210609103057050); the University Development Fund of The Chinese University of Hong Kong, Shenzhen, China (No. 01001810); and the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen, China.
Abstract: Adversarial examples are well known as a serious threat to deep neural networks (DNNs). In this work, we study the detection of adversarial examples based on the assumption that the output and internal responses of a DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance). GGD is a general distribution family that covers many popular distributions (e.g., Laplacian, Gaussian, and uniform), so it is more likely to approximate the intrinsic distributions of internal responses than any specific distribution. Moreover, since the shape factor is more robust across databases than the other two parameters, we propose to construct discriminative features from the shape factor for adversarial detection, employing the magnitude of Benford-Fourier (MBF) coefficients, which can be easily estimated from the responses. Finally, a support vector machine is trained as an adversarial detector on the MBF features. Extensive experiments on image classification demonstrate that the proposed detector is considerably more effective and robust at detecting adversarial examples of different crafting methods and sources than state-of-the-art adversarial detection methods.
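In the Benford's-law literature, the n-th Benford-Fourier coefficient of a variable x is E[exp(2πin·log10|x|)], so its magnitude can be estimated empirically from samples. The sketch below is an illustrative reading of the pipeline rather than the authors' code: it builds MBF features for several orders n from a layer's responses and trains a scikit-learn SVM on benign versus adversarial feature vectors.

```python
import numpy as np
from sklearn.svm import SVC

def mbf_features(responses, orders=range(1, 6)):
    """Empirical magnitudes of Benford-Fourier coefficients,
    |E[exp(2*pi*i*n*log10|x|)]|, for several orders n. Only
    nonzero responses are used so log10 stays finite."""
    x = np.abs(responses[responses != 0])
    log_mantissa = np.log10(x)
    return np.array([np.abs(np.mean(np.exp(2j * np.pi * n * log_mantissa)))
                     for n in orders])

def train_detector(benign_feats, adv_feats):
    """Fit an SVM on MBF feature vectors collected beforehand from
    benign (label 0) and adversarial (label 1) responses."""
    X = np.vstack([benign_feats, adv_feats])
    y = np.concatenate([np.zeros(len(benign_feats)),
                        np.ones(len(adv_feats))])
    return SVC(kernel="rbf").fit(X, y)
```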
Abstract: The generative adversarial network (GAN) is one of the most exciting machine learning breakthroughs of recent years; it trains a learning model by finding the Nash equilibrium of a two-player zero-sum game. A GAN is composed of a generator and a discriminator, both trained with the adversarial learning mechanism. In this paper, we introduce and investigate the use of GANs for novelty detection. In training, the GAN learns from ordinary data. Then, on previously unseen data, both the generator and the discriminator, equipped with the designed decision boundaries, can be used to separate novel patterns from ordinary patterns. The proposed GAN-based novelty detection method achieves performance competitive with PCA-based novelty detection using Hotelling's T² and squared prediction error statistics on the MNIST digit database and the Tennessee Eastman (TE) benchmark process.
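A minimal sketch of the discriminator-side detector, assuming a simple PyTorch setup: after the GAN has been trained on ordinary data, the discriminator's output probability is thresholded to flag novel samples. The architecture and threshold are illustrative assumptions, not the paper's designed decision boundaries.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Small discriminator; after GAN training on ordinary data,
    its output can serve as a novelty score."""
    def __init__(self, dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def is_novel(disc, x, threshold=0.5):
    """Flag a single sample as novel when the discriminator assigns
    it a low probability of being 'ordinary'. The threshold would
    be calibrated on held-out ordinary data."""
    return disc(x).item() < threshold
```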
Funding: Supported by the National Natural Science Foundation of China (Nos. 61603197, 61772284, and 61876091).
Abstract: Deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial examples, which are elaborately crafted to fool learning models. Since accuracy and robustness are at odds under adversarial training, adversarial example detection algorithms, which check whether a specific example is adversarial, are a promising way to address the adversarial example problem. However, among existing methods, model-aware detection methods do not generalize well, while generative-based methods have lower detection accuracy than model-aware methods. In this paper, we propose a cascade model-aware generative adversarial example detection method, namely CMAG. CMAG consists of two first-order reconstructors and a second-order reconstructor, which can show a human what the model sees by reconstructing the logits and the feature maps of the last convolutional layer. Experimental results demonstrate that our method is effective and more interpretable than several state-of-the-art methods.
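To convey the reconstruction idea, here is a hedged sketch that collapses CMAG's cascade into a single logit reconstructor trained on benign data: inputs whose logits reconstruct poorly are flagged as adversarial. The architecture and thresholding are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class LogitReconstructor(nn.Module):
    """A single reconstructor standing in for CMAG's cascade of
    first- and second-order reconstructors: it learns to rebuild
    the classifier's logits from themselves on benign data."""
    def __init__(self, num_classes=10, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_classes, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, num_classes)

    def forward(self, logits):
        return self.decoder(self.encoder(logits))

@torch.no_grad()
def is_adversarial(reconstructor, logits, threshold):
    """Large reconstruction error on the logits suggests an input
    off the benign manifold; the threshold would be calibrated on
    held-out clean data."""
    error = torch.norm(reconstructor(logits) - logits)
    return error.item() > threshold
```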
Funding: Supported in part by the National Natural Science Foundation of China (61872047, 61720106007); the National Key R&D Program of China (2017YFB1003000); the Beijing Nova Program (Z201100006820124); the Beijing Natural Science Foundation (L191004); and the 111 Project (B18008).
Abstract: Deep convolutional neural networks (DCNNs) have been widely deployed in real-world scenarios. However, DCNNs are easily tricked by adversarial examples, which presents challenges for critical applications such as vehicle classification. To address this problem, we propose a novel end-to-end convolutional network for joint detection and removal of adversarial perturbations by denoising (DDAP). It removes adversarial perturbations using the DDAP denoiser, based on adversarial examples discovered by the DDAP detector. The proposed method can be regarded as a pre-processing step: it does not require modifying the structure of the vehicle classification model and hardly affects classification results on clean images. We consider four kinds of adversarial attacks (FGSM, BIM, DeepFool, and PGD) to verify DDAP's capabilities when trained on BIT-Vehicle and other public datasets. DDAP provides better defense than other state-of-the-art defensive methods.
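A minimal sketch of the detect-then-denoise pre-processing pattern (not the DDAP architecture): a tiny residual CNN estimates the perturbation and subtracts it, and the denoiser is applied only to inputs the detector flags, so clean images pass through unchanged and clean accuracy is largely preserved.

```python
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Tiny residual denoiser sketch: it predicts the perturbation
    and subtracts it from the input image tensor."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1))

    def forward(self, x):
        return x - self.body(x)  # subtract the estimated perturbation

def defend(image, detector, denoiser, classifier):
    """End-to-end pre-processing step: denoise only when the
    detector flags the input, then classify. `detector` and
    `classifier` are callables assumed to be supplied by the caller."""
    if detector(image):
        image = denoiser(image)
    return classifier(image)
```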