Funding: This work is supported by the NSFC [Grant Nos. 61772281, 61703212], the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), and the Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology (CICAEET).
Abstract: Adversarial examples are a hot topic in the field of deep learning security. The features, generation methods, and attack and defense techniques of adversarial examples are the focus of current research. This article explains the key technologies and theories of adversarial examples, covering the concept of adversarial examples, how they arise, and the methods by which adversarial attacks are mounted, and it lists the possible causes of adversarial examples. It also analyzes several typical generation methods in detail: Limited-memory BFGS (L-BFGS), the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), the Iterative Least-Likely Class Method (LLC), etc. Furthermore, from the perspective of the attack methods and the causes of adversarial examples, the main defense techniques are surveyed: preprocessing, regularization and adversarial training, defensive distillation, etc., and the application scenarios and shortcomings of the different defenses are pointed out. The article further discusses the applications of adversarial examples, which currently center on adversarial evaluation and adversarial training. Finally, the overall research directions for adversarial examples are discussed; completely solving the adversarial attack problem still leaves many practical and theoretical questions open. Characterizing adversarial examples, giving a mathematical description of them and of their practical application prospects, and exploring universal generation methods and the mechanism behind adversarial examples are the main future research directions.
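To make the generation methods named above concrete, the sketch below shows the two simplest of them, FGSM and its iterative extension BIM, in PyTorch. The classifier `model`, the [0, 1] pixel range, and the `eps`/`alpha` step sizes are illustrative assumptions, not details taken from the surveyed papers.

```python
# Minimal FGSM / BIM sketch. `model`, the [0, 1] pixel range, and the
# eps/alpha values are assumptions made for illustration only.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: x_adv = clip(x + eps * sign(grad_x loss))."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def bim(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """BIM: repeated small FGSM steps, projected back into an eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()     # small gradient-sign step
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                     # keep pixels valid
        x_adv = x_adv.detach()
    return x_adv
```

BIM differs from FGSM only in taking several small steps with a projection after each one, which is why the survey treats it as an iterative refinement of the same idea.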
Abstract: Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are at risk from adversarial attacks, which can quickly defeat deep learning models, e.g., the various convolutional neural networks (CNNs) used in computer vision tasks from image classification to object detection. Adversarial examples are carefully crafted by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as the MagNet defense, Generative Adversarial Network defense, Deep Regret Analytic Generative Adversarial Networks defense, Deep Denoising Sparse Autoencoder defense, and Conditional Generative Adversarial Network defense. We show experimentally that our approach outperforms these previous defensive techniques. The proposed CRU-Net model maps adversarial image examples back to clean images by removing the adversarial perturbation; the defense is based on residual and U-Net learning. Extensive experiments on the MNIST and CIFAR-10 datasets show that the proposed CRU-Net defense model prevents adversarial example attacks in both white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We also report the similarity (SSIM and PSNR) between the original images and the clean images restored by the proposed CRU-Net defense model.
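For readers unfamiliar with residual/U-Net denoising defenses of this kind, the sketch below shows the general idea in PyTorch: a small encoder-decoder with a skip connection predicts the adversarial perturbation, which is then subtracted from the input. The two-level structure and layer sizes are illustrative assumptions; this is not the authors' CRU-Net architecture.

```python
# Hedged sketch of a residual U-Net denoiser in the spirit of such defenses.
# Layer sizes are illustrative; assumes even spatial dimensions (e.g., 32x32).
import torch
import torch.nn as nn

class TinyResidualUNet(nn.Module):
    def __init__(self, ch=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, ch, 3, padding=1))

    def forward(self, x_adv):
        e1 = self.enc1(x_adv)               # full-resolution features
        e2 = self.enc2(e1)                  # downsampled features
        d = torch.cat([self.up(e2), e1], 1) # upsample + U-Net skip connection
        noise = self.dec(d)                 # predicted adversarial perturbation
        return (x_adv - noise).clamp(0, 1)  # residual denoising step

# usage: denoise a stand-in batch of CIFAR-10-sized adversarial images
net = TinyResidualUNet(ch=3)
x_adv = torch.rand(8, 3, 32, 32)
x_restored = net(x_adv)                     # shape (8, 3, 32, 32)
```

The residual formulation, predicting the perturbation rather than the clean image directly, is what lets such a denoiser act as a preprocessing defense in front of an unmodified classifier.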