Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are vulnerable to adversarial attacks. Such attacks can quickly defeat deep learning models, e.g., various convolutional neural networks (CNNs) used in computer vision tasks ranging from image classification to object detection. Adversarial examples are carefully crafted by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising Sparse Autoencoder Defense, and Conditional Generative Adversarial Network Defense. We show experimentally that our approach outperforms these previous defensive techniques. The proposed CRU-Net model maps adversarial image examples back to clean images by removing the adversarial perturbation; the defense is based on residual and U-Net learning. Extensive experiments on the MNIST and CIFAR-10 datasets demonstrate that the proposed CRU-Net defense model prevents adversarial example attacks in both white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We also report the similarity (SSIM and PSNR) between the original images and the clean images restored by the proposed CRU-Net defense model.
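The abstract includes no implementation details, but the "residual and U-Net learning" idea can be illustrated concretely. Below is a minimal PyTorch sketch of a one-level residual U-Net denoiser in the spirit of CRU-Net; the layer widths, depth, and residual output head are assumptions chosen for illustration, not the paper's actual architecture.

```python
# Minimal sketch of a residual U-Net denoiser (NOT the paper's CRU-Net;
# all layer sizes and the residual head below are assumed).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.ReLU(inplace=True),
    )


class TinyResUNet(nn.Module):
    """One-level U-Net that predicts the adversarial perturbation as a residual."""

    def __init__(self, ch=1, base=32):
        super().__init__()
        self.enc = conv_block(ch, base)                       # encoder
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(base, base * 2)                 # bottleneck
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = conv_block(base * 2, base)                 # decoder
        self.out = nn.Conv2d(base, ch, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))       # skip connection
        return x - self.out(d)                                # subtract predicted noise


# Training would pair adversarial inputs with their clean originals, e.g.:
#   loss = torch.nn.functional.mse_loss(model(x_adv), x_clean)
model = TinyResUNet(ch=1)
restored = model(torch.randn(4, 1, 28, 28))   # MNIST-sized toy batch
```

SSIM and PSNR between `restored` and the clean originals could then be computed with a standard image-quality library.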
Recent years have seen the wide application of natural language processing (NLP) models in crucial areas such as finance, medical treatment, and news media, raising concerns about model robustness and vulnerabilities. We find that the prompt paradigm can probe specific robustness defects of pre-trained language models. Malicious prompt texts are first constructed for the inputs, and a pre-trained language model can then generate adversarial examples for victim models via mask-filling. Experimental results show that the prompt paradigm can efficiently generate more diverse adversarial examples than synonym substitution alone. We then propose a novel robust training approach based on the prompt paradigm, which incorporates prompt texts as alternatives to adversarial examples and enhances robustness under a lightweight minimax-style optimization framework. Experiments on three real-world tasks and two deep neural models show that our approach significantly improves the robustness of models against adversarial attacks.
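As a rough illustration of the mask-filling step described above, the sketch below uses Hugging Face's fill-mask pipeline to propose word-level variants of an input sentence. The BERT checkpoint, the one-word-at-a-time masking strategy, and the omitted victim-model scoring loop are all assumptions, since the abstract gives no implementation details.

```python
# Sketch of mask-filling candidate generation with a pre-trained masked LM
# (the actual prompt templates and attack loop are not given in the abstract).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")  # checkpoint assumed


def mask_fill_variants(text, top_k=5):
    """Mask each word in turn and let the LM propose replacement sentences."""
    words = text.split()
    variants = []
    for i in range(len(words)):
        masked = " ".join(words[:i] + [fill.tokenizer.mask_token] + words[i + 1:])
        for cand in fill(masked, top_k=top_k):
            filled = cand["sequence"]
            if filled.lower() != text.lower():
                variants.append(filled)
    return variants


# Each variant would then be scored against the victim classifier; candidates
# that flip its prediction while remaining fluent count as adversarial.
print(mask_fill_variants("the movie was surprisingly good", top_k=3))
```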
Funding: National Key R&D Program of China (No. 2021AAA0140203), Zhejiang Provincial Key Research and Development Program of China (No. 2021C01164), and the National Natural Science Foundation of China (Nos. 61972384, 62132020, and 62203425).