Journal articles — 2 results found
1. A Gaussian Noise-Based Algorithm for Enhancing Backdoor Attacks
Authors: Hong Huang, Yunfei Wang, Guotao Yuan, Xin Li, et al. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 361–387 (27 pages)
Deep Neural Networks (DNNs) are integral to various aspects of modern life, enhancing work efficiency. Nonetheless, their susceptibility to diverse attack methods, including backdoor attacks, raises security concerns. We aim to investigate backdoor attack methods for image categorization tasks, to promote the development of DNNs towards higher security. Research on backdoor attacks currently faces significant challenges: the distinct and abnormal data patterns of malicious samples and the meticulous data screening by developers hinder practical attack implementation. To overcome these challenges, this study proposes a Gaussian Noise-Targeted Universal Adversarial Perturbation (GN-TUAP) algorithm. This approach restricts the direction of perturbations and normalizes abnormal pixel values, ensuring that perturbations progress as much as possible in a direction perpendicular to the decision hyperplane in linear problems. This limits anomalies within the perturbations, improves their visual stealthiness, and makes them more challenging for defense methods to detect. To verify the effectiveness, stealthiness, and robustness of GN-TUAP, we propose a comprehensive threat model. Based on this model, extensive experiments were conducted on the CIFAR-10, CIFAR-100, GTSRB, and MNIST datasets, comparing our method with existing state-of-the-art attack methods. We also tested our perturbation triggers against various defense methods and further experimented on the robustness of the triggers against noise-filtering techniques. The experimental outcomes demonstrate that backdoor attacks leveraging perturbations generated by our algorithm exhibit cross-model attack effectiveness and superior stealthiness. Furthermore, they possess robust anti-detection capabilities and maintain commendable performance when subjected to noise-filtering methods.
Keywords: image classification model; backdoor attack; Gaussian distribution; artificial intelligence (AI) security
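The abstract above describes GN-TUAP only at a high level. As a rough, hypothetical illustration of the general recipe it builds on (a noise-like universal trigger blended into a small fraction of relabeled training samples), the Python sketch below samples a Gaussian trigger, clips it to a small per-pixel budget so no abnormal pixel values remain, and poisons part of a CIFAR-style dataset. It is not the paper's GN-TUAP optimization; every function name and parameter value here is an assumption for illustration only.

```python
# Hypothetical sketch, NOT the paper's GN-TUAP algorithm: a Gaussian-noise
# trigger is sampled, clipped to a small per-pixel budget (so abnormal pixel
# values are normalized away), and blended into a small fraction of training
# images whose labels are flipped to the attacker's target class.
import numpy as np

def make_gaussian_trigger(shape, epsilon=8 / 255, seed=0):
    """Sample a Gaussian perturbation and clip it to +/- epsilon per pixel."""
    rng = np.random.default_rng(seed)
    trigger = rng.normal(loc=0.0, scale=epsilon / 2, size=shape)
    return np.clip(trigger, -epsilon, epsilon)

def poison_dataset(images, labels, trigger, target_label, poison_rate=0.05, seed=0):
    """Add the trigger to a random subset of images and relabel that subset."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(len(images) * poison_rate), replace=False)
    images[idx] = np.clip(images[idx] + trigger, 0.0, 1.0)  # keep valid pixel range
    labels[idx] = target_label
    return images, labels, idx

if __name__ == "__main__":
    # Toy data standing in for CIFAR-10-style images scaled to [0, 1].
    x = np.random.rand(1000, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    trig = make_gaussian_trigger(x.shape[1:])
    x_poisoned, y_poisoned, poisoned_idx = poison_dataset(x, y, trig, target_label=0)
    print(f"poisoned {len(poisoned_idx)} of {len(x)} samples")
```

The paper's method additionally constrains the perturbation direction (as close as possible to perpendicular to the decision hyperplane); the sketch mirrors only the Gaussian sampling, pixel-value normalization, and poisoning steps.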
2. A backdoor attack against quantum neural networks with limited information
Authors: Huang Chenyi (黄晨猗), Zhang Shibin (张仕斌). Chinese Physics B (SCIE, EI, CAS, CSCD), 2023, No. 10, pp. 219–228 (10 pages)
Backdoor attacks are emerging security threats to deep neural networks. In these attacks, adversaries manipulate the network by constructing training samples embedded with backdoor triggers. The backdoored model performs as expected on clean test samples but consistently misclassifies samples containing the backdoor trigger as a specific target label. While quantum neural networks (QNNs) have shown promise in surpassing their classical counterparts in certain machine learning tasks, they are also susceptible to backdoor attacks. However, current attacks on QNNs are constrained by the adversary's understanding of the model structure and specific encoding methods. Given the diversity of encoding methods and model structures in QNNs, the effectiveness of such backdoor attacks remains uncertain. In this paper, we propose an algorithm that leverages dataset-based optimization to initiate backdoor attacks. A malicious adversary can embed backdoor triggers into a QNN model by poisoning only a small portion of the data. The victim QNN maintains high accuracy on clean test samples without the trigger but outputs the target label set by the adversary when predicting samples with the trigger. Furthermore, our proposed attack cannot be easily resisted by existing backdoor detection methods.
Keywords: backdoor attack; quantum artificial intelligence security; quantum neural network; variational quantum circuit
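Because the attack above assumes an adversary with limited information (no knowledge of the QNN's structure or encoding), it works purely by editing the training data. The sketch below shows a generic data-poisoning step in that spirit: a small patch trigger is stamped into a fraction of samples, which are relabeled to the target class. It is not the paper's dataset-based optimization procedure; all names and values are illustrative assumptions.

```python
# Hypothetical, model-agnostic poisoning sketch (NOT the paper's dataset-based
# optimization): the adversary never touches the QNN or its encoding, only the
# training data, by stamping a small patch trigger and flipping labels.
import numpy as np

def stamp_patch(image, value=1.0, size=2):
    """Place a bright square patch in the bottom-right corner as the trigger."""
    patched = image.copy()
    patched[-size:, -size:] = value
    return patched

def poison_for_qnn(images, labels, target_label, poison_rate=0.03, seed=0):
    """Poison a small fraction of the dataset; the victim model is untouched."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(len(images) * poison_rate), replace=False)
    for i in idx:
        images[i] = stamp_patch(images[i])
    labels[idx] = target_label
    return images, labels, idx

if __name__ == "__main__":
    # Toy grayscale data standing in for the small images a QNN would encode.
    x = np.random.rand(500, 8, 8).astype(np.float32)
    y = np.random.randint(0, 2, size=500)
    x_poisoned, y_poisoned, poisoned_idx = poison_for_qnn(x, y, target_label=1)
    print(f"poisoned {len(poisoned_idx)} of {len(x)} samples")
```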