Journal Articles
5 articles found
1. Adaptive Backdoor Attack against Deep Neural Networks (Cited by: 1)
Authors: Honglu He, Zhiying Zhu, Xinpeng Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 9, pp. 2617-2633 (17 pages)
Abstract: In recent years, the number of parameters of deep neural networks (DNNs) has been increasing rapidly. The training of DNNs is typically computation-intensive. As a result, many users leverage cloud computing and outsource their training procedures. Outsourcing computation results in a potential risk called backdoor attack, in which a well-trained DNN would perform abnormally on inputs with a certain trigger. Backdoor attacks can also be classified as attacks that exploit fake images. However, most backdoor attacks design a uniform trigger for all images, which can be easily detected and removed. In this paper, we propose a novel adaptive backdoor attack. We overcome this defect and design a generator to assign a unique trigger for each image depending on its texture. To achieve this goal, we use a texture complexity metric to create a special mask for each image, which forces the trigger to be embedded into the rich texture regions. The trigger is distributed in texture regions, which makes it invisible to humans. Besides the stealthiness of triggers, we limit the range of modification of backdoor models to evade detection. Experiments show that our method is effective on multiple datasets, and traditional detectors cannot reveal the existence of a backdoor.
Keywords: backdoor attack; AI security; DNN
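As a rough illustration of the texture-guided trigger placement summarized in this abstract (not the authors' code), the Python sketch below embeds a per-image additive trigger only in high-texture regions, using local variance as a stand-in texture-complexity metric; the window size, keep ratio, blending strength, and random trigger are illustrative assumptions, and the paper's trigger generator network is omitted.

import numpy as np

def texture_mask(gray: np.ndarray, win: int = 5, keep_ratio: float = 0.3) -> np.ndarray:
    """Binary mask selecting the `keep_ratio` most textured pixels (local variance)."""
    pad = win // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="reflect")
    var = np.zeros_like(gray, dtype=np.float64)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            var[i, j] = padded[i:i + win, j:j + win].var()
    return (var >= np.quantile(var, 1.0 - keep_ratio)).astype(np.float64)

def embed_trigger(img: np.ndarray, trigger: np.ndarray, strength: float = 0.05) -> np.ndarray:
    """Blend an additive trigger into rich-texture regions only (img values in [0, 1])."""
    mask = texture_mask(img.mean(axis=2))[..., None]   # H x W x 1
    return np.clip(img + strength * mask * trigger, 0.0, 1.0)

# Usage: poison one 32 x 32 RGB image with a random per-image trigger.
rng = np.random.default_rng(0)
clean = rng.random((32, 32, 3))
poisoned = embed_trigger(clean, rng.uniform(-1.0, 1.0, size=(32, 32, 3)))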
2. A Gaussian Noise-Based Algorithm for Enhancing Backdoor Attacks
Authors: Hong Huang, Yunfei Wang, Guotao Yuan, Xin Li. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 361-387 (27 pages)
Abstract: Deep Neural Networks (DNNs) are integral to various aspects of modern life, enhancing work efficiency. Nonetheless, their susceptibility to diverse attack methods, including backdoor attacks, raises security concerns. We aim to investigate backdoor attack methods for image categorization tasks, to promote the development of DNNs towards higher security. Research on backdoor attacks currently faces significant challenges due to the distinct and abnormal data patterns of malicious samples, and the meticulous data screening by developers, hindering practical attack implementation. To overcome these challenges, this study proposes a Gaussian Noise-Targeted Universal Adversarial Perturbation (GN-TUAP) algorithm. This approach restricts the direction of perturbations and normalizes abnormal pixel values, ensuring that perturbations progress as much as possible in a direction perpendicular to the decision hyperplane in linear problems. This limits anomalies within the perturbations, improves their visual stealthiness, and makes them more challenging for defense methods to detect. To verify the effectiveness, stealthiness, and robustness of GN-TUAP, we propose a comprehensive threat model. Based on this model, extensive experiments were conducted using the CIFAR-10, CIFAR-100, GTSRB, and MNIST datasets, comparing our method with existing state-of-the-art attack methods. We also tested our perturbation triggers using various defense methods and further experimented on the robustness of the triggers against noise filtering techniques. The experimental outcomes demonstrate that backdoor attacks leveraging perturbations generated via our algorithm exhibit cross-model attack effectiveness and superior stealthiness. Furthermore, they possess robust anti-detection capabilities and maintain commendable performance when subjected to noise-filtering methods.
Keywords: image classification model; backdoor attack; Gaussian distribution; Artificial Intelligence (AI) security
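The sketch below is a heavily simplified, hypothetical take on a targeted universal perturbation with a Gaussian-noise start, in the spirit of the GN-TUAP abstract above; it does not reproduce the paper's direction restriction or pixel normalization, and the model, data loader, input size, epsilon budget, step size, and epoch count are assumed placeholders.

import torch
import torch.nn.functional as F

def targeted_uap(model, loader, target_class: int, eps: float = 8 / 255,
                 step: float = 1 / 255, epochs: int = 5, device: str = "cpu"):
    """Learn one perturbation that pushes many images toward `target_class`."""
    model.eval().to(device)
    # Gaussian-noise initialization, clipped into the L-infinity budget.
    delta = (0.5 * eps * torch.randn(1, 3, 32, 32, device=device)).clamp(-eps, eps)
    delta.requires_grad_(True)
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            target = torch.full((x.size(0),), target_class, dtype=torch.long, device=device)
            loss = F.cross_entropy(model((x + delta).clamp(0, 1)), target)
            loss.backward()
            with torch.no_grad():
                delta -= step * delta.grad.sign()   # step toward the target class
                delta.clamp_(-eps, eps)             # keep the universal trigger small
            delta.grad.zero_()
    return delta.detach()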
3. FMSA: a meta-learning framework-based fast model stealing attack technique against intelligent network intrusion detection systems
Authors: Kaisheng Fan, Weizhe Zhang, Guangrui Liu, Hui He. Cybersecurity (EI, CSCD), 2024, Issue 1, pp. 110-121 (12 pages)
Abstract: Intrusion detection systems are increasingly using machine learning. While machine learning has shown excellent performance in identifying malicious traffic, it may increase the risk of privacy leakage. This paper focuses on implementing a model stealing attack on intrusion detection systems. Existing model stealing attacks are hard to implement in practical network environments, as they either need private data of the victim dataset or frequent access to the victim model. In this paper, we propose a novel solution called Fast Model Stealing Attack (FMSA) to address the problem in the field of model stealing attacks. We also highlight the risks of using ML-NIDS in network security. First, meta-learning frameworks are introduced into the model stealing algorithm to clone the victim model in a black-box state. Then, the number of accesses to the target model is used as an optimization term, resulting in minimal queries to achieve model stealing. Finally, adversarial training is used to simulate the data distribution of the target model and achieve the recovery of privacy data. Through experiments on multiple public datasets, compared to existing state-of-the-art algorithms, FMSA reduces the number of accesses to the target model and improves the accuracy of the clone model on the test dataset to 88.9% and the similarity with the target model to 90.1%. We demonstrate the successful execution of model stealing attacks on the ML-NIDS system even with protective measures in place to limit the number of anomalous queries.
Keywords: AI security; model stealing attack; network intrusion detection; meta-learning
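For context only, here is a minimal query-and-distill sketch of black-box model stealing: it shows just the basic idea of fitting a surrogate to a victim's soft outputs, while FMSA's meta-learning initialization and adversarial data synthesis are omitted; `victim_query`, the surrogate architecture, query budget, and training settings are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

def steal_model(victim_query, surrogate: nn.Module, seed_inputs: torch.Tensor,
                query_budget: int = 1000, lr: float = 1e-3, epochs: int = 20) -> nn.Module:
    """Fit `surrogate` to imitate a black-box `victim_query(x) -> class probabilities`."""
    x = seed_inputs[:query_budget]              # respect a fixed query budget
    with torch.no_grad():
        soft_labels = victim_query(x)           # output-only access, no gradients
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.kl_div(F.log_softmax(surrogate(x), dim=1),
                        soft_labels, reduction="batchmean")  # match the victim's outputs
        loss.backward()
        opt.step()
    return surrogate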
4. Adversarial Example Generation Method Based on Sensitive Features
Authors: WEN Zerui, SHEN Zhidong, SUN Hui, QI Baiwen. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2023, Issue 1, pp. 35-44 (10 pages)
Abstract: As deep learning models have made remarkable strides in numerous fields, a variety of adversarial attack methods have emerged to interfere with deep learning models. Adversarial examples apply a minute perturbation to the original image, which is imperceptible to humans but produces a massive error in the deep learning model. Existing attack methods have achieved good results when the network structure is known. However, in the case of unknown network structures, the effectiveness of the attacks still needs to be improved. Therefore, transfer-based attacks are now very popular because of their convenience and practicality, allowing adversarial samples generated on known models to be used in attacks on unknown models. In this paper, we extract sensitive features by Grad-CAM and propose two single-step attack methods and a multi-step attack method to corrupt sensitive features. In the two single-step attacks, one corrupts the features extracted from a single model and the other corrupts the features extracted from multiple models. In the multi-step attack, our method improves on the existing attack method, thus enhancing adversarial sample transferability to achieve better results on unknown models. Our method is also validated on CIFAR-10 and MNIST, and achieves a 1%-3% improvement in transferability.
Keywords: deep learning model; adversarial example; transferability; sensitive characteristics; AI security
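As a simplified, single-step illustration of attacking only the "sensitive" regions of an input, the sketch below restricts an FGSM-style perturbation to the most salient pixels; note that the paper locates sensitive features with Grad-CAM on intermediate layers, whereas this stand-in uses an input-gradient saliency map, and `eps` and `keep_ratio` are assumed values rather than the paper's settings.

import torch
import torch.nn.functional as F

def sensitive_fgsm(model, x: torch.Tensor, y: torch.Tensor,
                   eps: float = 8 / 255, keep_ratio: float = 0.2) -> torch.Tensor:
    """One FGSM-style step applied only to the top `keep_ratio` most salient pixels."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    grad = x.grad
    saliency = grad.abs().amax(dim=1, keepdim=True)            # per-pixel sensitivity proxy
    k = max(1, int(keep_ratio * saliency[0].numel()))
    cutoff = saliency.flatten(1).topk(k, dim=1).values[:, -1]   # per-image threshold
    mask = (saliency >= cutoff.view(-1, 1, 1, 1)).float()
    x_adv = x + eps * mask * grad.sign()                        # corrupt sensitive regions only
    return x_adv.clamp(0, 1).detach()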
5. A survey of practical adversarial example attacks (Cited by: 1)
Authors: Lu Sun, Mingtian Tan, Zhe Zhou. Cybersecurity, 2018, Issue 1, pp. 213-221 (9 pages)
Abstract: Adversarial examples revealed the weakness of machine learning techniques in terms of robustness, which moreover inspired adversaries to make use of the weakness to attack systems employing machine learning. Existing research has covered the methodologies of adversarial example generation, the root cause of the existence of adversarial examples, and some defense schemes. However, practical attacks against real-world systems did not appear until recently, mainly because of the difficulty in injecting an artificially generated example into the model behind the hosting system without breaking its integrity. Recent case studies against face recognition systems and road sign recognition systems finally bridged the gap between theoretical adversarial example generation methodologies and practical attack schemes against real systems. To guide future research in defending against adversarial examples in the real world, we formalize the threat model for practical attacks with adversarial examples, and also analyze the restrictions and key procedures for launching real-world adversarial example attacks.
Keywords: AI systems security; adversarial examples; attacks