Journal Articles
510 articles found
1. Adversarial attacks and defenses for digital communication signals identification
Authors: Qiao Tian, Sicheng Zhang, Shiwen Mao, Yun Lin. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 3, pp. 756-764 (9 pages)
As modern communication technology advances apace, digital communication signals identification plays an important role in cognitive radio networks and in communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data-dependence and inexplicability of AI models and the openness of electromagnetic space, the physical-layer digital communication signals identification model is threatened by adversarial attacks. Adversarial examples pose a common threat to AI models, where well-designed and slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signals identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on the end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration verification system is developed to show that the adversarial attack is a real threat to the digital communication signals identification model, which should be paid more attention in future research.
Keywords: digital communication signals identification, AI model, adversarial attacks, adversarial defenses, adversarial indicators
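To make the kind of attack described above concrete, the following is a minimal FGSM-style sketch against an automatic modulation classification network; PyTorch, the (batch, 2, N) I/Q tensor layout, and the eps budget are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_signal(model, x, y, eps=0.01):
    """One-step FGSM perturbation of an I/Q signal batch (assumed shape (B, 2, N)).

    The perturbation follows the sign of the loss gradient, bounded by eps in
    the L-infinity norm, which is the basic recipe behind many physical-layer attacks.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)          # y: true modulation labels
    loss.backward()
    return (x + eps * x.grad.sign()).detach()    # adversarial I/Q samples
```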
2. Attention-Guided Sparse Adversarial Attacks with Gradient Dropout
Authors: ZHAO Hongzhi, HAO Lingguang, HAO Kuangrong, WEI Bing, LIU Xiaoyan. Journal of Donghua University (English Edition) (CAS), 2024, Issue 5, pp. 545-556 (12 pages)
Deep neural networks are extremely vulnerable to intentionally generated adversarial examples, which are produced by overlaying tiny noise on clean images. However, most existing transfer-based attack methods add perturbations to every pixel of the original image with the same weight, resulting in redundant noise in the adversarial examples, which makes them easier to detect. Given this, a novel attention-guided sparse adversarial attack strategy with gradient dropout, which can be readily incorporated with existing gradient-based methods, is introduced to minimize both the intensity and the scale of perturbations while ensuring the effectiveness of the adversarial examples. Specifically, in the gradient dropout phase, some relatively unimportant gradient information is randomly discarded to limit the intensity of the perturbation. In the attention-guided phase, the influence of each pixel on the model output is evaluated using a soft mask-refined attention mechanism, and the perturbation of pixels with smaller influence is limited to restrict the scale of the perturbation. Thorough experiments on the NeurIPS 2017 adversarial dataset and the ILSVRC 2012 validation dataset show that the proposed strategy can significantly diminish the superfluous noise in adversarial examples while keeping their attack efficacy intact. For instance, in attacks on adversarially trained models, integrating the strategy reduces the average level of noise injected into images by 8.32%, while the average attack success rate decreases by only 0.34%. Furthermore, the method can substantially elevate the attack success rate while introducing only a slight degree of perturbation.
Keywords: deep neural network, adversarial attack, sparse adversarial attack, adversarial transferability, adversarial example
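A hedged sketch of a single update step that combines the two ideas in the abstract, gradient dropout and an attention mask; the PyTorch API, the drop probability, the keep ratio, and the simple quantile threshold standing in for the soft mask-refined attention are all assumptions.

```python
import torch

def sparse_attack_step(x_adv, grad, attn, drop_p=0.3, keep_ratio=0.2, alpha=1.0 / 255):
    """One iteration of an attention-guided sparse update (illustrative only).

    grad: loss gradient w.r.t. the image (C, H, W); attn: per-pixel attention map
    (H, W) scaled to [0, 1]. Gradient dropout randomly zeroes part of the gradient
    to limit perturbation intensity; the attention mask restricts updates to the
    most influential pixels to limit perturbation scale.
    """
    drop_mask = (torch.rand_like(grad) > drop_p).float()
    thresh = torch.quantile(attn.flatten(), 1.0 - keep_ratio)
    attn_mask = (attn >= thresh).float().unsqueeze(0)        # broadcast over channels
    return x_adv + alpha * (grad * drop_mask * attn_mask).sign()
```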
3. LDAS&ET-AD: Learnable Distillation Attack Strategies and Evolvable Teachers Adversarial Distillation
Authors: Shuyi Li, Hongchao Hu, Xiaohan Yang, Guozhen Cheng, Wenyan Liu, Wei Guo. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2331-2359 (29 pages)
Adversarial distillation (AD) has emerged as a potential solution to tackle the challenging optimization problem of loss with hard labels in adversarial training (AT). However, fixed sample-agnostic and student-egocentric attack strategies are unsuitable for distillation. Additionally, the reliability of guidance from static teachers diminishes as target models become more robust. This paper proposes an AD method called Learnable Distillation Attack Strategies and Evolvable Teachers Adversarial Distillation (LDAS&ET-AD). Firstly, a learnable distillation attack strategy generating mechanism is developed to automatically generate sample-dependent attack strategies tailored for distillation. A strategy model is introduced to produce attack strategies that enable adversarial examples (AEs) to be created in areas where the target model significantly diverges from the teachers, by competing with the target model in minimizing or maximizing the AD loss. Secondly, a teacher evolution strategy is introduced to enhance the reliability and effectiveness of knowledge in improving the generalization performance of the target model. By calculating the experimentally updated target model's validation performance on both clean samples and AEs, the impact of distillation from each training sample and AE on the target model's generalization and robustness is assessed and serves as feedback to fine-tune the standard and robust teachers accordingly. Experiments evaluate the performance of LDAS&ET-AD against different adversarial attacks on the CIFAR-10 and CIFAR-100 datasets. The experimental results demonstrate that the proposed method achieves a robust precision of 45.39% and 42.63% against AutoAttack (AA) on the CIFAR-10 dataset for ResNet-18 and MobileNet-V2, respectively, marking an improvement of 2.31% and 3.49% over the baseline method. In comparison to state-of-the-art adversarial defense techniques, our method surpasses Introspective Adversarial Distillation, the top-performing method in terms of robustness under AA attack on the CIFAR-10 dataset, with enhancements of 1.40% and 1.43% for ResNet-18 and MobileNet-V2, respectively. These findings demonstrate the effectiveness of the proposed method in enhancing the robustness of deep neural networks (DNNs) against prevalent adversarial attacks compared to other competing methods. In conclusion, LDAS&ET-AD provides reliable and informative soft labels to one of the most promising defense methods, AT, alleviating the limitations of untrusted teachers and unsuitable AEs in existing AD techniques. We hope this paper promotes the development of DNNs in real-world trust-sensitive fields and helps ensure a more secure and dependable future for artificial intelligence systems.
Keywords: adversarial training, adversarial distillation, learnable distillation attack strategies, teacher evolution strategy
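For orientation, a minimal adversarial-distillation objective of the general family this paper builds on: the student is trained on adversarial examples to match a teacher's softened predictions on the corresponding clean inputs. The temperature, mixing weight, and exact loss form are illustrative assumptions and do not reproduce the learnable strategies or teacher evolution of LDAS&ET-AD.

```python
import torch
import torch.nn.functional as F

def ad_loss(student, teacher, x_clean, x_adv, tau=2.0, alpha=0.7):
    """Toy adversarial-distillation loss: KL divergence to the teacher's soft clean
    predictions plus cross-entropy to its hard labels, both evaluated on AEs."""
    with torch.no_grad():
        t_soft = F.softmax(teacher(x_clean) / tau, dim=1)
    s_logits = student(x_adv)
    kd = F.kl_div(F.log_softmax(s_logits / tau, dim=1), t_soft,
                  reduction="batchmean") * tau * tau
    ce = F.cross_entropy(s_logits, t_soft.argmax(dim=1))
    return alpha * kd + (1.0 - alpha) * ce
```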
4. MaliFuzz: Adversarial Malware Detection Model for Defending Against Fuzzing Attack
Authors: Xianwei Gao, Chun Shan, Changzhen Hu. Journal of Beijing Institute of Technology (EI, CAS), 2024, Issue 5, pp. 436-449 (14 pages)
With the prevalence of machine learning in malware defense, hackers have tried to attack machine learning models to evade detection. Since it is generally difficult to explore the details of malware detection models, hackers can adopt fuzzing attacks to manipulate the features of malware so that it looks closer to benign programs, on the premise of retaining its functions. In this paper, attack and defense methods for malware detection models based on machine learning algorithms are studied. Firstly, we design a fuzzing attack method that randomly modifies features to evade detection. The fuzzing attack can effectively degrade the accuracy of a machine learning model with a single feature type. Then an adversarial malware detection model, MaliFuzz, is proposed to defend against the fuzzing attack. Unlike an ordinary single-feature detection model, it uses features combined from static and dynamic analysis to improve the defense ability. The experimental results show that the adversarial malware detection model with combined features can deal with the attack. The methods designed in this paper are of great significance in improving the security of malware detection models and have good application prospects.
Keywords: adversarial machine learning, fuzzing attack, malware detection
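A toy version of the random-mutation fuzzing idea described above, assuming an sklearn-style predict() API, benign label 0, and a caller-supplied list of feature indices that can be changed without breaking functionality; the mutation rules and iteration budget are invented for illustration.

```python
import numpy as np

def fuzzing_attack(x, model, mutable_idx, n_iter=200, seed=0):
    """Randomly mutate mutable features of a malware feature vector until the
    classifier predicts benign (0) or the iteration budget is exhausted."""
    rng = np.random.default_rng(seed)
    cand = x.astype(float).copy()
    for _ in range(n_iter):
        i = int(rng.choice(mutable_idx))                # pick a function-preserving feature
        if cand[i] in (0.0, 1.0):                       # binary feature: flip it
            cand[i] = 1.0 - cand[i]
        else:                                           # numeric feature: rescale it
            cand[i] *= rng.uniform(0.5, 1.5)
        if model.predict(cand.reshape(1, -1))[0] == 0:  # evasion succeeded
            return cand
    return cand
```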
5. Adversarial Attacks and Defenses in Deep Learning (cited 19 times)
Authors: Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu. Engineering (SCIE, EI), 2020, Issue 3, pp. 346-360 (15 pages)
With the rapid development of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of the deployed algorithms. Recently, the security vulnerability of DL algorithms to adversarial samples has been widely recognized. The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans. Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality. Hence, adversarial attack and defense techniques have attracted increasing attention from both the machine learning and security communities and have become a hot research topic in recent years. In this paper, we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques. We then describe a few research efforts on defense techniques, which cover the broad frontier in the field. Several open problems and challenges are subsequently discussed, which we hope will provoke further research efforts in this critical area.
Keywords: machine learning, deep neural network, adversarial example, adversarial attack, adversarial defense
6. Primary User Adversarial Attacks on Deep Learning-Based Spectrum Sensing and the Defense Method (cited 4 times)
Authors: Shilian Zheng, Linhui Ye, Xuanye Wang, Jinyin Chen, Huaji Zhou, Caiyi Lou, Zhijin Zhao, Xiaoniu Yang. China Communications (SCIE, CSCD), 2021, Issue 12, pp. 94-107 (14 pages)
The spectrum sensing model based on deep learning has achieved satisfying detection performance, but its robustness has not been verified. In this paper, we propose the primary user adversarial attack (PUAA) to verify the robustness of deep learning based spectrum sensing models. PUAA adds a carefully manufactured perturbation to the benign primary user signal, which greatly reduces the probability of detection of the spectrum sensing model. We design three PUAA methods in the black-box scenario. In order to defend against PUAA, we propose a defense method based on an autoencoder, named DeepFilter. We apply a long short-term memory network and a convolutional neural network together in DeepFilter, so that it can extract the temporal and local features of the input signal at the same time to achieve effective defense. Extensive experiments are conducted to evaluate the attack effect of the designed PUAA methods and the defense effect of DeepFilter. Results show that the three PUAA methods can greatly reduce the probability of detection of the deep learning-based spectrum sensing model. In addition, the experimental results on the defense effect of DeepFilter show that it can effectively defend against PUAA without affecting the detection performance of the model.
Keywords: spectrum sensing, cognitive radio, deep learning, adversarial attack, autoencoder, defense
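A minimal Conv1d + LSTM denoising autoencoder in the spirit of the DeepFilter design sketched in the abstract (convolution for local features, LSTM for temporal features); the layer sizes, the (batch, 2, N) I/Q layout, and training with an MSE loss against the clean signal are assumptions.

```python
import torch.nn as nn

class DeepFilterSketch(nn.Module):
    """Illustrative denoising autoencoder for received I/Q samples, shape (B, 2, N)."""

    def __init__(self, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(                       # local feature extractor
            nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)                 # reconstruct I/Q per time step

    def forward(self, x):
        h = self.conv(x).transpose(1, 2)                 # (B, N, 32)
        h, _ = self.lstm(h)                              # temporal features
        return self.head(h).transpose(1, 2)              # (B, 2, N) filtered signal
```

Trained on (perturbed, clean) signal pairs, such a filter would sit in front of the sensing model at inference time.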
7. Adversarial Attacks on Content-Based Filtering Journal Recommender Systems (cited 4 times)
Authors: Zhaoquan Gu, Yinyin Cai, Sheng Wang, Mohan Li, Jing Qiu, Shen Su, Xiaojiang Du, Zhihong Tian. Computers, Materials & Continua (SCIE, EI), 2020, Issue 9, pp. 1755-1770 (16 pages)
Recommender systems are very useful for people to explore what they really need. Academic papers are important achievements for researchers, who often face a great deal of choice when submitting their papers. In order to improve the efficiency of selecting the most suitable journals for publishing their work, journal recommender systems (JRS) can automatically provide a small number of candidate journals based on key information such as the title and the abstract. However, users or journal owners may attack the system for their own purposes. In this paper, we discuss adversarial attacks against content-based filtering JRS. We propose both a targeted attack method, which makes some target journals appear more often in the system, and a non-targeted attack method, which makes the system provide incorrect recommendations. We also conduct extensive experiments to validate the proposed methods. We hope this paper can help improve JRS by revealing the existence of such adversarial attacks.
Keywords: journal recommender system, adversarial attacks, Rocchio algorithm, k-nearest-neighbor algorithm
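To illustrate the setting rather than the paper's Rocchio/k-NN implementation, here is an assumed toy pipeline: a TF-IDF cosine-similarity recommender plus a naive targeted manipulation that pads a target journal's corpus with terms common in victim queries. All names and parameters are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend(journal_docs, query, k=3):
    """Rank journals by TF-IDF cosine similarity to the paper's title + abstract."""
    m = TfidfVectorizer(stop_words="english").fit_transform(journal_docs + [query])
    sims = cosine_similarity(m)[-1, :-1]         # query vs. every journal corpus
    return sims.argsort()[::-1][:k]

def targeted_poison(journal_docs, target_idx, bait_terms, copies=20):
    """Naive targeted attack: stuff the target journal's text with bait terms so
    it ranks higher for queries that contain those terms."""
    docs = list(journal_docs)
    docs[target_idx] += (" " + " ".join(bait_terms)) * copies
    return docs
```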
8. An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection
Authors: Younghoon Ban, Myeonghyun Kim, Haehyun Cho. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 3535-3563 (29 pages)
Antivirus vendors and the research community employ machine learning (ML) or deep learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that adversarial examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on ambiguities in the PE format, as previously employed in evasion attack research. By applying the perturbation techniques directly to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Also, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs demonstrated the capability to bypass detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments on this approach, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is only effective against the AEs that are included in the training classifiers.
Keywords: malware classification, machine learning, adversarial examples, evasion attack, cybersecurity
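Of the named perturbations, overlay append is simple enough to sketch directly: bytes appended after the last section of a PE file are ignored by the loader, so the sample stays executable while its byte-level features shift. The payload size and file handling below are assumptions; a real pipeline would normally go through a PE parsing library.

```python
def overlay_append(pe_bytes: bytes, payload: bytes = b"\x00" * 4096) -> bytes:
    """Overlay-append perturbation sketch: concatenate extra bytes to the binary.

    The Windows loader ignores data past the last section, so functionality is
    preserved while hashes and many static features change.
    """
    return pe_bytes + payload

# Hypothetical usage:
# with open("sample.exe", "rb") as f:
#     adv_bytes = overlay_append(f.read())
```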
9. Deep Image Restoration Model: A Defense Method Against Adversarial Attacks (cited 1 time)
Authors: Kazim Ali, Adnan N. Quershi, Ahmad Alauddin Bin Arifin, Muhammad Shahid Bhatti, Abid Sohail, Rohail Hassan. Computers, Materials & Continua (SCIE, EI), 2022, Issue 5, pp. 2209-2224 (16 pages)
These days, deep learning and computer vision are fast-growing fields in the modern world of information technology. Deep learning algorithms and computer vision have achieved great success in different applications such as image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success in various applications, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally and slightly changed or perturbed. These changes are humanly imperceptible, but the inputs are misclassified by a model with high probability, which severely affects its performance or predictions. In this scenario, we present a deep image restoration model that restores adversarial examples so that the target model classifies them correctly again. We show that our defense method against adversarial attacks, based on a deep image restoration model, is simple and state-of-the-art by providing strong experimental evidence. We used the MNIST and CIFAR10 datasets for the experiments and analysis of our defense method. Finally, we compared our method with other state-of-the-art defense methods and found that our results are better than those of the rival methods.
Keywords: computer vision, deep learning, convolutional neural networks, adversarial examples, adversarial attacks, adversarial defenses
10. Adversarial Training Against Adversarial Attacks for Machine Learning-Based Intrusion Detection Systems (cited 1 time)
Authors: Muhammad Shahzad Haroon, Husnain Mansoor Ali. Computers, Materials & Continua (SCIE, EI), 2022, Issue 11, pp. 3513-3527 (15 pages)
Intrusion detection systems play an important role in defending networks from security breaches. End-to-end machine learning-based intrusion detection systems are being used to achieve high detection accuracy. However, in the case of adversarial attacks, which cause misclassification by introducing imperceptible perturbations on input samples, the performance of machine learning-based intrusion detection systems is greatly affected. Though such problems have been widely discussed in the image processing domain, very few studies have investigated network intrusion detection systems and proposed corresponding defences. In this paper, we attempt to fill this gap by launching adversarial attacks on standard intrusion detection datasets and then using the adversarial samples to train various machine learning algorithms (adversarial training) to test their defence performance. This is achieved by first creating adversarial samples based on the Jacobian-based Saliency Map Attack (JSMA) and the Fast Gradient Sign Attack (FGSM) using the NSL-KDD, UNSW-NB15 and CICIDS17 datasets. The study then trains and tests with JSMA- and FGSM-based adversarial examples in seen (where the model has been trained on adversarial samples) and unseen (where the model is unaware of adversarial packets) attack settings. The experiments include multiple machine learning classifiers to evaluate their performance against adversarial attacks. The performance parameters include accuracy, F1-score and area under the receiver operating characteristic curve (AUC) score.
Keywords: intrusion detection system, adversarial attacks, adversarial training, adversarial machine learning
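A hedged sketch of the adversarial-training loop on tabular flow features, shown with FGSM only (the JSMA branch is omitted); PyTorch, the eps value, and the 50/50 clean/adversarial loss mix are assumptions rather than the paper's exact setup.

```python
import torch.nn.functional as F

def adversarial_train_epoch(model, loader, optimizer, eps=0.05):
    """One epoch of FGSM-based adversarial training for an IDS classifier."""
    model.train()
    for x, y in loader:                                   # x: flow feature vectors
        x_req = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_req), y).backward()       # gradient for the attack
        x_adv = (x_req + eps * x_req.grad.sign()).detach()
        optimizer.zero_grad()
        loss = 0.5 * (F.cross_entropy(model(x), y) +      # clean batch
                      F.cross_entropy(model(x_adv), y))   # adversarial batch
        loss.backward()
        optimizer.step()
```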
11. Chained Dual-Generative Adversarial Network: A Generalized Defense Against Adversarial Attacks (cited 1 time)
Authors: Amitoj Bir Singh, Lalit Kumar Awasthi, Urvashi, Mohammad Shorfuzzaman, Abdulmajeed Alsufyani, Mueen Uddin. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 2541-2555 (15 pages)
Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, but they still lead to misclassification of the images. Researchers have demonstrated such attacks making production self-driving cars misclassify Stop road signs as 45 Miles Per Hour (MPH) road signs, and a turtle being misclassified as an AK47. Three primary types of defense approaches exist to safeguard against such attacks, i.e., gradient masking, robust optimization, and adversarial example detection. Very few approaches use Generative Adversarial Networks (GAN) for defense against adversarial attacks. In this paper, we create a new approach to defend against adversarial attacks, dubbed Chained Dual-Generative Adversarial Network (CD-GAN), which tackles the defense by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling with GANs. CD-GAN is created using two GANs, i.e., CDGAN's Sub-Resolution GAN and CDGAN's Super-Resolution GAN. The first, CDGAN's Sub-Resolution GAN, takes the original-resolution input image and oversamples it to generate a lower-resolution neutralized image. The second, CDGAN's Super-Resolution GAN, takes the output of the Sub-Resolution GAN and undersamples it to generate a higher-resolution image that removes any remaining perturbations. The Chained Dual GAN is formed by chaining these two GANs together. Both GANs are trained independently. CDGAN's Sub-Resolution GAN is trained using higher-resolution adversarial images as inputs and lower-resolution neutralized images as output examples; hence, this GAN downscales the image while removing adversarial attack noise. CDGAN's Super-Resolution GAN is trained using lower-resolution adversarial images as inputs and higher-resolution neutralized images as outputs; because of this, it acts as an upscaling GAN while removing adversarial attack noise. Furthermore, CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort, and can defend any classifier model against adversarial attack. In this way, it is a generalized defense against adversarial attacks, capable of defending any classifier model against any attack. This enables the user to directly integrate CD-GAN with an existing production-deployed classifier smoothly. CD-GAN iteratively removes the adversarial noise using a multi-step, modular approach. It performs comparably to the state of the art, with a mean accuracy of 33.67, while using minimal compute resources in training.
Keywords: adversarial attacks, GAN-based adversarial defense, image classification models, adversarial defense
12. Kernel-based adversarial attacks and defenses on support vector classification (cited 1 time)
Authors: Wanman Li, Xiaozhang Liu, Anli Yan, Jie Yang. Digital Communications and Networks (SCIE, CSCD), 2022, Issue 4, pp. 492-497 (6 pages)
While malicious samples are widely found in many application fields of machine learning, suitable countermeasures have been investigated in the field of adversarial machine learning. Due to the importance and popularity of Support Vector Machines (SVMs), we first describe the evasion attack against SVM classification and then propose a defense strategy in this paper. The evasion attack utilizes the classification surface of the SVM to iteratively find the minimal perturbations that mislead the nonlinear classifier. Specifically, we propose what we call a vulnerability function to measure the vulnerability of the SVM classifiers. Utilizing this vulnerability function, we put forward an effective defense strategy based on the kernel optimization of SVMs with a Gaussian kernel against the evasion attack. Our defense method is verified to be very effective on benchmark datasets, and the SVM classifier becomes more robust after using our kernel optimization scheme.
Keywords: adversarial machine learning, support vector machines, evasion attack, vulnerability function, kernel optimization
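A minimal gradient-descent evasion sketch against an sklearn SVC with Gaussian (RBF) kernel, which walks a malicious sample toward the benign side of the decision surface; the explicit gamma argument, the step size, the sign convention (positive decision value = malicious), and the stopping rule are assumptions, and the paper's vulnerability function and kernel-optimization defense are not reproduced here.

```python
import numpy as np

def svm_rbf_evasion(clf, x, gamma, step=0.05, n_iter=100):
    """Iteratively perturb x against the RBF decision function
    f(x) = sum_i c_i * exp(-gamma * ||x - s_i||^2) + b."""
    sv, coef, b = clf.support_vectors_, clf.dual_coef_.ravel(), clf.intercept_[0]
    x_adv = x.astype(float).copy()
    for _ in range(n_iter):
        diff = x_adv - sv                                  # (n_sv, d)
        k = np.exp(-gamma * np.sum(diff ** 2, axis=1))     # kernel values
        if coef @ k + b < 0:                               # crossed the boundary
            break
        grad = (coef * k * (-2.0 * gamma)) @ diff          # d f / d x
        x_adv -= step * grad / (np.linalg.norm(grad) + 1e-12)
    return x_adv
```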
13. A Study on Filter-Based Adversarial Image Classification Models
Author: Zhongcheng Zhao. Journal of Contemporary Educational Research, 2024, Issue 11, pp. 245-256 (12 pages)
In view of the fact that adversarial examples can lead to high-confidence erroneous outputs of deep neural networks, this study aims to improve the safety of deep neural networks by distinguishing adversarial examples. A classification model based on a filter residual network structure is used to accurately classify adversarial examples. The filter-based classification model includes residual network feature extraction and classification modules, which are iteratively optimized by an adversarial training strategy. Three mainstream adversarial attack methods are improved, and adversarial samples are generated on the Mini-ImageNet dataset. Subsequently, these samples are used to attack EfficientNet and the filter-based classification model respectively, and the attack effects are compared. Experimental results show that the filter-based classification model has high classification accuracy when dealing with Mini-ImageNet adversarial examples. Adversarial training can effectively enhance the robustness of deep neural network models.
Keywords: adversarial example, image classification, adversarial attack
14. DroidEnemy: Battling adversarial example attacks for Android malware detection
Authors: Neha Bala, Aemun Ahmar, Wenjia Li, Fernanda Tovar, Arpit Battu, Prachi Bambarkar. Digital Communications and Networks (SCIE, CSCD), 2022, Issue 6, pp. 1040-1047 (8 pages)
In recent years, we have witnessed a surge in mobile devices such as smartphones, tablets, smart watches, etc., most of which are based on the Android operating system. However, because these Android-based mobile devices are becoming increasingly popular, they are now the primary target of mobile malware, which could lead to both privacy leakage and property loss. To address the rapidly deteriorating security issues caused by mobile malware, various research efforts have been made to develop novel and effective detection mechanisms to identify and combat them. Nevertheless, in order to avoid being caught by these malware detection mechanisms, malware authors are inclined to initiate adversarial example attacks by tampering with mobile applications. In this paper, several types of adversarial example attacks are investigated and a feasible approach is proposed to fight against them. First, we look at adversarial example attacks on the Android system and prior solutions that have been proposed to address these attacks. Then, we specifically focus on the data poisoning attack and evasion attack models, which may mutate various application features, such as API calls, permissions and the class label, to produce adversarial examples. We then propose and design a malware detection approach that is resistant to adversarial examples. To observe and investigate how the malware detection system is influenced by adversarial example attacks, we conduct experiments on real Android application datasets composed of both malware and benign applications. Experimental results clearly indicate that the performance of Android malware detection is severely degraded when facing adversarial example attacks.
Keywords: security, malware detection, adversarial example attack, data poisoning attack, evasion attack, machine learning, Android
15. Local imperceptible adversarial attacks against human pose estimation networks
Authors: Fuchang Liu, Shen Zhang, Hao Wang, Caiping Yan, Yongwei Miao. Visual Computing for Industry, Biomedicine, and Art (EI), 2023, Issue 1, pp. 318-328 (11 pages)
Deep neural networks are vulnerable to attacks from adversarial inputs. Corresponding attack research on human pose estimation (HPE), particularly for body joint detection, has been largely unexplored. Transferring classification-based attack methods to body joint regression tasks is not straightforward. Another issue is that attack effectiveness and imperceptibility contradict each other. To solve these issues, we propose local imperceptible attacks on HPE networks. In particular, we reformulate imperceptible attacks on body joint regression as a constrained maximum-allowable attack. Furthermore, we approximate the solution using iterative gradient-based strength refinement and greedy-based pixel selection. Our method crafts effective perceptual adversarial attacks that consider both human perception and attack effectiveness. We conducted a series of imperceptible attacks against state-of-the-art HPE methods, including HigherHRNet, DEKR, and ViTPose. The experimental results demonstrate that the proposed method achieves excellent imperceptibility while maintaining attack effectiveness by significantly reducing the number of perturbed pixels. Perturbing approximately 4% of the pixels can achieve sufficient attacks on HPE.
Keywords: adversarial attack, human pose estimation, white-box attack, imperceptibility, local perturbation
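A sketch of greedy, saliency-ranked pixel selection in the spirit of the localized attack above: only the pixels with the largest gradient magnitude are perturbed (roughly the 4% the abstract mentions). The gradient ranking, pixel budget, and step size are assumptions standing in for the paper's iterative strength refinement on the pose-estimation loss.

```python
import torch

def greedy_pixel_attack(x, grad, budget=0.04, eps=8.0 / 255):
    """Perturb only the top `budget` fraction of pixels ranked by gradient magnitude.

    x, grad: image and loss gradient, both shaped (C, H, W), values in [0, 1].
    """
    score = grad.abs().sum(dim=0)                          # (H, W) per-pixel saliency
    k = max(1, int(budget * score.numel()))
    thresh = score.flatten().topk(k).values.min()
    mask = (score >= thresh).float().unsqueeze(0)          # broadcast over channels
    return (x + eps * grad.sign() * mask).clamp(0.0, 1.0)
```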
16. VeriFace: Defending against Adversarial Attacks in Face Verification Systems
Authors: Awny Sayed, Sohair Kinlany, Alaa Zaki, Ahmed Mahfouz. Computers, Materials & Continua (SCIE, EI), 2023, Issue 9, pp. 3151-3166 (16 pages)
Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, is designed to identify whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, is designed to remove these perturbations from the input image to ensure that the face verification system can accurately recognize the person in the image. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector can accurately identify adversarial images with a high detection accuracy of 100%. Additionally, our proposed VeriFace adversarial removal method has a significantly lower attack success rate of 6.5% compared to state-of-the-art removal methods.
Keywords: adversarial attacks, face verification, adversarial detection, perturbation removal
17. Adversarial Attack-Based Robustness Evaluation for Trustworthy AI
Authors: Eungyu Lee, Yongsoo Lee, Taejin Lee. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 11, pp. 1919-1935 (17 pages)
Artificial Intelligence (AI) technology has been extensively researched in various fields, including malware detection. AI models must be trustworthy to introduce AI systems into critical decision-making and resource protection roles. The problem of robustness to adversarial attacks is a significant barrier to trustworthy AI. Although various adversarial attack and defense methods are actively being studied, there is a lack of research on robustness evaluation metrics that serve as standards for determining whether AI models are safe and reliable against adversarial attacks. An AI model's robustness level cannot be evaluated by traditional evaluation indicators such as accuracy and recall. Additional evaluation indicators are necessary to evaluate the robustness of AI models against adversarial attacks. In this paper, a Sophisticated Adversarial Robustness Score (SARS) is proposed for AI model robustness evaluation. SARS uses various factors, in addition to the ratio of perturbed features and the size of the perturbation, to evaluate robustness accurately. This evaluation indicator reflects aspects that are difficult to evaluate using traditional indicators. Moreover, the level of robustness can be evaluated by considering the difficulty of generating adversarial samples through adversarial attacks. This paper proposes using SARS, calculated based on adversarial attacks, to identify data groups with robustness vulnerabilities and to improve robustness through adversarial training. Through SARS, it is possible to evaluate the level of robustness, which can help developers identify areas for improvement. To validate the proposed method, experiments were conducted using a malware dataset. Through adversarial training, it was confirmed that SARS increased by 70.59% and the recall reduction rate improved by 64.96%. Through SARS, it is possible to evaluate whether an AI model is vulnerable to adversarial attacks and to identify vulnerable data types. In addition, it is expected that improved models can be achieved by improving resistance to adversarial attacks via methods such as adversarial training.
Keywords: AI, robustness, adversarial attack, adversarial robustness, robustness indicator, trustworthy AI
18. Adversarial Attacks on Featureless Deep Learning Malicious URLs Detection
Authors: Bader Rasheed, Adil Khan, S. M. Ahsan Kazmi, Rasheed Hussain, Md. Jalil Piran, Doug Young Suh. Computers, Materials & Continua (SCIE, EI), 2021, Issue 7, pp. 921-939 (19 pages)
Detecting malicious Uniform Resource Locators (URLs) is crucially important to prevent attackers from committing cybercrimes. Recent research has investigated the role of machine learning (ML) models in detecting malicious URLs. In this ML approach, the features of URLs are first extracted, and then different ML models are trained. The limitation of this approach is that it requires manual feature engineering and does not consider the sequential patterns in the URL. Therefore, deep learning (DL) models are used to solve these issues, since they are able to perform featureless detection. Furthermore, DL models give better accuracy and generalization to newly designed URLs; however, the results of our study show that these models, like any other DL models, can be susceptible to adversarial attacks. In this paper, we examine the robustness of these models and demonstrate the importance of considering this susceptibility before applying such detection systems in real-world solutions. We propose and demonstrate a black-box attack based on scoring functions with a greedy search for the minimum number of perturbations leading to a misclassification. The attack is examined against different types of convolutional neural network (CNN)-based URL classifiers, and it causes a tangible decrease in accuracy, with a reduction of more than 56% in the accuracy of the best classifier (among the classifiers selected for this work). Moreover, adversarial training shows promising results in reducing the influence of the attack on the robustness of the model to less than 7% on average.
Keywords: malicious URLs, detection, deep learning, adversarial attack, web security
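A black-box greedy search sketch over single-character substitutions, assuming a caller-supplied score_fn that returns the classifier's maliciousness probability for a URL string; the character set, edit budget, and 0.5 decision threshold are assumptions, and the paper's scoring functions are not reproduced.

```python
def greedy_url_attack(url, score_fn,
                      charset="abcdefghijklmnopqrstuvwxyz0123456789-", max_edits=5):
    """Greedily apply the single-character substitution that lowers the
    maliciousness score the most, stopping at the (assumed) benign threshold."""
    current = url
    for _ in range(max_edits):
        candidates = [current[:i] + c + current[i + 1:]
                      for i in range(len(current)) for c in charset if c != current[i]]
        best = min(candidates, key=score_fn)
        if score_fn(best) >= score_fn(current):
            break                                   # no single edit helps any more
        current = best
        if score_fn(current) < 0.5:                 # classifier now says benign
            break
    return current
```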
19. Classification of Adversarial Attacks Using Ensemble Clustering Approach
Authors: Pongsakorn Tatongjai, Tossapon Boongoen, Natthakan Iam-On, Nitin Naik, Longzhi Yang. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 2479-2498 (20 pages)
As more business transactions and information services are implemented via communication networks, both personal and organizational assets encounter a higher risk of attacks. To safeguard these, a perimeter defence like a NIDS (network-based intrusion detection system) can be effective for known intrusions. There has been a great deal of attention within the joint community of security and data science on improving machine-learning based NIDS so that it becomes more accurate against adversarial attacks, where obfuscation techniques are applied to disguise patterns of intrusive traffic. The current research focuses on non-payload connections at the TCP (transmission control protocol) stack level, which is applicable to different network applications. In contrast to the wrapper method introduced with the benchmark dataset, three new filter models are proposed to transform the feature space without knowledge of class labels. These ECT (ensemble clustering based transformation) techniques, i.e., ECT-Subspace, ECT-Noise and ECT-Combined, are developed using the concept of ensemble clustering and three different ensemble generation strategies, i.e., random feature subspace, feature noise injection and their combination. Based on an empirical study with a published dataset and four classification algorithms, the new models usually outperform the original wrapper and other filter alternatives found in the literature. This is similarly observed in the first experiment, with basic classification of legitimate and direct attacks, and in the second, which focuses on recognizing obfuscated intrusions. In addition, an analysis of algorithmic parameters, i.e., ensemble size and level of noise, is provided as a guideline for practical use.
Keywords: intrusion detection, adversarial attack, machine learning, feature transformation, ensemble clustering
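A label-free sketch of the ECT-Subspace idea as described in the abstract: an ensemble of k-means clusterings fitted on random feature subspaces, whose one-hot cluster memberships become the transformed feature space. The ensemble size, subspace fraction, and cluster count are assumptions; the noise-injection and combined variants would follow the same pattern.

```python
import numpy as np
from sklearn.cluster import KMeans

def ect_subspace(X, n_members=10, subspace_frac=0.5, n_clusters=8, seed=0):
    """Transform X (n_samples, n_features) into concatenated one-hot memberships
    from an ensemble of subspace k-means clusterings (no class labels needed)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    blocks = []
    for m in range(n_members):
        idx = rng.choice(d, size=max(1, int(subspace_frac * d)), replace=False)
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed + m).fit_predict(X[:, idx])
        blocks.append(np.eye(n_clusters)[labels])     # one-hot membership block
    return np.hstack(blocks)                          # (n_samples, n_members * n_clusters)
```

The transformed features would then be fed to an ordinary classifier in place of the raw TCP-level features.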
20. An Efficient Character-Level Adversarial Attack Inspired by Textual Variations in Online Social Media Platforms
Authors: Jebran Khan, Kashif Ahmad, Kyung-Ah Sohn. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 12, pp. 2869-2894 (26 pages)
In recent years, the growing popularity of social media platforms has led to several interesting natural language processing (NLP) applications. However, these social media-based NLP applications are subject to different types of adversarial attacks due to the vulnerabilities of machine learning (ML) and NLP techniques. This work presents a new low-level adversarial attack recipe inspired by textual variations in online social media communication. These variations are generated to convey the message using out-of-vocabulary words, based on visual and phonetic similarities of characters and words, in the shortest possible form. The intuition of the proposed scheme is to generate adversarial examples influenced by human cognition in text generation on social media platforms, while preserving human robustness in text understanding with the fewest possible perturbations. The intentional textual variations introduced by users in online communication motivate us to replicate such trends in attacking text, to see the effects of such widely used textual variations on deep learning classifiers. In this work, the four most commonly used textual variations are chosen to generate adversarial examples. Moreover, this article introduces a word importance ranking-based beam search algorithm as a searching method for selecting the best possible perturbations. The effectiveness of the proposed adversarial attacks is demonstrated on four benchmark datasets in an extensive experimental setup.
Keywords: adversarial attack, text classification, social media, character-level attack, phonetic similarity, visual similarity, word importance rank, beam search
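A hedged sketch of a word-importance-ranked beam search over character-level rewritings; the variants() generator (returning visually or phonetically similar spellings such as 'great' -> 'gr8'), the beam width, the word budget, and the leave-one-word-out importance score are all assumptions about how such a search could be organized.

```python
def char_variation_attack(text, score_fn, variants, beam_width=3, max_words=3):
    """Attack words in order of importance, keeping the `beam_width` rewritings
    with the lowest target-class score at each step."""
    words = text.split()

    def importance(i):                    # lower score without word i => more important
        return score_fn(" ".join(w for j, w in enumerate(words) if j != i))

    order = sorted(range(len(words)), key=importance)      # most important first
    beam = [words]
    for i in order[:max_words]:
        expanded = [cand[:i] + [v] + cand[i + 1:]
                    for cand in beam for v in variants(cand[i])]
        beam = sorted(beam + expanded,
                      key=lambda cand: score_fn(" ".join(cand)))[:beam_width]
    return " ".join(beam[0])
```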