Journal Articles
4 articles found
1. Kernel-based adversarial attacks and defenses on support vector classification (Cited by: 1)
Authors: Wanman Li, Xiaozhang Liu, Anli Yan, Jie Yang. Digital Communications and Networks (SCIE, CSCD), 2022, No. 4, pp. 492-497 (6 pages)
While malicious samples are widely found in many application fields of machine learning, suitable countermeasures have been investigated in the field of adversarial machine learning. Due to the importance and popularity of Support Vector Machines (SVMs), we first describe the evasion attack against SVM classification and then propose a defense strategy in this paper. The evasion attack exploits the classification surface of the SVM to iteratively find the minimal perturbations that mislead the nonlinear classifier. Specifically, we propose what we call a vulnerability function to measure the vulnerability of SVM classifiers. Using this vulnerability function, we put forward an effective defense strategy against the evasion attack based on kernel optimization of SVMs with a Gaussian kernel. Our defense method is verified to be very effective on benchmark datasets, and the SVM classifier becomes more robust after applying our kernel optimization scheme.
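The iterative evasion procedure summarized in this abstract can be sketched in a few lines. The support vectors, dual coefficients, kernel width, step size, and starting point below are illustrative assumptions, not values from the paper:

```python
import math

# Toy "trained" RBF-SVM: support vectors, dual coefficients (alpha_i * y_i),
# bias, and kernel width are all made up for illustration.
SUPPORT_VECTORS = [(-1.0, -1.0), (1.0, 1.0)]
DUAL_COEF = [-1.0, 1.0]   # alpha_i * y_i for each support vector
BIAS = 0.0
GAMMA = 0.5               # Gaussian kernel width

def rbf(a, b):
    d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-GAMMA * d2)

def decision(x):
    """SVM decision function: sum_i alpha_i y_i K(sv_i, x) + b."""
    return sum(c * rbf(sv, x) for c, sv in zip(DUAL_COEF, SUPPORT_VECTORS)) + BIAS

def decision_grad(x):
    # d/dx of c * exp(-gamma * ||sv - x||^2) = c * K(sv, x) * 2 * gamma * (sv - x)
    g = [0.0] * len(x)
    for c, sv in zip(DUAL_COEF, SUPPORT_VECTORS):
        k = rbf(sv, x)
        for j in range(len(x)):
            g[j] += c * k * 2.0 * GAMMA * (sv[j] - x[j])
    return g

def evade(x, step=0.05, max_iter=500):
    """Walk x in small normalized gradient steps until its label flips."""
    target = -1 if decision(x) > 0 else 1
    x = list(x)
    for _ in range(max_iter):
        if (decision(x) > 0) == (target > 0):
            return x  # classified as the target label: attack succeeded
        g = decision_grad(x)
        norm = math.hypot(*g) or 1.0
        sign = 1.0 if target > 0 else -1.0
        x = [xi + sign * step * gi / norm for xi, gi in zip(x, g)]
    return x

adv = evade((0.8, 0.8))   # start from a point the toy SVM classifies as positive
```

Starting from a positively classified point, `evade` moves against the decision-function gradient and stops as soon as the sign of the Gaussian-kernel SVM flips, approximating a minimal perturbation in the spirit of the attack described above.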
Keywords: Adversarial machine learning; Support vector machines; Evasion attack; Vulnerability function; Kernel optimization
2. An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection
Authors: Younghoon Ban, Myeonghyun Kim, Haehyun Cho. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 3535-3563 (29 pages)
Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on the ambiguities present in the PE format, as previously employed in evasion attack research. By applying the perturbation techniques directly to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Moreover, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs bypassed detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments on the approach, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is only effective against the AEs included in training the classifiers.
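The Overlay Append perturbation named in this abstract can be illustrated with a minimal sketch. The stand-in bytes below are not a real PE binary and the helper name is an assumption; the point is only that data appended after the mapped image changes the file's byte-level features without touching what the loader executes:

```python
def overlay_append(pe_bytes: bytes, overlay: bytes) -> bytes:
    """Append inert overlay data after the end of a PE image.

    The Windows loader maps only the headers and declared sections, so
    trailing bytes do not affect execution, but they do change the
    features (and hashes) a static classifier sees.
    """
    return pe_bytes + overlay

original = b"MZ" + bytes(1022)            # stand-in bytes, not a real PE file
perturbed = overlay_append(original, b"\x90" * 256)
```

Because the original content is preserved unchanged as a prefix, the perturbed file keeps its executability and behavior, which is why such AEs need no functionality verification.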
Keywords: Malware classification; Machine learning; Adversarial examples; Evasion attack; Cybersecurity
3. Cryptographic Based Secure Model on Dataset for Deep Learning Algorithms
Authors: Muhammad Tayyab, Mohsen Marjani, N.Z. Jhanjhi, Ibrahim Abaker Targio Hashim, Abdulwahab Ali Almazroi, Abdulaleem Ali Almazroi. Computers, Materials & Continua (SCIE, EI), 2021, No. 10, pp. 1183-1200 (18 pages)
Deep learning (DL) algorithms have been widely used in various security applications to enhance the performance of decision-based models. Malicious data added by an attacker can cause several security and privacy problems in the operation of DL models. The two most common active attacks are poisoning and evasion attacks, which can cause various problems, including wrong predictions and misclassification by decision-based models. Therefore, to design an efficient DL model, it is crucial to mitigate these attacks. In this regard, this study proposes a secure neural network (NN) model that provides data security during the model training and testing phases. The main idea is to use cryptographic functions, such as a hash function (SHA-512) and a homomorphic encryption (HE) scheme, to provide authenticity, integrity, and confidentiality of data. The performance of the proposed model is evaluated by experiments based on accuracy, precision, attack detection rate (ADR), and computational cost. The results show that the proposed model achieves an accuracy of 98%, a precision of 0.97, and an ADR of 98%, even for a large number of attacks. Hence, the proposed model can be used to detect attacks and mitigate the attacker's motives. The results also show that the computational cost of the proposed model does not increase with model complexity.
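The integrity half of this idea can be sketched with Python's standard `hashlib`: record a SHA-512 digest when a training batch is ingested, and reject any batch whose current digest no longer matches. The batch contents below are made up, and the homomorphic-encryption half is omitted since it requires a dedicated library:

```python
import hashlib
import hmac

def digest(batch: bytes) -> str:
    """SHA-512 fingerprint recorded when a training batch is ingested."""
    return hashlib.sha512(batch).hexdigest()

def verify(batch: bytes, recorded: str) -> bool:
    """Reject batches whose digest no longer matches the recorded one."""
    return hmac.compare_digest(digest(batch), recorded)

clean = b"label=1;pixels=..."                        # hypothetical batch record
fingerprint = digest(clean)
tampered = clean.replace(b"label=1", b"label=0")     # a poisoning-style label flip
```

Any bit flipped by a poisoning attempt changes the SHA-512 digest, so tampered batches are detected before they reach training; `hmac.compare_digest` is used for a constant-time comparison.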
Keywords: Deep learning (DL); Poisoning attacks; Evasion attacks; Neural network; Hash functions; SHA-512; Homomorphic encryption scheme
4. Threats, attacks and defenses to federated learning: issues, taxonomy and perspectives (Cited by: 4)
Authors: Pengrui Liu, Xiangrui Xu, Wei Wang. Cybersecurity (EI, CSCD), 2022, No. 2, pp. 56-74 (19 pages)
Empirical attacks on Federated Learning (FL) systems indicate that FL is fraught with numerous attack surfaces throughout its execution. These attacks can not only cause models to fail at specific tasks but also infer private information. While previous surveys have identified the risks, listed the attack methods available in the literature, or provided a basic taxonomy to classify them, they mainly focused on the risks in the training phase of FL. In this work, we survey the threats, attacks, and defenses to FL throughout the whole FL process in three phases: the data and behavior auditing phase, the training phase, and the predicting phase. We further provide a comprehensive analysis of these threats, attacks, and defenses, and summarize their issues and taxonomy. Our work considers the security and privacy of FL from the viewpoint of its execution process. We highlight that establishing trusted FL requires adequate measures to mitigate security and privacy threats at each phase. Finally, we discuss the limitations of current attack and defense approaches and provide an outlook on promising future research directions in FL.
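As a concrete instance of the training-phase defenses such surveys catalog, one common mitigation against poisoned client updates is norm clipping before aggregation. This is a generic FedAvg sketch with invented update values, not a method from the paper:

```python
def fedavg(updates, clip=1.0):
    """Average client updates, clipping each update's L2 norm so a single
    poisoned client cannot dominate the aggregate (a common
    training-phase defense in federated learning)."""
    clipped = []
    for u in updates:
        norm = sum(x * x for x in u) ** 0.5
        scale = min(1.0, clip / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in u])
    n = len(clipped)
    return [sum(col) / n for col in zip(*clipped)]

honest = [[0.1, -0.2], [0.12, -0.18]]    # well-behaved client updates
poisoned = [[50.0, 50.0]]                # an out-of-distribution update
agg = fedavg(honest + poisoned)
```

After clipping, the poisoned update is scaled down to the same norm budget as honest ones, so the aggregate stays close to the honest average instead of being dragged away by the attacker.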
Keywords: Federated learning; Security and privacy threats; Multi-phases; Inference attacks; Poisoning attacks; Evasion attacks; Defenses; Trusted