Funding: Supported by the National Natural Science Foundation of China under Grant No. 61966011.
Abstract: As malicious samples are widely found in many application fields of machine learning, suitable countermeasures have been investigated in the field of adversarial machine learning. Due to the importance and popularity of Support Vector Machines (SVMs), this paper first describes the evasion attack against SVM classification and then proposes a defense strategy. The evasion attack exploits the classification surface of the SVM to iteratively find the minimal perturbations that mislead the nonlinear classifier. Specifically, we propose a vulnerability function to measure the vulnerability of SVM classifiers. Using this vulnerability function, we put forward an effective defense strategy against the evasion attack, based on kernel optimization of SVMs with a Gaussian kernel. Our defense method is verified to be effective on benchmark datasets, and the SVM classifier becomes more robust after applying our kernel optimization scheme.
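To make the iterative evasion idea concrete, the sketch below (ours, not the paper's exact algorithm) trains an RBF-kernel SVM with scikit-learn and walks a correctly classified sample along the analytic gradient of the decision function until it crosses the classification surface; the toy dataset, gamma, and step size are illustrative assumptions.

```python
# Minimal sketch: gradient-guided minimal perturbation against an RBF-SVM.
# Dataset, gamma, and step size are illustrative, not the authors' settings.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
gamma = 2.0
clf = SVC(kernel="rbf", C=1.0, gamma=gamma).fit(X, y)

def decision(x):
    # Signed distance-like score; its sign is the predicted class.
    return clf.decision_function(x.reshape(1, -1))[0]

def decision_grad(x):
    # f(x) = sum_i a_i * exp(-gamma * ||x - sv_i||^2) + b, a_i = dual_coef_
    sv = clf.support_vectors_
    a = clf.dual_coef_.ravel()
    diff = x - sv                                   # (n_sv, d)
    k = np.exp(-gamma * np.sum(diff ** 2, axis=1))  # kernel values
    return (a * k * -2.0 * gamma) @ diff            # gradient of f at x

def evade(x0, step=0.05, max_iter=200):
    """Move x0 toward (and just across) the SVM classification surface."""
    x = x0.copy()
    sign0 = np.sign(decision(x))
    for _ in range(max_iter):
        g = decision_grad(x)
        # Step so as to shrink |f(x)| for the starting class.
        x = x - step * sign0 * g / (np.linalg.norm(g) + 1e-12)
        if np.sign(decision(x)) != sign0:
            return x                                # classifier is now misled
    return x

x_adv = evade(X[0])
print("original label:", clf.predict(X[:1])[0],
      "-> adversarial label:", clf.predict(x_adv.reshape(1, -1))[0])
```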
Funding: Supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government, Ministry of Science and ICT (MSIT) (No. 2017-0-00168, Automatic Deep Malware Analysis Technology for Cyber Threat Intelligence).
Abstract: Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques to identify new threats efficiently, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method uses seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on ambiguities in the PE format, as previously employed in evasion attack research. By applying the perturbation techniques directly to PE binaries, our attack method avoids the problem-space/feature-space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Moreover, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs bypassed detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that uses Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is effective only against AEs included in the classifier's training data.
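As a hedged illustration of the format-level perturbations and of a similarity-based defense, the sketch below appends random overlay bytes to a PE file (the loader ignores bytes past the mapped sections, so executability and behavior are preserved) and then checks how TLSH-close the result stays to known malware; the file paths and distance threshold are placeholders, the py-tlsh package is assumed for the hash/diff calls, and this is not the authors' exact pipeline.

```python
# Sketch of the Overlay Append perturbation and a TLSH similarity check.
# Requires the py-tlsh package (pip install py-tlsh); paths and the
# threshold value below are illustrative placeholders.
import os
import tlsh

def overlay_append(pe_path: str, out_path: str, n_bytes: int = 4096) -> None:
    """Append random bytes after the PE overlay; raw-byte features shift
    while the loaded image, and hence behavior, stays the same."""
    with open(pe_path, "rb") as f:
        data = f.read()
    with open(out_path, "wb") as f:
        f.write(data + os.urandom(n_bytes))

def tlsh_distance(path_a: str, path_b: str) -> int:
    """TLSH distance between two files (smaller means more similar)."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        return tlsh.diff(tlsh.hash(fa.read()), tlsh.hash(fb.read()))

def looks_like_known_malware(sample: str, known_malware: list[str],
                             threshold: int = 60) -> bool:
    """Similarity-based defense idea: flag a sample if it stays TLSH-close
    to previously detected malware despite the appended perturbation."""
    return any(tlsh_distance(sample, m) <= threshold for m in known_malware)

# Illustrative usage (paths are placeholders):
# overlay_append("detected.exe", "evasive.exe")
# print(looks_like_known_malware("evasive.exe", ["detected.exe"]))
```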
Abstract: Deep learning (DL) algorithms have been widely used in various security applications to enhance the performance of decision-based models. Malicious data injected by an attacker can cause several security and privacy problems in the operation of DL models. The two most common active attacks are poisoning and evasion attacks, which can cause various problems, including wrong predictions and misclassification by decision-based models. Therefore, to design an efficient DL model, it is crucial to mitigate these attacks. In this regard, this study proposes a secure neural network (NN) model that provides data security during the model training and testing phases. The main idea is to use cryptographic functions, such as a hash function (SHA-512) and a homomorphic encryption (HE) scheme, to provide authenticity, integrity, and confidentiality of data. The performance of the proposed model is evaluated by experiments in terms of accuracy, precision, attack detection rate (ADR), and computational cost. The results show that the proposed model achieves an accuracy of 98%, a precision of 0.97, and an ADR of 98%, even for a large number of attacks. Hence, the proposed model can be used to detect attacks and thwart the attacker's objectives. The results also show that the computational cost of the proposed model does not increase with model complexity.
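The integrity/authenticity half of that idea can be sketched with the standard library alone: record a SHA-512 digest of each training batch when it is collected and verify it before training, rejecting tampered (poisoned) records. The homomorphic-encryption part is omitted here, and the function names are illustrative rather than the authors' implementation.

```python
# Minimal sketch: SHA-512 digests as an integrity check on training batches.
import hashlib
import hmac
import numpy as np

def digest(batch: np.ndarray) -> str:
    """SHA-512 digest of a training batch's raw bytes."""
    return hashlib.sha512(batch.tobytes()).hexdigest()

def verify(batch: np.ndarray, expected: str) -> bool:
    """Reject the batch if it was modified after the digest was recorded."""
    return hmac.compare_digest(digest(batch), expected)

# Illustrative usage: record digests at collection time, check before training.
clean = np.random.rand(32, 10).astype(np.float32)
tag = digest(clean)
poisoned = clean.copy()
poisoned[0, 0] += 1.0                      # an attacker flips one value
print(verify(clean, tag), verify(poisoned, tag))   # True False
```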
Funding: This work was supported in part by the National Key R&D Program of China under Grant 2020YFB2103802, in part by the National Natural Science Foundation of China under Grant U21A20463, and in part by the Fundamental Research Funds for the Central Universities of China under Grant KKJB320001536.
Abstract: Empirical attacks on Federated Learning (FL) systems indicate that FL is fraught with numerous attack surfaces throughout its execution. These attacks can not only cause models to fail at specific tasks but also infer private information. While previous surveys have identified the risks, listed the attack methods available in the literature, or provided a basic taxonomy to classify them, they mainly focused on the risks in the training phase of FL. In this work, we survey the threats, attacks, and defenses on FL throughout the whole FL process in three phases: the Data and Behavior Auditing Phase, the Training Phase, and the Predicting Phase. We further provide a comprehensive analysis of these threats, attacks, and defenses, and summarize their issues and taxonomy. Our work considers the security and privacy of FL from the viewpoint of the FL execution process. We highlight that establishing trusted FL requires adequate measures to mitigate security and privacy threats at each phase. Finally, we discuss the limitations of current attack and defense approaches and provide an outlook on promising future research directions in FL.
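For readers unfamiliar with where training-phase attacks enter, the small sketch below (ours, not from the survey) shows FedAvg-style aggregation: the server only sees client updates, so a single malicious client can dominate the global update with a scaled, poisoned contribution; the numbers are illustrative.

```python
# Illustrative sketch of the FedAvg training-phase attack surface.
import numpy as np

def fedavg(updates: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Weighted average of client model updates."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

honest = [np.random.normal(0.0, 0.01, size=4) for _ in range(9)]
poisoned = 50.0 * np.ones(4)                 # one client scales its update
global_update = fedavg(honest + [poisoned], [1.0] * 10)
print(global_update)                         # dominated by the malicious client
```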