Funding: supported by Grant Nos. U22A2036, HIT.OCEF.2021007, 2020YFB1406902, 2020B0101360001.
Abstract: Intrusion detection systems increasingly rely on machine learning. While machine learning has shown excellent performance in identifying malicious traffic, it may increase the risk of privacy leakage. This paper focuses on implementing a model stealing attack against intrusion detection systems. Existing model stealing attacks are hard to mount in practical network environments, as they require either private data from the victim dataset or frequent access to the victim model. In this paper, we propose a novel solution, the Fast Model Stealing Attack (FMSA), to address this problem, and we highlight the risks of deploying ML-NIDS in network security. First, a meta-learning framework is introduced into the model stealing algorithm to clone the victim model in a black-box setting. Then, the number of accesses to the target model is used as an optimization term, so that model stealing succeeds with minimal queries. Finally, adversarial training is used to simulate the data distribution of the target model and to recover private data. In experiments on multiple public datasets, compared with existing state-of-the-art algorithms, FMSA reduces the number of accesses to the target model while raising the clone model's accuracy on the test dataset to 88.9% and its similarity to the target model to 90.1%. We demonstrate that model stealing attacks can be executed successfully against an ML-NIDS system even when protective measures limit the number of anomalous queries.
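The core loop the abstract describes (query the black-box victim under a fixed budget, train a clone on its outputs, then measure agreement with the victim) can be sketched in a deliberately simplified form. The toy example below is an illustration only, not the FMSA algorithm: it replaces the meta-learning and adversarial-training components with random synthetic queries and plain logistic regression, and the victim model, feature dimension, and budget are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box victim NIDS: a hidden linear classifier over
# 10 traffic features. The attacker never sees W_true or b_true,
# only the hard labels the oracle returns.
W_true = rng.normal(size=10)
b_true = 0.3

def victim_query(X):
    """Black-box oracle: returns only predicted labels (0.0 / 1.0)."""
    return (X @ W_true + b_true > 0).astype(float)

def sigmoid(z):
    # Clip to keep np.exp from overflowing on extreme logits.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Attacker side: spend a fixed query budget on synthetic samples,
# then fit a clone by logistic regression on the victim's labels
# (a crude stand-in for FMSA's meta-learned clone).
QUERY_BUDGET = 500
X_syn = rng.normal(size=(QUERY_BUDGET, 10))  # synthetic "traffic"
y_vic = victim_query(X_syn)                  # consumes the budget

w, b = np.zeros(10), 0.0
for _ in range(2000):  # plain gradient descent on cross-entropy
    p = sigmoid(X_syn @ w + b)
    w -= 0.5 * (X_syn.T @ (p - y_vic) / QUERY_BUDGET)
    b -= 0.5 * np.mean(p - y_vic)

# Fidelity: label agreement between clone and victim on fresh traffic,
# the analogue of the similarity metric reported in the abstract.
X_test = rng.normal(size=(5000, 10))
clone_pred = sigmoid(X_test @ w + b) > 0.5
agree = np.mean(clone_pred == (victim_query(X_test) > 0.5))
print(f"clone-victim agreement: {agree:.3f}")
```

Even this naive extractor reaches high agreement on a linearly separable victim with a few hundred queries; the point of FMSA is to retain such fidelity on realistic NIDS models while driving the query count low enough to evade anomalous-query limits.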