Nowadays, large numbers of smart sensors (e.g., road-side cameras) that communicate with nearby base stations can launch distributed denial-of-service (DDoS) attack storms in intelligent transportation systems. DDoS attacks disable the services provided by base stations. Thus, in this paper, considering uneven communication traffic flows and privacy preservation, we present a hidden Markov model (HMM)-based prediction model that exploits the multi-step characteristic of DDoS attacks within a federated learning framework to predict whether DDoS attacks will occur on base stations in the future. However, in federated learning we must consider poisoning attacks mounted by malicious participants; without security protection, such poisoning attacks can paralyze the intelligent transportation system. Traditional poisoning attacks mainly target classification models trained on labeled data. In this paper, we propose a reinforcement learning-based poisoning method specifically designed to poison the prediction model with unlabeled data. Moreover, previous defense strategies rely on a labeled validation dataset held by the server. This is unrealistic, since local training datasets are not uploaded to the server for privacy reasons, and our datasets are also unlabeled. We therefore give a validation-dataset-free defense strategy based on Dempster-Shafer (D-S) evidence theory that avoids anomalous aggregation and yields a robust global model for precise DDoS prediction. In our experiments, we simulate 3000 points in combination with the DARPA 2000 dataset to carry out the evaluation. The results indicate that our poisoning method can successfully poison the global prediction model with unlabeled data in a short time. Meanwhile, we compare our proposed defense algorithm with three popular defense algorithms. The results show that our defense method achieves a high accuracy in excluding poisoners and obtains a high attack prediction probability.
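To make the prediction component concrete, the following is a minimal sketch of one-step-ahead DDoS-stage prediction using the forward algorithm of a hidden Markov model, reflecting the multi-step character of DDoS described above. The stage names, observation alphabet, and all transition/emission/initial probabilities are illustrative placeholders, not parameters reported in the paper.

```python
import numpy as np

# Hypothetical multi-step DDoS progression: benign -> probe -> compromise -> flood.
# All matrices below are illustrative assumptions, not values from the paper.
states = ["benign", "probe", "compromise", "flood"]
obs_symbols = ["normal", "scan_alert", "exploit_alert", "traffic_spike"]

A = np.array([  # state transition probabilities
    [0.90, 0.10, 0.00, 0.00],
    [0.20, 0.50, 0.30, 0.00],
    [0.10, 0.10, 0.50, 0.30],
    [0.05, 0.05, 0.10, 0.80],
])
B = np.array([  # emission probabilities P(observation | state)
    [0.85, 0.10, 0.03, 0.02],
    [0.30, 0.60, 0.05, 0.05],
    [0.20, 0.20, 0.55, 0.05],
    [0.10, 0.10, 0.10, 0.70],
])
pi = np.array([0.70, 0.20, 0.07, 0.03])  # initial state distribution


def predict_attack_probability(obs_seq):
    """Forward algorithm: maintain a belief over the current hidden stage,
    then propagate one step ahead to score the 'flood' (DDoS) stage."""
    alpha = pi * B[:, obs_seq[0]]
    alpha /= alpha.sum()
    for o in obs_seq[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()          # normalize to keep a valid belief
    next_state = alpha @ A            # one-step-ahead prediction
    return next_state[states.index("flood")]


# Example: a base station observing scan alerts followed by an exploit alert.
seq = [obs_symbols.index(s) for s in
       ["normal", "scan_alert", "scan_alert", "exploit_alert"]]
print(f"P(DDoS at next step) = {predict_attack_probability(seq):.3f}")
```

In the federated setting each base station would fit such local parameters on its own unlabeled traffic and only share model updates with the server.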
Funding: Supported by the National Key Research and Development Project (2018YFB2100801), in part by the National Natural Science Foundation of China (61972080), in part by the Shanghai Rising-Star Program (19QA1400300), and in part by the Open Research Project from the Key Laboratory of the Ministry of Education for Embedded System and Service Computing (ESSCKF2021-01).
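The validation-dataset-free defense can likewise be illustrated. Below is a minimal sketch, assuming a simple frame of discernment {benign, malicious}: each client update yields evidence masses from two illustrative sources (cosine similarity to the coordinate-wise median update, and relative update magnitude), the masses are fused with Dempster's rule of combination, and clients whose combined belief of being malicious exceeds a threshold are excluded before FedAvg-style averaging. The evidence construction, the median reference, and the 0.5 threshold are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def ds_combine(m1, m2):
    """Dempster's rule of combination on the frame {benign, malicious}.
    Each mass function is a tuple (m_benign, m_malicious, m_uncertain)."""
    b1, a1, u1 = m1
    b2, a2, u2 = m2
    conflict = b1 * a2 + a1 * b2          # mass assigned to the empty set
    norm = 1.0 - conflict
    if norm <= 1e-12:
        return (0.0, 0.0, 1.0)            # total conflict: stay fully uncertain
    b = (b1 * b2 + b1 * u2 + u1 * b2) / norm
    a = (a1 * a2 + a1 * u2 + u1 * a2) / norm
    u = (u1 * u2) / norm
    return (b, a, u)


def evidence_from_similarity(update, reference):
    """Map cosine similarity between a client update and the median update
    to a mass function (illustrative assumption)."""
    cos = float(update @ reference /
                (np.linalg.norm(update) * np.linalg.norm(reference) + 1e-12))
    score = (cos + 1.0) / 2.0             # map [-1, 1] -> [0, 1]
    return (0.8 * score, 0.8 * (1.0 - score), 0.2)


def robust_aggregate(client_updates, threshold=0.5):
    """Exclude clients whose combined belief of being malicious is high,
    then average the remaining updates (FedAvg-style), with no validation set."""
    updates = np.stack(client_updates)
    reference = np.median(updates, axis=0)
    kept = []
    for w in updates:
        m_sim = evidence_from_similarity(w, reference)
        mag = np.linalg.norm(w) / (np.linalg.norm(reference) + 1e-12)
        mag_score = 1.0 / (1.0 + abs(np.log(mag + 1e-12)))
        m_mag = (0.8 * mag_score, 0.8 * (1.0 - mag_score), 0.2)
        _, malicious, _ = ds_combine(m_sim, m_mag)
        if malicious < threshold:
            kept.append(w)
    return np.mean(kept, axis=0) if kept else reference
```

In practice the server would run such an aggregation each federated round; because the evidence is derived only from the submitted updates themselves, no labeled validation dataset is required on the server side.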