Funding: Supported by the National Natural Science Foundation of China (Nos. 61972025, 61802389, 61672092, U1811264, and 61966009) and the National Key R&D Program of China (Nos. 2020YFB1005604 and 2020YFB2103802).
Abstract: Reinforcement learning (RL), one of the three branches of machine learning, aims at autonomous learning and is now a major driver of progress in artificial intelligence, especially in autonomous distributed systems such as cooperating Boston Dynamics robots. However, robust RL remains a challenging reliability problem because of the gap between laboratory simulation and the real world. Existing efforts approach this problem by, for example, applying random environmental perturbations during learning. However, there is no guarantee that a random perturbation is beneficial: harmful ones can make RL training fail. In this work, we treat robust RL as a multi-task RL problem and propose a curricular robust RL approach. We first present a generative adversarial network (GAN) based task generation model that iteratively outputs new tasks at the appropriate level of difficulty for the current policy. With these progressively harder tasks, we realize curriculum learning and ultimately obtain a robust policy. Extensive experiments in multiple environments demonstrate that our method improves training stability and is robust to differences between training and test conditions.
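The core curricular idea in this abstract, generating tasks at the appropriate level of difficulty for the current policy, can be illustrated without the GAN machinery. The sketch below is purely hypothetical: the function `next_task`, the `target` success rate, and the fixed `step` size are illustrative stand-ins for the paper's learned GAN task generator, which the paper does not specify at this level of detail.

```python
def next_task(success_rate, difficulty, step=0.1, target=0.7):
    """Pick the next task difficulty for the current policy.

    Hypothetical curriculum rule: harden the task when the policy
    succeeds often, ease off when it fails, so training stays near an
    appropriate difficulty (the paper learns this with a GAN instead).
    """
    if success_rate > target:
        difficulty += step                        # policy is comfortable: harder task
    else:
        difficulty = max(0.0, difficulty - step)  # too hard: back off
    return difficulty

d = 0.0
for success in [0.9, 0.8, 0.5, 0.9]:  # mock evaluation results per round
    d = next_task(success, d)
```

A GAN-based generator replaces this hand-tuned rule with a model that learns which environment perturbations sit at the frontier of the policy's competence.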
Funding: This research was partially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA06030200), the National Natural Science Foundation of China (Grant Nos. M1552006, 61403369, 61272427, and 61363030), the Xinjiang Uygur Autonomous Region Science and Technology Project (201230123), the Beijing Key Lab of Intelligent Telecommunication Software and Multimedia (ITSM201502), and the Guangxi Key Laboratory of Trusted Software (kx201418).
Abstract: Learning from imbalanced data is a challenging task in a wide range of applications and attracts significant research effort from the machine learning and data mining communities. As a natural approach to this issue, oversampling balances the training samples by replicating existing samples or synthesizing new ones. In general, synthesis outperforms replication because it supplies additional information about the minority class. However, the additional information must follow the same normal distribution as the training set, which further constrains the new samples to the predefined range of the training set. In this paper, we present the Wiener process oversampling (WPO) technique, which brings a physical phenomenon into sample synthesis. WPO constructs a robust decision region by expanding the attribute ranges of the training set while preserving the same normal distribution. WPO achieves satisfactory performance at much lower computational complexity. In addition, by integrating WPO with ensemble learning, the resulting WPOBoost algorithm outperforms many prevalent imbalanced-learning solutions.
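The Wiener-process intuition above, expanding attribute ranges while keeping perturbations Gaussian, can be sketched in a few lines. This is a minimal, hypothetical illustration only: the function name, the fixed `horizon`, and the step count are assumptions, and the paper's actual WPO algorithm and its WPOBoost ensemble integration are not reproduced here.

```python
import random

def wiener_oversample(minority, n_new, horizon=1.0, steps=10, seed=0):
    """Synthesize n_new minority-class samples via Brownian paths.

    Each synthetic sample starts at a real minority instance and takes
    `steps` independent N(0, dt) increments per attribute, so attribute
    ranges widen while the perturbation stays Gaussian.
    """
    rng = random.Random(seed)
    dt = horizon / steps
    synthetic = []
    for _ in range(n_new):
        x = list(rng.choice(minority))                        # start at a real sample
        for _ in range(steps):                                # simulate W(t)
            x = [xi + rng.gauss(0.0, dt ** 0.5) for xi in x]  # N(0, dt) increment
        synthetic.append(x)
    return synthetic

minority = [[0.10, 0.20], [0.15, 0.25], [0.20, 0.10]]
new_samples = wiener_oversample(minority, n_new=5)
```

Compared with replication-style oversampling, the Brownian increments let synthetic samples drift slightly outside the convex hull of the observed minority points, which is the "robust decision region" the abstract refers to.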
Funding: This research is supported by the National Natural Science Foundation of China (No. 61672092), the Science and Technology on Information Assurance Laboratory (No. 614200103011711), the Project (No. BMK2017B02-2), the Beijing Excellent Talent Training Project, the Fundamental Research Funds for the Central Universities (No. 2017RC016), the Foundation of the China Scholarship Council, and the Fundamental Research Funds for the Central Universities of China under Grant 2018JBZ103.
Abstract: Reinforcement learning (RL) is a core technology of modern artificial intelligence and has become a workhorse for AI applications ranging from Atari games to Connected and Automated Vehicle (CAV) systems. A reliable RL system is therefore the foundation of security-critical AI applications, and its reliability has attracted more concern than ever. However, recent studies show that adversarial attacks are also effective against neural network policies in reinforcement learning, which has inspired innovative research in this direction. Hence, in this paper, we make the first attempt at a comprehensive survey of adversarial attacks on reinforcement learning from the perspective of AI security. We also briefly introduce the most representative defense technologies against existing adversarial attacks.
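The attack family this survey covers can be made concrete with a toy example. Observation-space attacks on RL policies are commonly built on the fast gradient sign method (FGSM); the sketch below applies an FGSM-style perturbation to a linear scoring policy, where the gradient with respect to the observation is simply the weight vector. The linear policy and all names here are illustrative assumptions, not taken from the surveyed papers.

```python
def fgsm_observation_attack(weights, obs, eps=0.1):
    """FGSM-style attack on a linear scoring policy.

    For score(obs) = sum(w_i * obs_i), the gradient w.r.t. the
    observation is `weights`, so shifting each feature by eps against
    the sign of its weight lowers the score of the preferred action.
    """
    def sign(w):
        return 1 if w > 0 else -1 if w < 0 else 0
    return [o - eps * sign(w) for w, o in zip(weights, obs)]

weights = [0.5, -1.0, 0.0]
clean_obs = [1.0, 2.0, 3.0]
adv_obs = fgsm_observation_attack(weights, clean_obs, eps=0.1)
# adv_obs == [0.9, 2.1, 3.0]: every feature nudged to reduce the score
```

For a deep policy the gradient is obtained by backpropagation rather than read off the weights, but the bounded, sign-based perturbation is the same, which is why small observation noise can flip a trained policy's action choice.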