Funding: Supported by the National Natural Science Foundation of China (No. 62176059), the Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01), Zhangjiang Lab, and the Shanghai Center for Brain Science and Brain-inspired Technology.
Abstract: As one of the most fundamental topics in reinforcement learning (RL), sample efficiency is essential to the deployment of deep RL algorithms. Unlike most existing exploration methods, which sample an action from different types of posterior distributions, we focus on the policy sampling process and propose an efficient selective sampling approach that improves sample efficiency by modeling the internal hierarchy of the environment. Specifically, we first employ clustering methods in the policy sampling process to generate an action candidate set. We then introduce a clustering buffer for modeling the internal hierarchy, which consists of on-policy data, off-policy data, and expert data, to evaluate actions from the clusters in the candidate set during the exploration stage. In this way, our approach can take advantage of the supervision information in the expert demonstration data. Experiments on six continuous locomotion environments demonstrate superior reinforcement learning performance and faster convergence of selective sampling. In particular, on the LGSVL task, our method reduces the number of convergence steps by 46.7% and the convergence time by 28.5%. Furthermore, our code is open source for reproducibility and is available at https://github.com/Shihwin/SelectiveSampling.
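To make the selective sampling idea more concrete, the sketch below illustrates the two steps described in the abstract: a batch of candidate actions is drawn from the current policy, grouped by clustering to form the action candidate set, and the cluster representatives are scored by an evaluator before the best one is executed. This is a minimal illustration only, assuming a Gaussian policy and a plain Q-function as the evaluator; the function and parameter names (select_action, n_candidates, n_clusters) are placeholders and are not taken from the released code, where the evaluation would instead draw on the clustering buffer of on-policy, off-policy, and expert data.

```python
# Minimal, illustrative sketch of clustering-based selective action sampling.
# The policy and Q-function are stand-in stubs; hyperparameters are assumptions,
# not values from the paper or its repository.
import numpy as np
from sklearn.cluster import KMeans


def sample_candidates(policy_mean, policy_std, n_candidates=64):
    """Draw a batch of candidate actions from a Gaussian policy."""
    return np.random.normal(policy_mean, policy_std,
                            size=(n_candidates, policy_mean.shape[0]))


def select_action(state, policy, q_function, n_candidates=64, n_clusters=8):
    """Cluster candidate actions and return the best cluster representative.

    1. Sample candidate actions from the current policy.
    2. Group them with k-means to form the action candidate set.
    3. Score each cluster center with the evaluator and keep the best one.
    """
    mean, std = policy(state)
    candidates = sample_candidates(mean, std, n_candidates)
    centers = KMeans(n_clusters=n_clusters, n_init=10).fit(candidates).cluster_centers_
    scores = np.array([q_function(state, a) for a in centers])
    return centers[np.argmax(scores)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    state = rng.normal(size=4)
    # Stub policy: state-independent Gaussian over a 2-D action space.
    policy = lambda s: (np.zeros(2), np.ones(2))
    # Stub evaluator: prefers actions close to a fixed "expert-like" action.
    q_function = lambda s, a: -np.linalg.norm(a - np.array([0.5, -0.5]))
    print("selected action:", select_action(state, policy, q_function))
```

In a sketch like this, the number of clusters trades off exploration breadth against evaluation cost, and k-means could be swapped for any other clustering method without changing the overall selection loop.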