Model gradient: unified model and policy learning in model-based reinforcement learning
Authors: Chengxing JIA, Fuxiang ZHANG, Tian XU, Jing-Cheng PANG, Zongzhang ZHANG, Yang YU. Frontiers of Computer Science (SCIE, EI, CSCD), 2024, Issue 4, pp. 117-128.
Abstract: Model-based reinforcement learning is a promising direction for improving the sample efficiency of reinforcement learning by learning a model of the environment. Previous model learning methods aim at fitting the transition data, commonly employing a supervised learning approach that minimizes the distance between the predicted state and the real state. Such supervised model learning, however, diverges from the ultimate goal of model learning: optimizing the policy learned in the model. In this work, we investigate how model learning and policy learning can share the same objective of maximizing the expected return in the real environment. We find that model learning towards this objective yields a target of enhancing the similarity between the gradient on generated data and the gradient on the real data. We thus derive the gradient of the model from this target and propose the Model Gradient algorithm (MG) to integrate this novel model learning approach with policy-gradient-based policy optimization. We conduct experiments on multiple locomotion control tasks and find that MG not only achieves high sample efficiency but also leads to better convergence performance than traditional model-based reinforcement learning approaches.
Keywords: reinforcement learning; model-based reinforcement learning; Markov decision process
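The abstract contrasts conventional supervised model fitting with training the model so that policy gradients on generated data resemble those on real data. Below is a minimal PyTorch sketch of that contrast, not the authors' implementation: it assumes a one-step deterministic dynamics model, a REINFORCE-style gradient estimator, and a known reward function, whereas the paper's derivation uses multi-step model rollouts. All names here (Dynamics, GaussianPolicy, policy_grad, gradient_matching_loss, reward_fn) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Dynamics(nn.Module):
    """One-step deterministic dynamics model: (s, a) -> predicted next state."""
    def __init__(self, s_dim, a_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(s_dim + a_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, s_dim),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class GaussianPolicy(nn.Module):
    """Mean network of a unit-variance Gaussian policy."""
    def __init__(self, s_dim, a_dim, hidden=64):
        super().__init__()
        self.mu = nn.Sequential(
            nn.Linear(s_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, a_dim),
        )

    def log_prob(self, s, a):
        # Log-density of a unit-variance Gaussian, up to an additive constant.
        return -0.5 * ((a - self.mu(s)) ** 2).sum(dim=-1)

def policy_grad(policy, s, a, ret, create_graph=False):
    """REINFORCE estimate of the policy gradient on a batch of transitions."""
    surrogate = -(policy.log_prob(s, a) * ret).mean()
    return torch.autograd.grad(surrogate, list(policy.parameters()),
                               create_graph=create_graph)

def supervised_model_loss(model, s, a, s_next):
    # Conventional model learning: fit the transition data by minimizing
    # the distance between predicted and real next states.
    return ((model(s, a) - s_next) ** 2).mean()

def gradient_matching_loss(model, policy, s, a, ret, reward_fn):
    # MG-style objective (sketched): train the model so that the policy
    # gradient estimated on model-generated data is similar to the policy
    # gradient estimated on real data.
    g_real = policy_grad(policy, s, a, ret)          # treated as constants
    ret_gen = reward_fn(model(s, a))                 # return proxy from generated states
    g_gen = policy_grad(policy, s, a, ret_gen, create_graph=True)
    # Squared L2 distance between the two gradient estimates; minimizing it
    # w.r.t. the model parameters pulls the generated-data gradient toward
    # the real-data gradient.
    return sum(((gr.detach() - gg) ** 2).sum() for gr, gg in zip(g_real, g_gen))
```

In a training loop one would alternate ordinary policy-gradient updates with model updates that step on gradient_matching_loss rather than (or alongside) supervised_model_loss; the paper's exact objective, rollout horizon, and update schedule differ in detail.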