
Actor-critic algorithm based on Gaussian process
Cited by: 1
Abstract: Balancing exploration and exploitation in large-scale or continuous state spaces is a difficult problem in reinforcement learning. To address it, this paper proposes a new actor-critic (AC) algorithm that combines function approximation with Gaussian processes. In the actor, the temporal-difference (TD) error is used to construct the update rule for the policy parameters. In the critic, a Gaussian process models the linear parametric value function and, combined with a generative model, the posterior distribution of the value function is obtained by Bayesian inference. The algorithm was applied to the pole-balancing experiment; the results show that it converges quickly, effectively balances exploration and exploitation in large-scale or continuous spaces, and achieves good overall performance.
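The abstract only outlines the update scheme, so the following is a minimal sketch of how such a loop might look, assuming a Gaussian policy with fixed unit variance, a linear value function V(s) = w·phi(s) whose weights receive a Gaussian posterior via a Kalman-style Bayesian linear-regression update (a simplification of the Gaussian-process treatment described in the paper), and a hypothetical environment interface env.reset()/env.step(); it is not the authors' exact algorithm.

import numpy as np

def bayesian_critic_update(mean, cov, phi_s, phi_s_next, reward, gamma, done, noise_var):
    # Assumed observation model: r + gamma*V(s') = w . phi(s) + noise, noise ~ N(0, noise_var),
    # which gives a closed-form (Kalman-style) Gaussian posterior update for the weights w.
    v_next = 0.0 if done else mean @ phi_s_next
    target = reward + gamma * v_next                          # bootstrapped TD target
    gain = cov @ phi_s / (phi_s @ cov @ phi_s + noise_var)    # posterior gain vector
    new_mean = mean + gain * (target - mean @ phi_s)          # posterior mean of w
    new_cov = cov - np.outer(gain, phi_s @ cov)               # posterior covariance of w
    return new_mean, new_cov

def actor_critic_episode(env, phi, theta, mean, cov, alpha=0.01, gamma=0.99, noise_var=0.1):
    # One episode of a Gaussian-policy actor-critic with a Bayesian linear critic.
    # env.reset(), env.step(a) -> (s_next, reward, done) and the feature map phi
    # are assumed interfaces, not part of the original paper.
    s = env.reset()
    done = False
    while not done:
        mu = theta @ phi(s)                      # Gaussian policy mean
        a = np.random.normal(mu, 1.0)            # sample action (fixed unit variance)
        s_next, r, done = env.step(a)

        # Critic: Bayesian update of the value-function weight posterior.
        mean, cov = bayesian_critic_update(mean, cov, phi(s), phi(s_next),
                                           r, gamma, done, noise_var)

        # Actor: policy-gradient step scaled by the TD error under the posterior mean.
        v_next = 0.0 if done else mean @ phi(s_next)
        td_error = r + gamma * v_next - mean @ phi(s)
        grad_log_pi = (a - mu) * phi(s)          # d/dtheta of log N(a; theta.phi(s), 1)
        theta = theta + alpha * td_error * grad_log_pi
        s = s_next
    return theta, mean, cov

In this sketch the TD error computed from the critic's posterior mean serves double duty: it is the regression residual for the Bayesian critic update and the scaling factor for the policy-gradient step in the actor, which mirrors the structure the abstract describes. Initializing cov to a broad prior (e.g. np.eye(d) * 10.0) is one way the Bayesian treatment can feed into the exploration/exploitation balance the paper emphasizes.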
Source: Application Research of Computers (《计算机应用研究》, CSCD, Peking University Core Journal), 2016, Issue 6, pp. 1670-1675 (6 pages).
Funding: National Natural Science Foundation of China (61103045, 61272005, 61272244, 61303108, 61373094); Natural Science Foundation of Jiangsu Province (BK2012616); Natural Science Research Project of Jiangsu Higher Education Institutions (13KJB520020); Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University (93K172014K04).
Keywords: reinforcement learning; actor-critic; Gaussian process; Bayesian inference; continuous space

