
True Online Natural Actor-Critic Algorithm for the Continuous Space Problem (Cited by: 5)

Abstract: Policy gradient methods have been extensively studied as an effective way to solve decision-making problems in continuous spaces. However, because the policy-gradient estimate suffers from high variance, policy-gradient methods are limited by low sample efficiency and slow convergence. To address this problem, a true online incremental natural actor-critic (TOINAC) algorithm is proposed within the actor-critic (AC) framework. TOINAC adopts the natural gradient, which is superior to the conventional gradient, and, building on the true online temporal-difference (TOTD) algorithm, introduces a novel forward view that improves the natural-gradient actor-critic algorithm. In the critic, the efficient TOTD algorithm is used to estimate the value function; in the actor, the new forward view is used to estimate the natural gradient, and eligibility traces then turn this estimate into an online one, improving both the accuracy of the natural-gradient estimate and the efficiency of the algorithm. TOINAC is combined with kernel methods and a normal (Gaussian) policy distribution to handle continuous-space problems. Finally, simulation experiments on the classical continuous-space benchmarks cart pole, Mountain Car, and Acrobot verify the effectiveness of the algorithm.
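The critic described in the abstract relies on the true online TD(λ) (TOTD) update with linear function approximation. Below is a minimal sketch of that per-step update followed by a toy usage demo; the function name totd_step, the step sizes, and the one-hot random-walk environment are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def totd_step(w, e, v_old, phi, phi_next, reward, alpha, gamma, lam):
    """One true online TD(lambda) update with linear features.

    Returns the updated weights, the dutch eligibility trace, and the
    value of the next state (to be passed back in as v_old next step).
    """
    v = w @ phi                      # value estimate of current state
    v_next = w @ phi_next            # value estimate of next state
    delta = reward + gamma * v_next - v
    # dutch-style eligibility trace used by true online TD(lambda)
    e = gamma * lam * e + phi - alpha * gamma * lam * (e @ phi) * phi
    # true online weight update (includes the v - v_old correction term)
    w = w + alpha * (delta + v - v_old) * e - alpha * (v - v_old) * phi
    return w, e, v_next

# Toy usage: 5-state random walk with one-hot features (illustrative only).
rng = np.random.default_rng(0)
n_states = 5
w = np.zeros(n_states)
for episode in range(200):
    s = n_states // 2
    e = np.zeros(n_states)
    v_old = 0.0
    while 0 <= s < n_states:
        s_next = s + rng.choice([-1, 1])
        reward = 1.0 if s_next == n_states else 0.0
        phi = np.eye(n_states)[s]
        phi_next = (np.eye(n_states)[s_next]
                    if 0 <= s_next < n_states else np.zeros(n_states))
        w, e, v_old = totd_step(w, e, v_old, phi, phi_next, reward,
                                alpha=0.1, gamma=1.0, lam=0.9)
        s = s_next
print(np.round(w, 2))  # approximate state values of the random walk
```

In the full algorithm the critic's value estimate feeds the TD error used by the actor, where an analogous trace accumulates the score function of the Gaussian policy to form the online natural-gradient estimate.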
Source: Journal of Software (《软件学报》), indexed in EI / CSCD / Peking University Core, 2018, No. 2, pp. 267-282 (16 pages)
Funding: National Natural Science Foundation of China (61303108, 61373094, 61472262); Natural Science Research Project of Jiangsu Higher Education Institutions (17KJA520004); Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University (93K172014K04); Suzhou Applied Basic Research Program, Industrial Part (SYG201422); Provincial Key Laboratory for Universities, Soochow University (KJS1524); China Scholarship Council (201606920013)
Keywords: policy gradient; natural gradient; actor-critic; true online TD; kernel method
