Abstract

Policy gradient methods have been extensively studied as an effective approach to decision-making problems in continuous spaces. However, the high variance of the gradient estimate leaves these methods with low sample efficiency and slow convergence. To address this problem, a true online incremental natural actor-critic (TOINAC) algorithm is proposed within the actor-critic (AC) framework. TOINAC adopts the natural gradient, which is superior to the conventional gradient, and builds on the true online temporal-difference (TOTD) algorithm with a novel forward view to improve the natural actor-critic algorithm. In the critic, the efficiency of TOTD is exploited to estimate the value function; in the actor, the new forward view is introduced to estimate the natural gradient, and eligibility traces then turn this estimate into an online one, improving both the accuracy of the natural-gradient estimate and the efficiency of the algorithm. Combined with the kernel method and a normal (Gaussian) policy distribution, TOINAC handles continuous-space problems. Simulation experiments on cart pole, Mountain Car, and Acrobot, classical benchmarks for continuous-space problems, verify the effectiveness of the algorithm.
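The abstract's structure, a true online TD(λ) critic paired with an eligibility-trace actor for a Gaussian policy, can be illustrated with a minimal sketch. The toy task, feature map, and all hyperparameters below are illustrative assumptions; the actor here uses the vanilla policy gradient, whereas TOINAC's actor estimates the natural gradient with a forward view and kernel-based features, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 4
alpha_v, alpha_pi = 0.1, 0.01   # critic / actor step sizes (illustrative)
gamma, lam = 0.95, 0.8          # discount factor and trace-decay parameter
sigma = 0.5                     # fixed exploration std of the Gaussian policy

w = np.zeros(n_features)        # critic weights: V(s) ~ w . phi(s)
theta = np.zeros(n_features)    # actor weights: mean action mu(s) = theta . phi(s)

def phi(s):
    # simple fixed feature map for a scalar state (illustrative stand-in
    # for the kernel-based features used in the paper)
    return np.array([1.0, s, s**2, np.sin(s)])

# toy continuing task: reward peaks when the action matches the state
s = rng.uniform(-1, 1)
f = phi(s)
e_v = np.zeros(n_features)      # dutch trace for the critic
e_pi = np.zeros(n_features)     # accumulating trace for the actor
v_old = 0.0

for t in range(500):
    mu = theta @ f
    a = mu + sigma * rng.standard_normal()
    r = -(a - s) ** 2
    s_next = rng.uniform(-1, 1)
    f_next = phi(s_next)

    # --- critic: true online TD(lambda) update (van Seijen & Sutton) ---
    v, v_next = w @ f, w @ f_next
    delta = r + gamma * v_next - v
    e_v = gamma * lam * e_v + f - alpha_v * gamma * lam * (e_v @ f) * f
    w += alpha_v * (delta + v - v_old) * e_v - alpha_v * (v - v_old) * f
    v_old = v_next

    # --- actor: eligibility-trace policy-gradient step (vanilla gradient;
    # TOINAC replaces this with an online natural-gradient estimate) ---
    score = (a - mu) / sigma**2 * f   # grad log pi for a Gaussian policy
    e_pi = gamma * lam * e_pi + score
    theta += alpha_pi * delta * e_pi

    s, f = s_next, f_next

print(np.all(np.isfinite(w)), np.all(np.isfinite(theta)))
```

The dutch-trace update in the critic is what distinguishes true online TD(λ) from conventional TD(λ): it makes the online algorithm match the forward view exactly at every step, rather than only approximately at episode boundaries.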
Source
Journal of Software (《软件学报》), 2018, No. 2, pp. 267-282 (16 pages)
Indexed in: EI, CSCD, Peking University Core Journals (北大核心)
Funding
National Natural Science Foundation of China (61303108, 61373094, 61472262)
Natural Science Research Project of Jiangsu Higher Education Institutions (17KJA520004)
Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education (Jilin University) (93K172014K04)
Suzhou Applied Basic Research Program, Industrial Part (SYG201422)
Provincial Key Laboratory Project (Soochow University) (KJS1524)
China Scholarship Council (201606920013)
Keywords
policy gradient
natural gradient
actor-critic
true online TD
kernel method