Abstract
The online learning time is an important metric for reinforcement learning (RL) algorithms. Conventional online RL algorithms such as Q-learning and state-action-reward-state-action (SARSA) cannot provide a quantitative upper bound on the online learning time from a theoretical standpoint. In this paper, we employ the probably approximately correct (PAC) principle to design data-driven online RL algorithms for continuous-time deterministic systems. These algorithms record online data efficiently while accounting for the exploration of the state space required by RL, and they output a near-optimal control policy within a finite online learning time. We propose two implementations, based respectively on state discretization and on the kd-tree (k-dimensional tree) technique, to store data and compute online policies. Finally, both algorithms are applied to the motion control of a two-link manipulator, and their performance is observed and compared.
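The abstract mentions storing online data in a kd-tree and using it to compute online policies. Below is a minimal illustrative sketch in Python (assuming NumPy and SciPy are available) of how observed transitions might be indexed by a kd-tree over visited states and queried by nearest neighbor; the class name KDTreeDataStore and its interface are hypothetical and are not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

class KDTreeDataStore:
    """Hypothetical store of online transition data, indexed by a kd-tree
    over visited states (an illustrative sketch, not the paper's algorithm)."""

    def __init__(self):
        self.states = []      # visited states x_k
        self.records = []     # tuples (action u_k, reward r_k, next state x_{k+1})
        self.tree = None

    def add(self, state, action, reward, next_state):
        """Record one online transition and refresh the kd-tree index."""
        self.states.append(np.asarray(state, dtype=float))
        self.records.append((np.asarray(action, dtype=float), float(reward),
                             np.asarray(next_state, dtype=float)))
        # Rebuild the kd-tree over all visited states; an incremental
        # structure could be substituted for efficiency.
        self.tree = cKDTree(np.vstack(self.states))

    def nearest(self, state, k=1):
        """Return the k stored transitions whose states are closest to `state`."""
        _, idx = self.tree.query(np.asarray(state, dtype=float), k=k)
        idx = np.atleast_1d(idx)
        return [(self.states[i],) + self.records[i] for i in idx]
```

A state-discretization variant would replace the kd-tree with a fixed grid over the state space, mapping each visited state to its cell index before storing and retrieving data.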
Authors
ZHU Yuan-heng (朱圆恒), ZHAO Dong-bin (赵冬斌)
State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Source
Control Theory & Applications (《控制理论与应用》), 2016, No. 12, pp. 1603-1613 (11 pages). Indexed in EI, CAS, CSCD, and the Peking University Core Journal list.
Funding
Supported by the National Natural Science Foundation of China (61273136, 61573353, 61533017, 61603382) and the Excellent Talent Fund of the State Key Laboratory of Management and Control for Complex Systems.
Keywords
reinforcement learning
probably approximately correct
kd-tree
two-link manipulator