Abstract
To address the problem that Q-learning with a discounted reward as its evaluation criterion cannot reflect an action's influence on subsequent states, an AR-Q-Learning algorithm combining average reward with Q-learning is proposed, and its convergence is proved. To address the "curse of dimensionality," in which the number of learning parameters grows exponentially with the dimensionality of the state variables, a minimum-state-variable method is proposed. AR-Q-Learning and the minimum-state-variable method are applied to reinforcement learning in Blocks World. Experimental results show that the method better captures the aftereffect of actions, converges faster than standard Q-learning, and at the same time alleviates the curse of dimensionality in Blocks World to a certain extent.
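The abstract does not give the update rule itself, but the core idea of replacing the discount factor with an average-reward term can be illustrated with a classic average-reward Q-update in the style of R-learning. The following is a minimal sketch on a hypothetical two-state MDP (the states, rewards, and all parameter values are illustrative assumptions, not taken from the paper):

```python
import random

# Hypothetical toy deterministic MDP for illustration:
# transitions[(state, action)] = (next_state, reward).
transitions = {
    ("A", 0): ("A", 0.0), ("A", 1): ("B", 1.0),
    ("B", 0): ("A", 2.0), ("B", 1): ("B", 0.0),
}
ACTIONS = [0, 1]

def average_reward_q_learning(steps=20000, alpha=0.1, beta=0.01, eps=0.1, seed=0):
    """R-learning-style average-reward Q-learning on the toy MDP above."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in ("A", "B") for a in ACTIONS}
    rho = 0.0  # running estimate of the average reward per step
    s = "A"
    for _ in range(steps):
        # epsilon-greedy action selection
        explore = rng.random() < eps
        a = rng.choice(ACTIONS) if explore else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r = transitions[(s, a)]
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        # Average-reward TD error: rho takes the role of the discount factor,
        # so the update measures reward relative to the long-run average.
        Q[(s, a)] += alpha * (r - rho + best_next - Q[(s, a)])
        # Update the average-reward estimate only on greedy steps.
        if not explore:
            rho += beta * (r - rho + best_next - max(Q[(s, x)] for x in ACTIONS))
        s = s2
    return Q, rho
```

On this toy MDP the optimal policy cycles A→B→A, collecting rewards 1 and 2 alternately, so the learned average-reward estimate `rho` should settle near 1.5 and the greedy policy should prefer action 1 in A and action 0 in B.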
Source
《通信学报》 (Journal on Communications)
Indexed in: EI, CSCD, PKU Core Journals (北大核心)
2011, No. 1, pp. 66-71 (6 pages)
Funding
National Natural Science Foundation of China (60873116, 61070223, 61070122)
Natural Science Foundation of Jiangsu Province (BK2008161, BK2009116)
Natural Science Research Program of Jiangsu Higher Education Institutions (09KJA520002)
Jiangsu Engineering Research Center for Modern Enterprise Information Application Supporting Software (SX200804)