Abstract
Aiming at the path planning problem of a mobile agent in an unknown environment, a Q-learning path planning method based on exploration-exploitation tradeoff optimization is proposed. To address the exploration-exploitation tradeoff inherent in reinforcement learning, two improvements are introduced: the εDBE (ε-decreasing based episodes) method, in which the greedy exploration coefficient ε decays smoothly with the number of learning episodes, and the AεBS (adaptive ε based state) method, which judges how familiar or unfamiliar the current state is from the state-action values in the Q-table and chooses between exploration and exploitation accordingly. These improvements specify the conditions that trigger exploration and those that trigger exploitation, avoid over-exploration and over-exploitation, and speed up finding the optimal path. Simulation experiments in unknown environments compare the proposed method with classical Q-learning path planning. The results show that the agent using the proposed method learns and adapts quickly in environments with unknown obstacles, the number of steps on the optimal path converges faster, and path planning is realized more efficiently, which verifies the feasibility and efficiency of the method.
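The two ε-greedy refinements named in the abstract, εDBE and AεBS, can be sketched in a few lines of Python. The sketch below is only an illustration of the idea as described here, not the authors' published implementation: the exponential decay schedule, the all-zero-row test for state unfamiliarity, and all parameter values are assumptions.

import numpy as np

def epsilon_dbe(episode, eps_max=0.9, eps_min=0.05, decay=0.01):
    # εDBE: the greedy coefficient ε decays smoothly with the episode index.
    # The exponential form and the parameter values are illustrative assumptions.
    return eps_min + (eps_max - eps_min) * np.exp(-decay * episode)

def select_action(Q, state, episode, rng, tol=1e-6):
    # AεBS: if the Q-table row for this state is still (near) all zero, treat the
    # state as unfamiliar and force exploration; otherwise act ε-greedily with εDBE.
    row = Q[state]
    n_actions = row.shape[0]
    if np.all(np.abs(row) < tol):             # unfamiliar state -> explore
        return int(rng.integers(n_actions))
    if rng.random() < epsilon_dbe(episode):   # familiar state -> ε-greedy
        return int(rng.integers(n_actions))
    return int(np.argmax(row))                # exploit the learned values

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Standard tabular Q-learning update used around the action selection above.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

# Example setup for a grid world flattened to n_states cells with 4 moves:
#   Q = np.zeros((n_states, 4)); rng = np.random.default_rng(0)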
Authors
PENG Yun-jian (彭云建), LIANG Jin (梁进)
School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
Source
Computer Technology and Development (《计算机技术与发展》), 2022, No. 4, pp. 1-7 (7 pages)
Funding
National Natural Science Foundation of China (61573154).
Keywords
reinforcement learning
Q-learning
exploration/exploitation
path planning
unknown environment