Abstract
To address the slow convergence of reinforcement learning algorithms and the need for improved reward-function design, a new reinforcement learning algorithm is proposed that uses "action values" as the basis on which an agent selects actions. Because action values are more flexible than traditional state values, it is easier to design better-optimized reward functions around them and thereby improve learning performance. Building on action values, an exponential function and a logarithmic function are used to determine the reward value and the discount coefficient dynamically, which accelerates the agent's selection of optimal actions. A computer simulation of a maze problem shows that the new algorithm significantly reduces the number of actions the agent executes in trials before convergence, thereby improving the convergence speed.
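The abstract gives no formulas, so the following Python sketch only illustrates the general scheme it describes: tabular Q-learning over per-(state, action) scores ("action values"), with the reward shaped by an exponential function and the discount coefficient scheduled by a logarithmic function. The maze layout, the reward and discount definitions, and every constant below are illustrative assumptions, not the authors' actual formulas.

```python
import math
import random

# Hypothetical 5x5 maze (0 = free, 1 = wall); start top-left, goal bottom-right.
MAZE = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
N = len(MAZE)
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def move(state, a):
    """Apply action a; moves into walls or off the grid leave the state unchanged."""
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    return (r, c) if 0 <= r < N and 0 <= c < N and MAZE[r][c] == 0 else state

# Q holds the per-(state, action) scores; visits counts how often each pair was tried.
Q = {((r, c), a): 0.0 for r in range(N) for c in range(N) for a in range(4)}
visits = {k: 0 for k in Q}

def reward(nxt):
    """Assumed shaping: the reward decays exponentially with Manhattan
    distance to the goal (the paper's exact formula is not in the abstract)."""
    d = abs(nxt[0] - GOAL[0]) + abs(nxt[1] - GOAL[1])
    return 1.0 if nxt == GOAL else 0.1 * math.exp(-d)

def discount(n):
    """Assumed schedule: the discount coefficient grows logarithmically with
    the visit count n, approaching (but never reaching) an upper cap."""
    return min(0.99, 0.5 + 0.1 * math.log(1 + n))

ALPHA, EPS = 0.5, 0.1  # learning rate, epsilon-greedy exploration rate
for episode in range(300):
    s = START
    for _ in range(200):  # step cap per episode
        a = (random.randrange(4) if random.random() < EPS
             else max(range(4), key=lambda b: Q[(s, b)]))
        nxt = move(s, a)
        visits[(s, a)] += 1
        g = discount(visits[(s, a)])
        best_next = max(Q[(nxt, b)] for b in range(4))
        Q[(s, a)] += ALPHA * (reward(nxt) + g * best_next - Q[(s, a)])
        s = nxt
        if s == GOAL:
            break
```

Scheduling the discount to grow logarithmically with the visit count makes early, unreliable estimates of future return count for less, which is one plausible reading of how a dynamically determined discount coefficient could speed convergence.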
Source
《同济大学学报(自然科学版)》 (Journal of Tongji University: Natural Science)
2007, No. 4, pp. 531-536 (6 pages)
Indexed in: EI, CAS, CSCD, Peking University Core Journals (北大核心)
Funding
National Natural Science Foundation of China (Grant No. 60643001)
Program for New Century Excellent Talents in University, Ministry of Education; Shanghai Shuguang Program (Grant No. 04SG22)
Keywords
reinforcement learning
action values
Q algorithm
reward functions