

Hierarchical regional cooperative Q-learning
Abstract: Many multi-agent Q-learning problems become intractable because the number of joint actions grows exponentially with the number of agents. Starting from hierarchical reinforcement learning, and building on a study of cooperative multi-agent systems (MAS) in reinforcement learning and of the system's working logic, this paper proposes regional cooperative reinforcement learning based on a hierarchical learning process. By examining the knowledge acquired through independent-Agent reinforcement learning, the method improves the learning efficiency of the multi-Agent system and further enhances the performance of regional cooperative reinforcement learning, combining task actions based on joint actions with a potential-field model so as to overcome the curse of dimensionality in the state space of reinforcement learning. The algorithm is applied to a subtask of robot soccer, and its effectiveness is validated by experiments on a simulated 2-vs-1 defense.
Authors: 刘亮 (Liu Liang), 李龙澍 (Li Longshu)
Source: Computer Engineering and Applications (《计算机工程与应用》), CSCD, Peking University Core Journal, 2009, No. 22, pp. 7-9, 26 (4 pages)
Funding: National Natural Science Foundation of China (No. 60273043); Anhui Provincial Foundation for Top Academic Talents in Universities
Keywords: Multi-Agent Systems (MAS), regional cooperation, Q-learning, process stratification
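The exponential blow-up the abstract refers to, and the regional remedy, can be illustrated with a minimal tabular sketch. This is not the paper's algorithm; the agent counts, region size, and all names below are illustrative assumptions. Each agent keeps a Q-table over the joint actions of its local region only, instead of over the full joint-action space.

```python
from collections import defaultdict

N_AGENTS = 3   # agents in the system (illustrative)
N_ACTIONS = 5  # actions available to each agent

# Full joint-action space grows exponentially with the agent count:
joint_space = N_ACTIONS ** N_AGENTS        # 5^3 = 125 entries per state

# Regional cooperation: each agent coordinates only with the agents in
# its local region (here, itself plus one neighbour), so its table size
# depends on the region size, not the total number of agents.
REGION_SIZE = 2
regional_space = N_ACTIONS ** REGION_SIZE  # 5^2 = 25 entries per state

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor

# One Q-table per agent, indexed by (state, regional joint action).
q = [defaultdict(float) for _ in range(N_AGENTS)]

def q_update(agent, state, regional_action, reward, next_state):
    """One-step Q-learning update on an agent's regional table."""
    best_next = max(
        q[agent][(next_state, a)] for a in range(regional_space)
    )
    key = (state, regional_action)
    q[agent][key] += ALPHA * (reward + GAMMA * best_next - q[agent][key])

q_update(0, state=0, regional_action=3, reward=1.0, next_state=1)
print(joint_space, regional_space)  # 125 25
```

With three agents the full joint table needs 125 entries per state, while each regional table needs only 25; the gap widens rapidly as agents are added, which is the dimensionality problem the paper targets.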

