Abstract
To address the problem of cooperation strategies among multiple agents in RoboCup (Robot World Cup), this paper adopts a regionally cooperative multi-agent Q-learning method. By subdividing the field into regions and refining the agents' reward values, the method strengthens cooperation among the agents and thereby improves the team's offensive and defensive abilities. In addition, restricting the scope in which the algorithm is applied reduces the learning time and preserves the real-time performance of the game. Experiments on the RoboCup Simulation 2D platform show that the method outperforms the previous approach and fully meets the original design goals.
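As a rough illustration of the idea summarized in the abstract, the sketch below shows tabular Q-learning in which the pitch is subdivided into a coarse grid of regions, each agent's reward is shaped with a shared team bonus, and updates are confined to a chosen subset of regions so that little time is spent learning in irrelevant parts of the field. All names here (CooperativeQLearner, region_of, the 6x4 region grid, the action set) are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch, assuming a standard 105 m x 68 m 2D-simulation pitch and a
# hand-picked 6x4 subdivision of the field; not the authors' implementation.
import random
from collections import defaultdict

FIELD_LENGTH, FIELD_WIDTH = 105.0, 68.0   # pitch dimensions in meters
REGION_COLS, REGION_ROWS = 6, 4           # assumed subdivision of the pitch

def region_of(x, y):
    """Map a field position (origin at center) to a discrete region index."""
    col = min(int((x + FIELD_LENGTH / 2) / (FIELD_LENGTH / REGION_COLS)), REGION_COLS - 1)
    row = min(int((y + FIELD_WIDTH / 2) / (FIELD_WIDTH / REGION_ROWS)), REGION_ROWS - 1)
    return row * REGION_COLS + col

ACTIONS = ["dribble", "pass", "shoot", "hold"]

class CooperativeQLearner:
    def __init__(self, active_regions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)           # Q[(region, action)] table
        self.active_regions = active_regions  # learning is confined to these
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, region):
        # Outside the restricted scope (or when exploring) fall back to a default policy.
        if region not in self.active_regions or random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(region, a)])

    def update(self, region, action, reward, team_bonus, next_region):
        if region not in self.active_regions:
            return                            # skip updates to keep learning time short
        r = reward + team_bonus               # shaped, cooperative reward
        best_next = max(self.q[(next_region, a)] for a in ACTIONS)
        key = (region, action)
        self.q[key] += self.alpha * (r + self.gamma * best_next - self.q[key])

# Usage example: learn only in the attacking third (rightmost two region columns).
attacking = {r * REGION_COLS + c for r in range(REGION_ROWS)
             for c in range(REGION_COLS - 2, REGION_COLS)}
agent = CooperativeQLearner(active_regions=attacking)
s = region_of(30.0, 5.0)
a = agent.choose(s)
agent.update(s, a, reward=0.0, team_bonus=0.5, next_region=region_of(40.0, 3.0))
```

Confining updates to active_regions is one simple way to realize the restricted usage range described in the abstract: states outside those regions fall back to a default policy, so the Q-table stays small and learning converges within the time budget of a real-time match.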
Source
Computer Engineering and Applications (《计算机工程与应用》)
CSCD
2014, No. 23, pp. 127-130 (4 pages)
Funding
Natural Science Foundation of Anhui Province (No. 090412054)
Provincial Natural Science Foundation of Anhui Higher Education Institutions (No. KJ2011Z020)