Abstract
Non-zero-sum Markov games combined with Q-learning-based reinforcement learning form a feasible framework for studying cooperation mechanisms in multi-agent systems (MAS). In practice, however, each agent under this framework learns independently, without regard to the actions of the other agents, so the MAS lacks a cooperation mechanism. Moreover, Q-learning requires that agents have complete observations when interacting with the environment, which is overly idealized. To address these two shortcomings, this paper proposes coordinated learning under joint actions and incomplete information. Theoretical analysis and simulation experiments show that the coordinated learning algorithm converges.
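The joint-action idea in the abstract can be illustrated with a minimal sketch: instead of each agent keeping its own Q-table over individual actions, a single Q-table is indexed by the *joint* action of both agents, so the learned values account for the other agent's behavior. This is a toy coordination game of my own construction, not the paper's algorithm or environment; the update rule is standard Q-learning applied to joint actions.

```python
import random
from collections import defaultdict

# Hypothetical toy problem: two agents earn reward 1 only when both pick
# the action that matches the current state, so coordination is required.
ACTIONS = [0, 1]
JOINT_ACTIONS = [(a, b) for a in ACTIONS for b in ACTIONS]

def step(state, joint_action):
    """Deterministic two-state cycle; reward only for the coordinated pair."""
    reward = 1.0 if joint_action == (state, state) else 0.0
    return 1 - state, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[(state, joint_action)], defaults to 0.0
    state = 0
    for _ in range(episodes):
        # epsilon-greedy exploration over JOINT actions
        if rng.random() < eps:
            ja = rng.choice(JOINT_ACTIONS)
        else:
            ja = max(JOINT_ACTIONS, key=lambda j: Q[(state, j)])
        nxt, r = step(state, ja)
        # Q-learning update with the max taken over joint actions
        best_next = max(Q[(nxt, j)] for j in JOINT_ACTIONS)
        Q[(state, ja)] += alpha * (r + gamma * best_next - Q[(state, ja)])
        state = nxt
    return Q

Q = train()
# Greedy joint policy after training: should pick (s, s) in state s.
greedy = {s: max(JOINT_ACTIONS, key=lambda j: Q[(s, j)]) for s in (0, 1)}
```

On this trivial problem the greedy joint policy converges to the coordinated pair in each state; the paper's contribution concerns the harder setting where observations are also incomplete.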
Source
《上海交通大学学报》
EI
CAS
CSCD
Peking University Core Journal (北大核心)
2001, No. 2, pp. 288-292 (5 pages)
Journal of Shanghai Jiaotong University
Funding
Supported by the National Natural Science Foundation of China (Grant No. 3930070)