Abstract: A dynamic cooperation model for multi-agent systems is established by combining reinforcement learning with distributed artificial intelligence (DAI). In this model the notion of individual optimization loses its meaning, because each agent's reward depends both on its own choices and on the choices of the other agents. Drawing on the ideas of DAI, the intelligent unit of each robot, and the changing tasks and environment, each agent can make decisions independently and complete a variety of complex tasks through communication and mutual cooperation. The method outperforms other reinforcement learning methods commonly used in multi-agent systems: it speeds up the convergence of reinforcement learning, reduces memory requirements, and strengthens each agent's computing and logical reasoning capability. The result of a simulated robot soccer match shows that the proposed cooperative strategy is effective.
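The abstract does not give the learning rule in detail; the following is a minimal sketch, assuming a tabular Q-learning setup in which several independently deciding agents receive a reward that depends on the joint action, so that no single agent can optimize its return in isolation. All names (QAgent, joint_reward, the toy actions and state) are hypothetical illustrations, not the paper's actual formulation.

```python
import random
from collections import defaultdict

# Sketch: independent Q-learning agents whose reward depends on the joint action.

class QAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration rate
        self.q = defaultdict(float)   # Q(state, action) table

    def act(self, state):
        # epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def joint_reward(joint_action):
    # Hypothetical cooperative payoff: agents are rewarded only when they
    # coordinate, so individual optimization alone is not sufficient.
    return 1.0 if len(set(joint_action)) == 1 else 0.0


def train(num_agents=2, episodes=500):
    actions = ["pass", "shoot", "defend"]
    agents = [QAgent(actions) for _ in range(num_agents)]
    state = "kickoff"                  # single-state toy environment
    for _ in range(episodes):
        joint = [ag.act(state) for ag in agents]
        r = joint_reward(joint)        # reward depends on every agent's choice
        for ag, a in zip(agents, joint):
            ag.update(state, a, r, state)
    return agents


if __name__ == "__main__":
    for i, ag in enumerate(train()):
        best = max(ag.actions, key=lambda a: ag.q[("kickoff", a)])
        print(f"agent {i} prefers: {best}")
```

In this toy setting each agent keeps its own value table and acts independently, and cooperation emerges only through the shared joint reward; the paper's approach additionally relies on DAI-style communication between agents, which is not modeled here.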