Discrete-time dynamic graphical games: model-free reinforcement learning solution (Cited by: 6)
Authors: Mohammed I. Abouheaf, Frank L. Lewis, Magdi S. Mahmoud, Dariusz G. Mikulski. Control Theory and Technology (EI, CSCD), 2015, Issue 1, pp. 55-69 (15 pages)
Abstract: This paper introduces a model-free reinforcement learning technique that is used to solve a class of dynamic games known as dynamic graphical games. The graphical game results from multi-agent dynamical systems, where pinning control is used to make all the agents synchronize to the state of a command generator or a leader agent. Novel coupled Bellman equations and Hamiltonian functions are developed for the dynamic graphical games. The Hamiltonian mechanics are used to derive the necessary conditions for optimality. The solution for the dynamic graphical game is given in terms of the solution to a set of coupled Hamilton-Jacobi-Bellman equations developed herein; the Nash equilibrium solution for the graphical game follows from the solution to these underlying coupled equations. An online model-free policy iteration algorithm is developed to learn the Nash solution for the dynamic graphical game. This algorithm does not require any knowledge of the agents' dynamics. A proof of convergence for this multi-agent learning algorithm is given under mild assumptions about the inter-connectivity properties of the graph. A gradient descent technique with critic network structures is used to implement the policy iteration algorithm and solve the graphical game online in real time.
Keywords: dynamic graphical games, Nash equilibrium, discrete mechanics, optimal control, model-free reinforcement learning, policy iteration
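The model-free policy iteration named in the abstract alternates policy evaluation (fitting a value/Q-function to a Bellman equation using only measured data) with policy improvement. The sketch below illustrates that generic structure on a hypothetical single-agent scalar linear-quadratic problem; the system (a, b), costs (q, r), and all function names are illustrative assumptions, not the paper's multi-agent graphical-game algorithm or its critic-network implementation.

```python
import numpy as np

# Hypothetical scalar plant x_{k+1} = a*x + b*u with cost q*x^2 + r*u^2.
# The dynamics (a, b) are used only to simulate data; the learner never
# reads them directly, which is what makes the scheme "model-free".
a, b = 0.9, 0.5
q, r = 1.0, 0.1
rng = np.random.default_rng(0)

def collect(K, n=200):
    """Roll out the policy u = -K*x with probing noise, recording
    (x, u, cost, x_next) transitions for the critic to learn from."""
    x, data = 1.0, []
    for _ in range(n):
        u = -K * x + 0.1 * rng.standard_normal()   # exploration noise
        xn = a * x + b * u
        data.append((x, u, q * x**2 + r * u**2, xn))
        x = xn if abs(xn) < 10 else rng.standard_normal()  # reset if diverging
    return data

def evaluate(K, data):
    """Policy evaluation: fit a quadratic Q-function
    Q(x,u) = hxx*x^2 + 2*hxu*x*u + huu*u^2 by least squares on the
    Bellman equation Q(x,u) = cost + Q(x', -K*x')."""
    Phi, y = [], []
    for x, u, c, xn in data:
        un = -K * xn
        Phi.append([x*x - xn*xn, 2*(x*u - xn*un), u*u - un*un])
        y.append(c)
    hxx, hxu, huu = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)[0]
    return hxx, hxu, huu

def improve(hxu, huu):
    """Policy improvement: minimizing Q over u gives u = -(hxu/huu)*x."""
    return hxu / huu

K = 0.0                       # initial stabilizing policy gain
for _ in range(10):
    hxx, hxu, huu = evaluate(K, collect(K))
    K = improve(hxu, huu)

print(f"learned gain K = {K:.3f}")
```

Because the policy improvement step needs only the fitted Q-function coefficients, no model of the plant enters the update; this mirrors, in the simplest possible setting, the role the critic structures play in the paper's online multi-agent algorithm.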