Journal Articles
2 articles found
1. A Novel Distributed Optimal Adaptive Control Algorithm for Nonlinear Multi-Agent Differential Graphical Games (Cited by: 5)
Authors: Majid Mazouchi, Mohammad Bagher Naghibi-Sistani, Seyed Kamal Hosseini Sani
IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2018, Issue 1, pp. 331-341 (11 pages)
In this paper, an online optimal distributed learning algorithm is proposed to solve the leader-synchronization problem of nonlinear multi-agent differential graphical games. Each player approximates its optimal control policy using single-network approximate dynamic programming (ADP), where only one critic neural network (NN) is employed instead of the typical actor-critic structure composed of two NNs. The proposed distributed weight-tuning laws for the critic NNs guarantee stability in the sense of uniform ultimate boundedness (UUB) and convergence of the control policies to the Nash equilibrium. By introducing novel distributed local operators into the weight-tuning laws, no initial stabilizing control policies are required. Furthermore, the stability of the overall closed-loop system is guaranteed by Lyapunov analysis. Finally, simulation results show the effectiveness of the proposed algorithm.
Keywords: approximate dynamic programming (ADP), distributed control, neural networks (NNs), nonlinear differential graphical games, optimal control
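The single-critic structure described in this abstract can be illustrated with a short numerical sketch. The Python snippet below is a minimal, illustrative approximation only, not the paper's algorithm: it assumes scalar single-integrator agents, a hand-picked graph, a one-term quadratic critic basis V_i(δ_i) = W_i·δ_i², and a normalized gradient-descent update on the local Bellman residual. All constants and the tuning law itself are assumptions made for the sketch.

```python
import numpy as np

# Sketch: single-critic ADP for leader synchronization on a graph.
# Agents are scalar single integrators x_i' = u_i; the leader is static at x0.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])          # adjacency matrix (assumed)
g = np.array([1., 0., 0.])            # pinning gains to the leader (assumed)
d = A.sum(axis=1)                     # in-degrees
Q, R, alpha = 1.0, 1.0, 0.5           # local cost weights, critic learning rate
dt, steps = 0.01, 4000

x = np.array([1.0, -0.5, 2.0])        # agent states
x0 = 0.0                              # static leader state
W = 0.1 * np.ones(3)                  # one critic weight per agent: V_i = W_i * delta_i**2

for _ in range(steps):
    # local neighborhood tracking errors: delta_i = sum_j a_ij (x_i - x_j) + g_i (x_i - x0)
    delta = d * x - A @ x + g * (x - x0)
    # greedy policy from dV_i/ddelta_i = 2 W_i delta_i (input-affine error dynamics)
    u = -(d + g) * W * delta / R
    # error dynamics for integrator agents
    ddelta = (d + g) * u - A @ u
    # Bellman residual and normalized gradient step (u treated as fixed here)
    e = Q * delta**2 + R * u**2 + 2.0 * W * delta * ddelta
    grad = 2.0 * delta * ddelta
    W -= alpha * dt * e * grad / (1.0 + grad**2)
    x += dt * u                       # integrate agent dynamics

print("states:", x, "critic weights:", W)
```

Running the sketch, the states contract toward the leader value while the critic weights settle, mirroring the UUB-style behavior the abstract claims; the normalized step is a common trick to keep the critic update bounded and stands in for the paper's distributed local operators.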
2. Discrete-time dynamic graphical games: model-free reinforcement learning solution (Cited by: 6)
Authors: Mohammed I. ABOUHEAF, Frank L. LEWIS, Magdi S. MAHMOUD, Dariusz G. MIKULSKI
Control Theory and Technology (EI, CSCD), 2015, Issue 1, pp. 55-69 (15 pages)
This paper introduces a model-free reinforcement learning technique that is used to solve a class of dynamic games known as dynamic graphical games. The graphical game results from multi-agent dynamical systems, where pinning control is used to make all the agents synchronize to the state of a command generator or a leader agent. Novel coupled Bellman equations and Hamiltonian functions are developed for the dynamic graphical games. The Hamiltonian mechanics are used to derive the necessary conditions for optimality. The solution for the dynamic graphical game is given in terms of the solution to a set of coupled Hamilton-Jacobi-Bellman equations developed herein. The Nash equilibrium solution for the graphical game is given in terms of the solution to the underlying coupled Hamilton-Jacobi-Bellman equations. An online model-free policy iteration algorithm is developed to learn the Nash solution for the dynamic graphical game. This algorithm does not require any knowledge of the agents' dynamics. A proof of convergence for this multi-agent learning algorithm is given under mild assumptions about the inter-connectivity properties of the graph. A gradient descent technique with critic network structures is used to implement the policy iteration algorithm to solve the graphical game online in real time.
Keywords: dynamic graphical games, Nash equilibrium, discrete mechanics, optimal control, model-free reinforcement learning, policy iteration
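The model-free policy-iteration idea in this abstract can be demonstrated on the simplest possible case. The Python sketch below is an assumption-laden illustration, not the paper's coupled multi-agent algorithm: it runs Q-function policy iteration for a single scalar discrete-time agent, fitting a quadratic Q-function from measured transitions by least squares so that the plant model (a, b) is never used by the learner. All names and constants are illustrative.

```python
import numpy as np

# Sketch: model-free policy iteration with a quadratic Q-critic for one
# scalar agent x_{k+1} = a*x_k + b*u_k; (a, b) only generate the data.
a, b = 0.9, 1.0                 # "unknown" plant (stable, so K = 0 is admissible)
Qc, Rc = 1.0, 1.0               # stage-cost weights: r = Qc*x^2 + Rc*u^2
K = 0.0                         # initial policy gain, u = -K*x
rng = np.random.default_rng(0)

for it in range(10):
    # Policy evaluation: fit Q(x,u) = h1*x^2 + 2*h2*x*u + h3*u^2 by least
    # squares on the Bellman equation Q(x,u) - Q(x',u') = r(x,u).
    Phi, y = [], []
    x = 1.0
    for k in range(50):
        u = -K * x + 0.1 * rng.standard_normal()   # exploration noise
        xn = a * x + b * u                         # one measured transition
        un = -K * xn                               # next action under current policy
        Phi.append([x*x - xn*xn, 2.0*(x*u - xn*un), u*u - un*un])
        y.append(Qc * x*x + Rc * u*u)
        x = xn
    h1, h2, h3 = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)[0]
    # Policy improvement: argmin_u Q(x,u) gives u = -(h2/h3)*x.
    K = h2 / h3

# Verification only (uses the model): scalar discrete-time Riccati iteration.
P = 1.0
for _ in range(500):
    P = Qc + a*a*P - (a*b*P)**2 / (Rc + b*b*P)
print("learned gain:", K, " Riccati gain:", a*b*P / (Rc + b*b*P))
```

The learned gain matches the Riccati solution, which is the single-agent analogue of the convergence claim in the abstract; the paper's contribution is extending this evaluation/improvement loop to coupled Bellman equations over a graph, where each agent's critic also depends on its neighbors' policies.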