Journal Articles
39 articles found
1. Recent Progress in Reinforcement Learning and Adaptive Dynamic Programming for Advanced Control Applications (Cited: 2)
Authors: Ding Wang, Ning Gao, Derong Liu, Jinna Li, Frank L. Lewis. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 1, pp. 18-36.
Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are presented, where the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, covering event-based design, robust stabilization, and game design. Moreover, extensions of ADP for addressing control problems under complex environments have attracted enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly advance the ADP formulation. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey of ADP and RL for advanced control applications demonstrates their remarkable potential in the artificial intelligence era, as well as their vital role in promoting environmental protection and industrial intelligence.
Keywords: adaptive dynamic programming (ADP); advanced control; complex environment; data-driven control; event-triggered design; intelligent control; neural networks; nonlinear systems; optimal control; reinforcement learning (RL)
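For orientation, the adaptive-critic formulation that most of the surveyed methods build on can be stated as follows (generic notation, not taken from the paper): for a discrete-time system $x_{k+1} = F(x_k, u_k)$ with stage cost $U(x_k, u_k)$, the optimal cost satisfies the Bellman optimality equation

$$
J^{*}(x_k) \;=\; \min_{u_k}\bigl\{\,U(x_k,u_k) + J^{*}(x_{k+1})\,\bigr\},
$$

and ADP/adaptive-critic schemes approximate $J^{*}$ (and the minimizing policy) with neural networks trained on the residual of this equation, either offline from data or online along the closed-loop trajectory.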
2. Adaptive Optimal Discrete-Time Output-Feedback Using an Internal Model Principle and Adaptive Dynamic Programming
Authors: Zhongyang Wang, Youqing Wang, Zdzisław Kowalczuk. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 1, pp. 131-140.
To address the output feedback problem for linear discrete-time systems, this work proposes a new adaptive dynamic programming (ADP) technique based on the internal model principle (IMP). The proposed method, termed IMP-ADP, does not require complete state feedback, only measurements of the input and output data. More specifically, based on the IMP, the output control problem is first converted into a stabilization problem. An observer is then designed to reconstruct the full state of the system from the measured inputs and outputs. Moreover, the technique includes both a policy iteration algorithm and a value iteration algorithm to determine the optimal feedback gain without using a dynamic system model; notably, with this approach one does not need to solve the regulator equation. Finally, the control method is tested on a grid-connected LCL inverter system to demonstrate that it provides the desired performance in terms of both tracking and disturbance rejection.
Keywords: adaptive dynamic programming (ADP); internal model principle (IMP); output feedback problem; policy iteration (PI); value iteration (VI)
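The PI/VI iterations mentioned in the abstract are data-driven and model-free; as a point of reference, a minimal model-based analogue (Hewer-style policy iteration for discrete-time LQR, with hypothetical matrices A, B, Q, R and initial gain K0 supplied by the user) looks like this:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def policy_iteration_lqr(A, B, Q, R, K0, iters=30):
    """Model-based policy iteration for discrete-time LQR.
    K0 must be admissible, i.e. A - B @ K0 is Schur stable."""
    K = K0
    for _ in range(iters):
        Ac = A - B @ K
        # Policy evaluation: solve P = Ac' P Ac + Q + K' R K
        P = solve_discrete_lyapunov(Ac.T, Q + K.T @ R @ K)
        # Policy improvement: K <- (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P
```

The IMP-ADP scheme in the paper replaces the model-based evaluation step with one driven purely by measured input/output data.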
3. Event-based performance guaranteed tracking control for constrained nonlinear system via adaptive dynamic programming method
Authors: Xingyi Zhang, Zijie Guo, Hongru Ren, Hongyi Li. Journal of Automation and Intelligence, 2023, No. 4, pp. 239-247.
An optimal tracking control problem for a class of nonlinear systems with guaranteed performance and asymmetric input constraints is discussed in this paper. The control policy is implemented by an adaptive dynamic programming (ADP) algorithm under two event-based triggering mechanisms. It is often challenging to design an optimal control law due to the system deviation caused by asymmetric input constraints. First, a prescribed performance control technique is employed to keep the tracking errors within predetermined boundaries. Subsequently, considering the asymmetric input constraints, a discounted non-quadratic cost function is introduced. Moreover, in order to reduce controller updates, an event-triggered control law is developed for the ADP algorithm. After that, to further reduce the complexity of controller design, the work is extended to a self-triggered case, relaxing the need for continuous signal monitoring by hardware devices. By employing the Lyapunov method, the uniform ultimate boundedness of all signals is proved. Finally, a simulation example on a mass-spring-damper system subject to asymmetric input constraints is provided to validate the effectiveness of the proposed control scheme.
Keywords: adaptive dynamic programming (ADP); asymmetric input constraints; prescribed performance control; event-triggered control; optimal tracking control
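To make the event-triggering idea concrete, here is a minimal simulation loop in which the control is recomputed only when a state-dependent gap condition is violated; the trigger rule, plant f, policy mu, and parameters are illustrative placeholders, not the paper's actual conditions:

```python
import numpy as np

def event_triggered_rollout(f, mu, x0, dt=0.01, T=10.0, sigma=0.5, L=1.0):
    """Generic event-triggered control loop (illustrative only).
    The control is recomputed only when the gap between the current state
    and the last sampled state exceeds a state-dependent threshold."""
    x = np.array(x0, dtype=float)
    x_hat = x.copy()                    # last sampled state
    u = mu(x_hat)                       # control held constant between events
    events = 0
    for _ in range(int(T / dt)):
        gap = np.linalg.norm(x - x_hat)
        if gap > sigma * np.linalg.norm(x) / L:   # illustrative trigger rule
            x_hat = x.copy()            # sample the state
            u = mu(x_hat)               # update the control law
            events += 1
        x = x + dt * f(x, u)            # Euler step of the plant
    return x, events
```

A self-triggered variant would instead compute, at each event, the next time at which the condition could possibly be violated, so the state need not be monitored continuously.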
4. Parallel Control for Optimal Tracking via Adaptive Dynamic Programming (Cited: 20)
Authors: Jingwei Lu, Qinglai Wei, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2020, No. 6, pp. 1662-1674.
This paper studies the problem of optimal parallel tracking control for continuous-time general nonlinear systems. Unlike existing optimal state feedback control, the control input of the optimal parallel control is introduced into the feedback system. However, because the control input enters the feedback system, optimal state feedback control methods cannot be applied directly. To address this problem, an augmented system and an augmented performance index function are first proposed, transforming the general nonlinear system into an affine nonlinear system. The difference between the optimal parallel control and the optimal state feedback control is analyzed theoretically: it is proven that the optimal parallel control with the augmented performance index function can be seen as a suboptimal state feedback control with the traditional performance index function. Moreover, an adaptive dynamic programming (ADP) technique is utilized to implement the optimal parallel tracking control, using a critic neural network (NN) to approximate the value function online. The stability analysis of the closed-loop system is performed using Lyapunov theory, and the tracking error and NN weight errors are shown to be uniformly ultimately bounded (UUB). The optimal parallel controller also guarantees the continuity of the control input when the reference signals contain finite jump discontinuities. Finally, the effectiveness of the developed optimal parallel control method is verified in two cases.
Keywords: adaptive dynamic programming (ADP); nonlinear optimal control; parallel controller; parallel control theory; parallel system; tracking control; neural network (NN)
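One common way to read the augmentation step described above (our notation, not necessarily the paper's exact construction) is to treat the control as an additional state and its rate as the new input:

$$
\dot{x} = f(x,u),\quad \dot{u} = v
\;\;\Longrightarrow\;\;
\dot{X} = F(X) + G\,v,\qquad
X = \begin{bmatrix} x \\ u \end{bmatrix},\;
F(X) = \begin{bmatrix} f(x,u) \\ 0 \end{bmatrix},\;
G = \begin{bmatrix} 0 \\ I \end{bmatrix}.
$$

The augmented dynamics are affine in the new input $v$, so affine-system ADP machinery applies, and penalizing $v$ in the augmented performance index naturally yields a continuous control signal $u$.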
5. Optimal Constrained Self-learning Battery Sequential Management in Microgrid Via Adaptive Dynamic Programming (Cited: 14)
Authors: Qinglai Wei, Derong Liu, Yu Liu, Ruizhuo Song. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2017, No. 2, pp. 168-176.
This paper concerns a novel optimal self-learning battery sequential control scheme for smart home energy systems. The main idea is to use the adaptive dynamic programming (ADP) technique to obtain the optimal battery sequential control iteratively. First, the battery energy management system model is established, taking the power efficiency of the battery into account. Next, considering the power constraints of the battery, a new non-quadratic performance index function is established, which guarantees that the iterative control law never exceeds the maximum charging/discharging power of the battery, thereby extending its service life. Then, the convergence properties of the iterative ADP algorithm are analyzed, guaranteeing that the iterative value function and the iterative control law both converge to the optimum. Finally, simulation and comparison results are given to illustrate the performance of the presented method.
Keywords: adaptive critic designs; adaptive dynamic programming (ADP); approximate dynamic programming; battery management; energy management system; neuro-dynamic programming; optimal control; smart home
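A standard way to build such a bound-enforcing non-quadratic cost (shown here in its generic form; the paper's exact functional may differ) is

$$
W(u) \;=\; 2\int_{0}^{u}\bigl(\bar{P}\,\tanh^{-1}(s/\bar{P})\bigr)^{\!\top} R\,\mathrm{d}s,
$$

whose greedy minimizer takes the form $u = -\bar{P}\tanh\!\bigl(\tfrac{1}{2\bar{P}}R^{-1}g^{\top}\nabla J\bigr)$ and is therefore automatically confined to $[-\bar{P},\bar{P}]$, with $\bar{P}$ playing the role of the maximum charging/discharging power.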
6. Residential Energy Scheduling for Variable Weather Solar Energy Based on Adaptive Dynamic Programming (Cited: 14)
Authors: Derong Liu, Yancai Xu, Qinglai Wei, Xinliang Liu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2018, No. 1, pp. 36-46.
Residential scheduling of solar energy is an important research area of the smart grid. On the demand side, household loads, storage batteries, the outside public utility grid, and renewable energy resources combine into a nonlinear, time-varying, uncertain, and complex system that is difficult to manage or optimize. Many nations have already applied residential real-time pricing to balance the burden on their grids. In order to enhance the electricity efficiency of the residential microgrid, this paper presents an action-dependent heuristic dynamic programming (ADHDP) method to solve the residential energy scheduling problem. The highlights of this paper are as follows. First, weather-type classification is adopted to establish three types of programming models based on the features of the solar energy, and priorities are assigned to the different energy resources to reduce electrical transmission losses. Second, three ADHDP-based neural networks, which can update themselves during operation, are designed to manage the flows of electricity. Third, simulation results show that the proposed scheduling method effectively reduces the total electricity cost and improves load balancing. A comparison with the particle swarm optimization algorithm further demonstrates that the present method is promising for cost-saving energy management.
Keywords: action-dependent heuristic dynamic programming; adaptive dynamic programming; control strategy; residential energy management; smart grid
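The resource-priority idea mentioned in the abstract can be illustrated with a simple dispatch rule; the function, its parameters, and the units below are ours, purely for illustration, and are not the paper's learned ADHDP policy:

```python
def dispatch_step(load, solar, soc, p_batt_max, soc_min, soc_max, dt=1.0):
    """One scheduling step of a simple priority rule: solar first, then the
    battery, then the utility grid.  Powers in kW, state of charge in kWh."""
    from_solar = min(load, solar)
    residual = load - from_solar           # demand not met by solar
    surplus = solar - from_solar           # solar left over after the load
    # Battery discharges to cover the residual, limited by power and energy.
    from_batt = min(residual, p_batt_max, max(soc - soc_min, 0.0) / dt)
    # Leftover surplus charges the battery, limited the same way.
    to_batt = min(surplus, p_batt_max, max(soc_max - soc, 0.0) / dt)
    from_grid = residual - from_batt       # remainder bought from the grid
    new_soc = soc + (to_batt - from_batt) * dt
    return from_solar, from_batt, to_batt, from_grid, new_soc
```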
7. Policy iteration optimal tracking control for chaotic systems by using an adaptive dynamic programming approach (Cited: 1)
Authors: Qinglai Wei, Derong Liu, Yancai Xu. Chinese Physics B (SCIE, EI, CAS, CSCD), 2015, No. 3, pp. 87-94.
A policy iteration algorithm of adaptive dynamic programming (ADP) is developed to solve the optimal tracking control problem for a class of discrete-time chaotic systems. By system transformations, the optimal tracking problem is transformed into an optimal regulation problem. The policy iteration algorithm for discrete-time chaotic systems is first described. Then, the convergence and admissibility properties of the developed policy iteration algorithm are presented, showing that the transformed chaotic system can be stabilized under every iterative control law and that the iterative performance index function simultaneously converges to the optimum. By implementing the policy iteration algorithm via neural networks, the developed optimal tracking control scheme for chaotic systems is verified by a simulation.
Keywords: adaptive critic designs; adaptive dynamic programming; approximate dynamic programming; neuro-dynamic programming
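After the tracking-to-regulation transformation, the policy iteration skeleton that papers of this type rely on reads (generic notation, ours):

$$
\begin{aligned}
&\text{policy evaluation:} && V_i(x_k) \;=\; U\bigl(x_k,\mu_i(x_k)\bigr) + V_i\bigl(F(x_k,\mu_i(x_k))\bigr),\\
&\text{policy improvement:} && \mu_{i+1}(x_k) \;=\; \arg\min_{u}\bigl\{\,U(x_k,u) + V_i\bigl(F(x_k,u)\bigr)\bigr\},
\end{aligned}
$$

started from an admissible policy $\mu_0$; the admissibility of every $\mu_i$ and the convergence $V_i \to J^{*}$ are exactly the properties the paper establishes for the chaotic-system setting.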
8. Adaptive fault-tolerant control for non-minimum phase hypersonic vehicles based on adaptive dynamic programming
Authors: Le Wang, Ruiyun Qi, Bin Jiang. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 290-311.
In this paper, a novel adaptive fault-tolerant control (FTC) strategy is proposed for non-minimum phase hypersonic vehicles (HSVs) affected by actuator faults and parameter uncertainties. The strategy is based on the output redefinition method and adaptive dynamic programming (ADP). The intelligent FTC scheme consists of two main parts: a basic fault-tolerant, stabilizing controller and an ADP-based supplementary controller. In the basic FTC part, an output redefinition approach is designed to make the zero dynamics stable with respect to the new output. Then, the ideal internal dynamics (IID) are obtained using an optimal bounded inversion approach, and a tracking controller is designed for the new output to realize output tracking of the non-minimum phase HSV system. For the ADP-based compensation part, action-dependent heuristic dynamic programming (ADHDP) with an actor-critic learning structure is utilized to further optimize the tracking performance of the HSV control system. Finally, simulation results are provided to verify the effectiveness and efficiency of the proposed FTC algorithm.
Keywords: hypersonic vehicle; fault-tolerant control; non-minimum phase system; adaptive control; nonlinear control; adaptive dynamic programming
9. Adaptive dynamic programming for online solution of a zero-sum differential game (Cited: 10)
Authors: Draguna Vrabie, Frank Lewis. Journal of Control Theory and Applications (EI), 2011, No. 3, pp. 353-360.
This paper presents an approximate/adaptive dynamic programming (ADP) algorithm, based on the idea of integral reinforcement learning (IRL), to determine online the Nash equilibrium solution of the two-player zero-sum differential game with linear dynamics and an infinite-horizon quadratic cost. The algorithm is built around an iterative method developed in the control engineering community for solving the continuous-time game algebraic Riccati equation (CT-GARE), which underlies the game problem. We show how the ADP techniques enhance the capabilities of the offline method, allowing an online solution without requiring complete knowledge of the system dynamics. The feasibility of the ADP scheme is demonstrated in simulation for a power system control application, where the adaptation goal is the control policy that optimally handles the highest load disturbance.
Keywords: approximate/adaptive dynamic programming; game algebraic Riccati equation; zero-sum differential game; Nash equilibrium
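For reference, the CT-GARE underlying the linear-quadratic zero-sum game has the standard form (notation ours; $\gamma$ is the disturbance attenuation level):

$$
A^{\top}P + PA + Q - PBR^{-1}B^{\top}P + \frac{1}{\gamma^{2}}\,PDD^{\top}P = 0,
$$

with saddle-point policies $u = -R^{-1}B^{\top}Px$ for the minimizing player and $d = \frac{1}{\gamma^{2}}D^{\top}Px$ for the maximizing one; the IRL step lets $P$ be updated from measured trajectory data instead of from full knowledge of the system dynamics.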
10. Adaptive event-triggered distributed optimal guidance design via adaptive dynamic programming (Cited: 4)
Authors: Teng Long, Yan Cao, Jingliang Sun, Guangtong Xu. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2022, No. 7, pp. 113-127.
In this paper, the multi-missile cooperative guidance system is formulated as a general nonlinear multi-agent system. To save limited communication resources, an adaptive event-triggered optimal guidance law is proposed by designing a synchronization-error-driven triggering condition, which combines consensus control with the adaptive dynamic programming (ADP) technique. The resulting event-triggered distributed control law is obtained by finding an approximate solution of the event-triggered coupled Hamilton-Jacobi-Bellman (HJB) equation. To this end, a critic network architecture is constructed, in which an adaptive weight updating law is designed to estimate the cooperative optimal cost function online. The event-triggered closed-loop system is then decomposed into two subsystems: one with flow dynamics and one with jump dynamics. Using the Lyapunov method, the stability of the closed-loop system is guaranteed and all signals are shown to be uniformly ultimately bounded (UUB); furthermore, Zeno behavior is avoided. Simulation results are finally provided to demonstrate the effectiveness of the proposed method.
Keywords: adaptive dynamic programming; distributed control; event-triggered; guidance and control; multi-agent system
11. Finite horizon optimal control of discrete-time nonlinear systems with unfixed initial state using adaptive dynamic programming (Cited: 2)
Authors: Qinglai Wei, Derong Liu. Journal of Control Theory and Applications (EI), 2011, No. 3, pp. 381-390.
In this paper, we solve the finite-horizon optimal control problem for a class of discrete-time nonlinear systems with unfixed initial state using the adaptive dynamic programming (ADP) approach. A new ε-optimal control algorithm based on the iterative ADP approach is proposed, which makes the performance index function converge iteratively, within a prescribed error ε and in finite time, to the greatest lower bound of all performance indices. The optimal number of control steps can also be obtained by the proposed ε-optimal control algorithm for the situation where the initial state of the system is unfixed. Neural networks are used to approximate the performance index function and to compute the optimal control policy, respectively, facilitating the implementation of the ε-optimal control algorithm. Finally, a simulation example is given to illustrate the results of the proposed method.
Keywords: adaptive dynamic programming; unfixed initial state; optimal control; finite time; neural networks
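A compact way to state the stopping rule implied by the abstract (our notation; the paper's precise conditions may differ) is to run value iteration

$$
V_{i+1}(x_k) \;=\; \min_{u_k}\bigl\{\,U(x_k,u_k) + V_i\bigl(F(x_k,u_k)\bigr)\bigr\},\qquad V_0 \equiv 0,
$$

until $\bigl|V_{i+1}(x_0) - V_i(x_0)\bigr| \le \varepsilon$; the first index $i$ at which this holds yields both an ε-optimal finite-horizon cost and the associated number of control steps for the initial state $x_0$.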
12. Neural-network-based stochastic linear quadratic optimal tracking control scheme for unknown discrete-time systems using adaptive dynamic programming (Cited: 2)
Authors: Xin Chen, Fang Wang. Control Theory and Technology (EI, CSCD), 2021, No. 3, pp. 315-327.
In this paper, a stochastic linear quadratic optimal tracking scheme is proposed for unknown linear discrete-time (DT) systems based on an adaptive dynamic programming (ADP) algorithm. First, an augmented system composed of the original system and the command generator is constructed, and an augmented stochastic algebraic equation is derived from it. Next, to obtain the optimal control strategy, the stochastic case is converted into a deterministic one by system transformation, and an ADP algorithm is proposed with convergence analysis. To implement the ADP algorithm, three back-propagation neural networks (a model network, a critic network, and an action network) are devised to approximate the unknown system model, the optimal value function, and the optimal control strategy, respectively. Finally, the obtained optimal control strategy is applied to the original stochastic system, and two simulations are provided to demonstrate the effectiveness of the proposed algorithm.
Keywords: stochastic system; optimal tracking control; adaptive dynamic programming; neural networks
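The augmentation described above is conventionally written as follows (generic LQ tracking notation, ours, not the paper's exact equations):

$$
X_{k+1} =
\begin{bmatrix} x_{k+1} \\ r_{k+1} \end{bmatrix}
=
\begin{bmatrix} A & 0 \\ 0 & F \end{bmatrix} X_k
+ \begin{bmatrix} B \\ 0 \end{bmatrix} u_k + \begin{bmatrix} w_k \\ 0 \end{bmatrix},
\qquad
J = \mathbb{E}\sum_{k=0}^{\infty} \gamma^{k}\bigl[(C x_k - r_k)^{\top} Q\,(C x_k - r_k) + u_k^{\top} R\, u_k\bigr],
$$

where $r_{k+1} = F r_k$ is the command generator and $w_k$ the process noise; once the expectation is removed by the system transformation, standard deterministic ADP iterations apply to the augmented state $X_k$.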
13. Optimal regulation of uncertain dynamic systems using adaptive dynamic programming (Cited: 2)
Authors: Hao Xu, Qiming Zhao, S. Jagannathan. Journal of Control and Decision (EI), 2014, No. 3, pp. 226-256.
In this tutorial paper, the finite-horizon optimal adaptive regulation of linear and nonlinear dynamic systems with unknown system dynamics is presented in a forward-in-time manner using adaptive dynamic programming (ADP). An adaptive estimator (AE) based on the idea of Q-learning is introduced to relax the requirement of known system dynamics in the linear case, while a neural-network-based identifier is utilised for nonlinear systems. The time-varying nature of the solution to the Bellman/Hamilton-Jacobi-Bellman equation is handled by utilising a time-dependent basis function, while the terminal constraint is incorporated into the update law of the AE/identifier when solving for the optimal feedback control. Utilising an initial admissible control, the proposed optimal regulation scheme for uncertain linear and nonlinear systems yields a forward-in-time, online solution without using value and/or policy iterations. For linear systems, an adaptive observer is utilised to relax the need for state availability, so that the optimal adaptive control design depends only on the reconstructed states. Finally, optimal control is covered for nonlinear networked control systems in which the feedback loop is closed via a communication network. The effectiveness of the proposed approach is verified by simulation results. The end result is a variant of a roll-out scheme in ADP, wherein an initial admissible policy is selected as the base policy and the control policy is enhanced by a one-time policy improvement at each sampling interval.
Keywords: adaptive dynamic programming; finite horizon optimal control
14. Value Iteration-Based Cooperative Adaptive Optimal Control for Multi-Player Differential Games With Incomplete Information
Authors: Yun Zhang, Lulu Zhang, Yunze Cai. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 3, pp. 690-697.
This paper presents a novel cooperative value iteration (VI)-based adaptive dynamic programming method for multi-player differential game models, together with a convergence proof. The players are divided into two groups in the learning process and adapt their policies sequentially. Our method removes the dependence on admissible initial policies, which is one of the main drawbacks of PI-based frameworks. Furthermore, the algorithm enables the players to adapt their control policies without full knowledge of the other players' system parameters or control laws. The efficacy of the method is illustrated by three examples.
Keywords: adaptive dynamic programming; incomplete information; multi-player differential game; value iteration
15. Adaptive dynamic programming for linear impulse systems
Authors: Xiao-hua Wang, Juan-juan Yu, Yao Huang, Hua Wang, Zhong-hua Miao. Journal of Zhejiang University-Science C (Computers and Electronics) (SCIE, EI), 2014, No. 1, pp. 43-50.
We investigate the optimization of linear impulse systems with a reinforcement-learning-based adaptive dynamic programming (ADP) method. For linear impulse systems, the optimal objective function is shown to be a quadratic form of the pre-impulse states. The ADP method provides solutions that iteratively converge to the optimal objective function. If an initial guess of the pre-impulse objective function is selected as a quadratic form of the pre-impulse states, the objective function converges to the optimum through the ADP iterations. Although direct use of the quadratic objective function of the states within the ADP method is theoretically possible, a numerical singularity problem may occur, due to the matrix inversion involved, as the system dimensionality increases. A neural-network-based ADP method can circumvent this problem. A neural network with polynomial activation functions is selected to approximate the pre-impulse objective function and is trained iteratively with the ADP method to achieve optimal control. After successful training, the optimal impulse control can be derived. Simulations are presented for illustrative purposes.
Keywords: adaptive dynamic programming (ADP); impulse system; optimal control; neural network
16. State of the Art of Adaptive Dynamic Programming and Reinforcement Learning
Authors: Derong Liu, Mingming Ha, Shan Xue. CAAI Artificial Intelligence Research, 2022, No. 2, pp. 93-110.
This article introduces the state-of-the-art development of adaptive dynamic programming and reinforcement learning (ADPRL). First, algorithms in reinforcement learning (RL) are introduced and their roots in dynamic programming are illustrated. Adaptive dynamic programming (ADP) is then introduced following a brief discussion of dynamic programming. Researchers in ADP and RL have enjoyed rapid progress over the past decade, from algorithms, to convergence and optimality analyses, to stability results. Several key steps in the recent theoretical developments of ADPRL are highlighted, along with some future perspectives. In particular, convergence and optimality results of value iteration and policy iteration are reviewed, followed by an introduction to the most recent results on stability analysis of value iteration algorithms.
Keywords: adaptive dynamic programming; approximate dynamic programming; adaptive critic designs; neuro-dynamic programming; neural dynamic programming; reinforcement learning; intelligent control; learning control; optimal control
17. PDP: Parallel Dynamic Programming (Cited: 15)
Authors: Fei-Yue Wang, Jie Zhang, Qinglai Wei, Xinhu Zheng, Li Li. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2017, No. 1, pp. 1-5.
Deep reinforcement learning is a focal research area in artificial intelligence. The principle of optimality in dynamic programming is key to the success of reinforcement learning methods. The principle of adaptive dynamic programming (ADP) is first presented in place of direct dynamic programming (DP), and the inherent relationship between ADP and deep reinforcement learning is developed. Next, analytics intelligence is discussed as a necessary requirement for real reinforcement learning. Finally, the principle of parallel dynamic programming, which integrates dynamic programming and analytics intelligence, is presented as a direction for future computational intelligence.
Keywords: parallel dynamic programming; dynamic programming; adaptive dynamic programming; reinforcement learning; deep learning; neural networks; artificial intelligence
18. A novel stable value iteration-based approximate dynamic programming algorithm for discrete-time nonlinear systems
Authors: Yanhua Qu, Anna Wang, Sheng Lin. Chinese Physics B (SCIE, EI, CAS, CSCD), 2018, No. 1, pp. 228-235.
The convergence and stability of a value-iteration-based adaptive dynamic programming (ADP) algorithm are considered for discrete-time nonlinear systems with a discounted quadratic performance index. Beyond achieving a good approximation structure, the iterative feedback control law must guarantee closed-loop stability. Specifically, it is first proved that the iterative value function sequence converges precisely to the optimum. Second, the necessary and sufficient condition for the optimal value function to serve as a Lyapunov function is investigated. We prove that, in the infinite-horizon case, there exists a finite horizon length for which the iterative feedback control law provides stability, which increases the practicality of the proposed value iteration algorithm. Neural networks (NNs) are employed to approximate the value functions and the optimal feedback control laws, allowing the algorithm to be implemented without knowledge of the internal dynamics of the system. Finally, a simulation example demonstrates the effectiveness of the developed optimal control method.
Keywords: adaptive dynamic programming (ADP); convergence; stability; discounted quadratic performance index
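A toy, tabular stand-in for the NN-based value iteration scheme discussed above (one-dimensional state and control grids; the system, cost weights, and discount factor are placeholders chosen by us, not the paper's example):

```python
import numpy as np

def value_iteration_1d(f, g, Q=1.0, R=1.0, gamma=0.95,
                       xs=np.linspace(-2, 2, 201),
                       us=np.linspace(-1, 1, 201), sweeps=200):
    """Tabular value iteration for a 1-D discrete-time system
    x_{k+1} = f(x_k) + g(x_k) * u_k with discounted quadratic cost."""
    V = np.zeros_like(xs)
    for _ in range(sweeps):
        V_new = np.empty_like(V)
        for i, x in enumerate(xs):
            xn = f(x) + g(x) * us               # next states for every candidate u
            Vn = np.interp(xn, xs, V)           # interpolate V at the next states
            V_new[i] = np.min(Q * x**2 + R * us**2 + gamma * Vn)
        V = V_new
    return V

# Example usage (toy dynamics): V = value_iteration_1d(lambda x: 0.9*np.sin(x), lambda x: 1.0)
```

The NN implementation in the paper replaces the lookup table and interpolation with critic and action networks, which is what makes the approach scale beyond low-dimensional state spaces.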
19. Discounted Iterative Adaptive Critic Designs With Novel Stability Analysis for Tracking Control (Cited: 6)
Authors: Mingming Ha, Ding Wang, Derong Liu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 7, pp. 1262-1272.
The core task of tracking control is to make the controlled plant track a desired trajectory. The traditional performance index used in previous studies cannot completely eliminate the tracking error as the number of time steps increases. In this paper, a new cost function is introduced to develop a value-iteration-based adaptive critic framework for the tracking control problem. Unlike in the regulator problem, the iterative value function of the tracking control problem cannot be regarded as a Lyapunov function, so a novel stability analysis method is developed to guarantee that the tracking error converges to zero. The discounted iterative scheme under the new cost function is elaborated for the special case of linear systems. Finally, the tracking performance of the present scheme is demonstrated by numerical results and compared with those of traditional approaches.
Keywords: adaptive critic design; adaptive dynamic programming (ADP); approximate dynamic programming; discrete-time nonlinear systems; reinforcement learning; stability analysis; tracking control; value iteration (VI)
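The limitation of the traditional index, and the kind of fix the abstract alludes to, can be summarized as follows (our notation; the paper's exact cost may differ):

$$
J_{\text{trad}} = \sum_{k=0}^{\infty}\gamma^{k}\bigl[e_k^{\top}Q\,e_k + u_k^{\top}R\,u_k\bigr]
\qquad\text{vs.}\qquad
J_{\text{new}} = \sum_{k=0}^{\infty}\gamma^{k}\bigl[e_k^{\top}Q\,e_k + (u_k-u_k^{e})^{\top}R\,(u_k-u_k^{e})\bigr],
$$

where $e_k$ is the tracking error and $u_k^{e}$ is the steady-state control that keeps the plant on the reference trajectory. Penalizing $u_k$ itself leaves a residual error whenever a nonzero steady-state control is needed, whereas penalizing the deviation $u_k - u_k^{e}$ allows the tracking error to be driven to zero.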
20. Chaotic system optimal tracking using data-based synchronous method with unknown dynamics and disturbances
Authors: Ruizhuo Song, Qinglai Wei. Chinese Physics B (SCIE, EI, CAS, CSCD), 2017, No. 3, pp. 268-275.
We develop an optimal tracking control method for chaotic systems with unknown dynamics and disturbances. The method allows the optimal cost function and the corresponding tracking control to be updated synchronously. According to the tracking error and the reference dynamics, an augmented system is constructed and the optimal tracking control problem is defined. Policy iteration (PI) is introduced to solve the min-max optimization problem. An off-policy adaptive dynamic programming (ADP) algorithm is then proposed to find the solution of the tracking Hamilton-Jacobi-Isaacs (HJI) equation online, using only measured data and without any knowledge of the system dynamics. A critic neural network (CNN), an action neural network (ANN), and a disturbance neural network (DNN) are used to approximate the cost function, the control, and the disturbance, respectively. The weights of these networks compose the augmented weight matrix, which is proven to be uniformly ultimately bounded (UUB). The convergence of the tracking error system is also proven. Two examples are given to show the effectiveness of the proposed synchronous solution method for the chaotic system tracking problem.
Keywords: adaptive dynamic programming; approximate dynamic programming; chaotic system; zero-sum
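The min-max problem the abstract refers to is governed by a tracking Hamilton-Jacobi-Isaacs equation of the standard zero-sum form (our notation for an augmented system $\dot{X} = F(X) + G(X)u + K(X)d$):

$$
0 \;=\; \min_{u}\max_{d}\Bigl[\,e^{\top}Q\,e + u^{\top}R\,u - \gamma^{2}d^{\top}d + \nabla V^{\top}\bigl(F(X) + G(X)u + K(X)d\bigr)\Bigr],
$$

whose saddle point gives the optimal control $u^{*} = -\tfrac{1}{2}R^{-1}G^{\top}\nabla V$ and the worst-case disturbance $d^{*} = \tfrac{1}{2\gamma^{2}}K^{\top}\nabla V$; the off-policy ADP algorithm estimates $\nabla V$ (via the CNN) from measured data rather than from the dynamics.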