Abstract: The double cost function linear quadratic regulator (DLQR) is developed from LQR theory to solve an optimal control problem with a general nonlinear cost function. In addition to the traditional LQ cost function, a second free-form cost function is introduced to express the physical requirements directly and to optimize the weights of the LQ cost function using search algorithms. As an example, DLQR is applied to determine the control input in a front steering angle compensation control (FSAC) model for heavy-duty vehicles. Brief simulations show that DLQR is powerful enough to specify engineering requirements correctly and to balance many factors effectively. DLQR thus expands the concept and applicable field of LQR to the optimization of systems with a free-form cost function.
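The double-cost idea above can be sketched as an inner LQR solve wrapped in an outer search over the LQ weights. The following is a minimal illustration, not the authors' implementation: the plant (a double integrator), the candidate weight sets, and the free-form cost (the closed-loop spectral abscissa) are all assumptions chosen for the toy example.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time ARE and return the LQR feedback gain K."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

def dlqr_search(A, B, R, J_free, q_candidates):
    """Hypothetical double-cost loop: search the LQ weights q to
    minimize a free-form cost J_free evaluated on the closed loop."""
    best_q, best_cost = None, np.inf
    for q in q_candidates:
        Q = np.diag(q)
        K = lqr_gain(A, B, Q, R)
        cost = J_free(A - B @ K)      # free-form cost on the closed loop
        if cost < best_cost:
            best_q, best_cost = q, cost
    return best_q, best_cost

# Toy plant: double integrator; free-form cost: spectral abscissa
# (most slowly decaying closed-loop pole), which we want small.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])
J = lambda Acl: max(np.linalg.eigvals(Acl).real)
q_best, c_best = dlqr_search(A, B, R, J, [(1, 1), (10, 1), (100, 10)])
```

Among the three candidate weightings, the heaviest state penalty pushes the closed-loop poles furthest left, so the search selects it; in a real application the free-form cost would instead encode an engineering requirement such as overshoot or actuator usage.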
Funding: Project (51105287) supported by the National Natural Science Foundation of China
Abstract: To improve the tracking of attitude commands over the reentry phase of vehicles, the use of the state-dependent Riccati equation (SDRE) method for attitude controller design of reentry vehicles was investigated. Guidance commands are generated based on an optimal guidance law. The SDRE method factorizes the nonlinear dynamics into a state vector and a state-dependent matrix-valued function. State-dependent coefficients are derived from the reentry motion equations in the pitch and yaw channels. Unlike a constant weighting matrix Q, the elements of Q are set as functions of the state error so as to obtain satisfactory feedback and eliminate the state error rapidly; the SDRE formulation is then realized. The Riccati equation is solved in real time with the Schur algorithm, and the state feedback control law u(x) is derived with the linear quadratic regulator (LQR) method. Simulation results show that the SDRE controller tracks attitude commands steadily and that the impact point error of the reentry vehicle is acceptable. Compared with a PID controller, the SDRE controller tracks attitude commands better and with smaller control surface deflection: the attitude tracking error remains within 5°, and the control deflection within 30°.
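One SDRE step can be sketched as: factor the nonlinear dynamics into state-dependent-coefficient (SDC) form ẋ = A(x)x + Bu, build a state-error-dependent Q, solve the ARE at the current state, and apply u(x) = −K(x)e. The plant here (a pendulum-like second-order system) and the quadratic error weighting are illustrative assumptions, not the reentry model of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant: xdot1 = x2, xdot2 = -k*sin(x1) + u.
# SDC factorization: sin(x1) = (sin(x1)/x1) * x1, with limit 1 at x1 = 0.
def A_of_x(x, k=1.0):
    s = np.sinc(x[0] / np.pi)         # sin(x1)/x1, well defined at x1 = 0
    return np.array([[0.0, 1.0], [-k * s, 0.0]])

B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])

def Q_of_error(e):
    # State-dependent weights: penalize larger errors more heavily,
    # mimicking the error-dependent Q described in the abstract.
    return np.diag(1.0 + e**2)

def sdre_control(x, x_ref):
    """Solve the ARE at the current state and return u(x) = -K(x) e."""
    e = x - x_ref
    P = solve_continuous_are(A_of_x(x), B, Q_of_error(e), R)
    K = np.linalg.solve(R, B.T @ P)
    return -K @ e

# A positive position error should produce a negative (restoring) command.
u = sdre_control(np.array([0.5, 0.0]), np.zeros(2))
```

In a real controller this solve would run at each sample instant (the abstract's real-time Schur solution); `solve_continuous_are` is used here purely as a stand-in ARE solver.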
Funding: supported by the National Natural Science Foundation of China (No. 61375072) (50%) and the Natural Science Foundation of Zhejiang Province, China (No. LQ16F030005) (50%)
Abstract: In this paper, three optimal linear formation control algorithms are proposed for first-order linear multi-agent systems from a linear quadratic regulator (LQR) perspective, with cost functions consisting of both an interaction energy cost and an individual energy cost, because both the collective objective (such as formation or consensus) and the individual goal of each agent are important for the overall system. First, an optimal formation algorithm is proposed for first-order multi-agent systems without initial physical couplings. The optimal control parameter matrix of the algorithm is the solution to an algebraic Riccati equation (ARE), and it is shown that this matrix is the sum of a Laplacian matrix and a positive definite diagonal matrix. Next, for physically interconnected multi-agent systems, the optimal formation algorithm is presented, and the corresponding parameter matrix is obtained from the solution to a group of quadratic equations with one unknown. Finally, if the communication topology between agents is fixed, the local feedback gain is obtained from the solution to a quadratic equation with one unknown, derived by differentiating the cost function with respect to the local feedback gain. Numerical examples validate the effectiveness of the proposed approaches and illustrate the geometric performance of the multi-agent systems.
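The "Laplacian plus positive definite diagonal" structure of the ARE solution can be illustrated on single-integrator agents. The construction below is a sketch under strong simplifying assumptions (ẋ = u, so A = 0, with B = R = I, and a state weight chosen so the ARE solution is exactly L + D); it demonstrates the claimed structure rather than reproducing the paper's general derivation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 3-agent path graph: Laplacian L (interaction energy)
# and a positive definite diagonal D (individual energy).
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
D = np.diag([1.0, 1.0, 1.0])

# Single-integrator dynamics xdot = u: A = 0, B = I, R = I.
A = np.zeros((3, 3))
B = np.eye(3)
R = np.eye(3)

# With A = 0, B = R = I, the ARE reduces to P @ P = Q, so choosing
# Q = (L + D)^2 makes the unique positive definite solution P = L + D.
M = L + D
Q = M @ M
P = solve_continuous_are(A, B, Q, R)
# P is a Laplacian plus a positive definite diagonal, as claimed,
# and the closed loop xdot = -P x is stable since P is positive definite.
```

The resulting feedback −Px is distributed in the graph-theoretic sense: each agent's input depends only on its own state (through D) and its neighbors' states (through L).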