Abstract: The essence of line search is a one-dimensional nonlinear minimization problem, which is an important part of multidimensional nonlinear optimization and consumes most of the operation count when solving an optimization problem. To improve efficiency, we start from quadratic interpolation, combine it with the quadratic convergence rate of Newton's method, and adopt the idea of Anderson-Björck extrapolation; on this basis we present a rapidly convergent algorithm and give the corresponding convergence conclusions. Finally, we carried out numerical experiments with some well-known optimization test functions as well as an application test on ANN learning examples. The experimental results show the validity of the algorithm.
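To illustrate the quadratic-interpolation starting point described above, the following is a minimal Python sketch of plain successive parabolic interpolation applied to a line-search function phi(alpha) = f(x + alpha*d). It is not the authors' algorithm: the Newton-step acceleration and the Anderson-Björck extrapolation mentioned in the abstract are omitted, and the function name, test function, and tolerance settings are illustrative assumptions.

```python
import numpy as np

def quadratic_interpolation_minimize(phi, x1, x2, x3, tol=1e-8, max_iter=50):
    """Successive parabolic interpolation for a 1-D minimum of phi.

    x1, x2, x3 are three trial step lengths that should bracket a minimizer.
    """
    # Keep (value, point) pairs sorted so pts[0] is always the best point.
    pts = sorted((phi(x), x) for x in (x1, x2, x3))
    for _ in range(max_iter):
        (fa, a), (fb, b), (fc, c) = pts
        # Vertex of the parabola interpolating the three current points.
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if abs(den) < 1e-15:          # (nearly) collinear points: stop
            break
        u = b - 0.5 * num / den
        if abs(u - a) < tol:          # trial point agrees with best so far
            break
        pts = sorted(pts + [(phi(u), u)])[:3]   # keep the three best points
    return pts[0][1]

# Line search for f(x) = x^T x along direction d, starting from x0:
# phi(alpha) = f(x0 + alpha * d) has its exact minimizer at alpha = 2.5.
f = lambda x: float(x @ x)
x0, d = np.array([3.0, -2.0]), np.array([-1.0, 1.0])
alpha = quadratic_interpolation_minimize(lambda a: f(x0 + a * d), 0.0, 1.0, 4.0)
print(alpha)   # ~2.5
```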
Abstract: The double cost function linear quadratic regulator (DLQR) is developed from LQR theory to solve an optimal control problem with a general nonlinear cost function. In addition to the traditional LQ cost function, another free-form cost function is introduced to express the physical requirements plainly and to optimize the weights of the LQ cost function using search algorithms. As an instance, DLQR was applied to determine the control input in a front steering angle compensation control (FSAC) model for heavy-duty vehicles. Brief simulations show that DLQR is powerful enough to specify engineering requirements correctly and to balance many factors effectively. DLQR thus expands the concept and applicable field of LQR to the optimization of systems with a free-form cost function.
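The following is a hedged Python sketch of the weight-tuning idea behind DLQR: an inner LQR problem with weights Q and R, and an outer search algorithm that adjusts those weights to minimize a second, free-form cost evaluated on the simulated closed loop. The plant matrices, the particular free-form cost (overshoot plus control effort), and the Nelder-Mead search are illustrative assumptions, not the FSAC vehicle model or the exact procedure from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.optimize import minimize

# A hypothetical 2-state, 1-input plant standing in for the FSAC model
# (the actual heavy-duty-vehicle model is not given in the abstract).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])

def lqr_gain(q1, q2, r):
    """Standard continuous-time LQR gain for diagonal weights."""
    Q = np.diag([q1, q2])
    R = np.array([[r]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)           # K = R^{-1} B^T P

def free_form_cost(weights):
    """Second, 'free-form' cost: simulate the closed loop and penalise
    negative excursions (overshoot) of x1 plus control effort -- a
    stand-in for the physical requirements DLQR encodes directly."""
    q1, q2, r = np.exp(weights)                  # keep all weights positive
    K = lqr_gain(q1, q2, r)
    x = np.array([1.0, 0.0])                     # initial disturbance
    dt, cost = 0.01, 0.0
    for _ in range(1000):
        u = -K @ x
        cost += dt * (10.0 * max(-x[0], 0.0) + float(u @ u))
        x = x + dt * (A @ x + B @ u).ravel()     # forward-Euler step
    return cost

# Outer search over the LQ weights against the free-form cost.
res = minimize(free_form_cost, x0=np.zeros(3), method="Nelder-Mead")
print("tuned (q1, q2, r):", np.exp(res.x))
```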