Abstract: Conjugate gradient optimization algorithms are distinguished by different choices of the parameter in their search directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method is proved under two kinds of common line searches. Firstly, it is shown that, using a reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for a continuously differentiable function with Curry-Altman's step size rule and a bounded level set. Secondly, by using a comparison technique, some general convergence properties of the new method under another kind of step size rule are established. Numerical experiments show that the new method is efficient compared with the FR conjugate gradient method.
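As a rough illustration of the kind of restarting conjugate gradient iteration described above, the sketch below implements the classical PR (Polak-Ribiere) and HS (Hestenes-Stiefel) choices of the direction parameter with a simple nonnegativity restart and an Armijo backtracking line search. It is an assumption-laden stand-in: the paper's actual hybrid parameter, Curry-Altman step size rule, and convergence machinery are not reproduced here.

```python
import numpy as np

def restarting_cg(f, grad, x0, beta_rule="PR", tol=1e-6, max_iter=1000):
    """Restarting nonlinear conjugate gradient (illustrative sketch).

    beta_rule selects the classical Polak-Ribiere ("PR") or
    Hestenes-Stiefel ("HS") parameter; beta is reset to zero (a restart
    along the steepest-descent direction) whenever it turns negative.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search (a stand-in for the step size
        # rules analysed in the paper).
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d) and alpha > 1e-12:
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        if beta_rule == "PR":                 # Polak-Ribiere parameter
            beta = g_new.dot(y) / g.dot(g)
        else:                                 # Hestenes-Stiefel parameter
            beta = g_new.dot(y) / d.dot(y)
        beta = max(beta, 0.0)                 # restart when beta < 0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function from a standard starting point.
if __name__ == "__main__":
    f = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    grad = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                               200 * (z[1] - z[0]**2)])
    print(restarting_cg(f, grad, np.array([-1.2, 1.0])))
```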
Funding: Supported by the National Natural Science Foundation of China (U1162130), the National High Technology Research and Development Program of China (2006AA05Z226), and the Outstanding Youth Science Foundation of Zhejiang Province (R4100133).
Abstract: This study proposes an efficient indirect approach for general nonlinear dynamic optimization problems without path constraints. The approach incorporates the virtues of both indirect and direct methods: it solves the optimality conditions, as traditional indirect methods do, but uses a discretization technique inspired by direct methods. Compared with other indirect approaches, the proposed approach has two main advantages: (1) the discretized optimization problem only requires unconstrained nonlinear programming (NLP) algorithms such as BFGS (Broyden-Fletcher-Goldfarb-Shanno), rather than constrained NLP algorithms, so the computational efficiency is increased; (2) the relationship between the number of discretized time intervals and the integration error of the four-step Adams predictor-corrector algorithm is established, so the minimal number of time intervals that meets the desired integration tolerance can be estimated. The classic batch reactor problem is tested and compared in detail with literature reports, and the results confirm the effectiveness of the proposed approach. Dealing with path constraints requires extra techniques and will be studied in a second paper.
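The second advantage above hinges on the four-step Adams predictor-corrector scheme used for integration. The sketch below is a generic four-step Adams-Bashforth predictor with an Adams-Moulton corrector, bootstrapped by classical RK4 (an implementation choice, not taken from the abstract); it is not the paper's code, and the error-versus-interval-count relationship itself is not reproduced.

```python
import numpy as np

def adams_pc4(f, t0, y0, h, n_steps):
    """Four-step Adams-Bashforth predictor / Adams-Moulton corrector (sketch)."""
    t = np.empty(n_steps + 1)
    y = np.empty((n_steps + 1,) + np.shape(y0))
    t[0], y[0] = t0, y0
    # Bootstrap the first three steps with classical RK4.
    for i in range(3):
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h / 2, y[i] + h / 2 * k1)
        k3 = f(t[i] + h / 2, y[i] + h / 2 * k2)
        k4 = f(t[i] + h, y[i] + h * k3)
        t[i + 1] = t[i] + h
        y[i + 1] = y[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    F = [f(t[i], y[i]) for i in range(4)]      # last four derivative values
    for i in range(3, n_steps):
        t[i + 1] = t[i] + h
        # Adams-Bashforth 4-step predictor.
        yp = y[i] + h / 24 * (55 * F[3] - 59 * F[2] + 37 * F[1] - 9 * F[0])
        # Adams-Moulton corrector (one correction pass).
        fp = f(t[i + 1], yp)
        y[i + 1] = y[i] + h / 24 * (9 * fp + 19 * F[3] - 5 * F[2] + F[1])
        F = F[1:] + [f(t[i + 1], y[i + 1])]
    return t, y
```

In a framework of this kind, the optimality conditions integrated with such a scheme yield an unconstrained problem that can be handed to a quasi-Newton routine, for example scipy.optimize.minimize with method="BFGS"; how the paper assembles that outer problem is not detailed in the abstract.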
Funding: Supported by the National Science Foundation of China under Grant No. 11371253.
Abstract: This paper proposes an affine scaling derivative-free trust region method with an interior backtracking technique for bound-constrained nonlinear programming. The method is designed to find a stationary point of such a problem using polynomial interpolation models in place of the objective function in the trust region subproblem. Combining the trust region strategy with a line search technique, at each iteration the affine scaling derivative-free trust region subproblem generates a backtracking direction in order to obtain a new accepted interior feasible step. Global convergence and fast local convergence properties are established under reasonable conditions. Some numerical results are also given to show the effectiveness of the proposed algorithm.
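To make the ingredients concrete, the sketch below combines a finite-difference surrogate for the interpolation model, a Coleman-Li style affine scaling built from the distances to the bounds, and an interior backtracking step inside a trust region. These are stand-ins chosen for brevity under stated assumptions; they are not the authors' exact subproblem, model-building procedure, or convergence framework.

```python
import numpy as np

def dfo_tr_bounds(f, x0, lb, ub, delta=1.0, tol=1e-6, max_iter=200):
    """Minimal affine-scaling, derivative-free trust-region sketch for bounds."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = np.clip(np.asarray(x0, float), lb + 1e-8, ub - 1e-8)  # strictly interior start
    h = 1e-6
    for _ in range(max_iter):
        fx = f(x)
        # Forward-difference model gradient (surrogate for an interpolation model).
        g = np.array([(f(x + h * e) - fx) / h for e in np.eye(x.size)])
        if np.linalg.norm(g) < tol:
            break
        # Coleman-Li style scaling from distances to the bounds "pushed against" by g.
        d = np.where(g > 0, x - lb, ub - x)
        p = -d * g
        p *= min(1.0, delta / np.linalg.norm(p))   # stay inside the trust region
        # Interior backtracking: shrink the step until it is feasible and decreases f.
        alpha, accepted = 1.0, False
        while alpha > 1e-12:
            xt = x + alpha * p
            if np.all(xt > lb) and np.all(xt < ub) and f(xt) < fx + 1e-4 * alpha * g.dot(p):
                accepted = True
                break
            alpha *= 0.5
        if accepted:
            x, delta = xt, min(2.0 * delta, 10.0)  # accept step, expand radius
        else:
            delta *= 0.5                           # reject, shrink radius
    return x
```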