Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 10831006, 11021101) and by CAS (Grant No. kjcx-yw-s7).
Abstract: The augmented Lagrangian method is a classical method for solving constrained optimization. Recently, the augmented Lagrangian method has attracted much attention due to its applications to sparse optimization in compressive sensing and to low-rank matrix optimization problems. However, most Lagrangian methods use first-order information to update the Lagrange multipliers, which leads to only linear convergence. In this paper, we study an update technique based on second-order information and prove that superlinear convergence can be obtained. Theoretical properties of the update formula are given, and some implementation issues regarding the new update are also discussed.
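For reference, a minimal sketch of the classical setting the abstract refers to (generic notation; this is the standard first-order scheme, not the paper's second-order formula): for the equality-constrained problem min f(x) subject to c(x) = 0, the augmented Lagrangian and the usual multiplier update are

\mathcal{L}_{\rho}(x,\lambda) = f(x) + \lambda^{T} c(x) + \frac{\rho}{2}\,\|c(x)\|^{2}, \qquad \lambda_{k+1} = \lambda_{k} + \rho\, c(x_{k}),

where x_k approximately minimizes \mathcal{L}_{\rho}(\cdot,\lambda_k). This first-order update is the one associated with only linear convergence of the multipliers; the paper studies an update that instead uses second-order (curvature) information to achieve superlinear convergence.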
Funding: Supported by NSFC Grant 10831006 and CAS Grant kjcx-yw-s7.
Abstract: We propose a new trust region algorithm for nonlinear constrained optimization problems. In each iteration of our algorithm, the trial step is computed by minimizing a quadratic approximation to the augmented Lagrange function in the trust region. The augmented Lagrange function is also used as a merit function to decide whether the trial step should be accepted. Our method extends the traditional trust region approach by incorporating a filter technique into the rules for accepting trial steps, so that a trial step may still be accepted even when it is rejected by the traditional rule based on merit function reduction. An estimate of the Lagrange multiplier is updated at each iteration, and the penalty parameter is updated to force sufficient reduction in the norm of the constraint violations. An active set technique is used to handle the inequality constraints. Numerical results for a set of constrained problems from the CUTEr collection are also reported.
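A schematic form of the trust-region subproblem described above (illustrative notation only; g_k, B_k and \Delta_k are generic symbols, not necessarily those used in the paper): at iterate x_k, with current multiplier estimate and penalty parameter fixed, the trial step d_k approximately solves

\min_{d} \; q_k(d) = g_k^{T} d + \tfrac{1}{2}\, d^{T} B_k d \quad \text{subject to} \quad \|d\| \le \Delta_k,

where g_k is the gradient of the augmented Lagrange function at x_k, B_k approximates its Hessian, and \Delta_k is the trust-region radius. The resulting step is then judged both by the merit-function (augmented Lagrangian) reduction test and by the filter, so that a step failing the first test can still be accepted by the second.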
Funding: Key Project supported by the National Natural Science Foundation of China, Grant No. 10231060.
Abstract: A continuation algorithm for the solution of max-cut problems is proposed in this paper. Unlike the available semi-definite relaxation, a max-cut problem is converted into a continuous nonlinear programming problem by employing NCP functions, and the resulting nonlinear programming problem is then solved by using the augmented Lagrange penalty function method. The convergence property of the proposed algorithm is studied. Numerical experiments and comparisons with the Goemans and Williamson randomized algorithm made on some max-cut test problems show that the algorithm generates satisfactory solutions for all the test problems at much lower computational cost.
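To illustrate how an NCP function can turn a combinatorial constraint into a smooth equation (a sketch of the general idea, not necessarily the paper's exact reformulation): the binary requirement x_i \in \{0,1\} is equivalent to the complementarity condition x_i \ge 0, 1 - x_i \ge 0, x_i(1 - x_i) = 0, which a single NCP function, for example the Fischer-Burmeister function, encodes as one equality:

\varphi(a,b) = \sqrt{a^{2} + b^{2}} - a - b, \qquad \varphi(a,b) = 0 \;\Longleftrightarrow\; a \ge 0,\; b \ge 0,\; ab = 0,

so imposing \varphi(x_i, 1 - x_i) = 0 for every vertex variable yields a continuous, equality-constrained nonlinear program, to which an augmented Lagrange penalty function method can then be applied.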