Abstract: In this paper, a new trust region algorithm for unconstrained LC1 optimization problems is given. Compared with existing trust region methods, this algorithm has a distinctive feature: at each iteration it obtains the step not by solving a quadratic subproblem with a trust region bound, but by solving a system of linear equations. This reduces the computational complexity and improves efficiency. It is proven that the algorithm is globally convergent and locally superlinearly convergent under suitable conditions.
Funding: Supported by the National Natural Science Foundation of P. R. China (19971002) and a project of the Beijing Educational Committee.
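The first record above describes an algorithm whose trial step at each iteration comes from a system of linear equations rather than from a quadratic subproblem with a trust region bound. The following Python sketch shows one way such an iteration could be organized; the regularized system (B_k + mu*I) d = -g_k, the acceptance ratio, and the updates of mu are illustrative assumptions, not the paper's actual formulas.

import numpy as np

def step_by_linear_solve(g, B, mu):
    # Trial step from the regularized linear system (B + mu*I) d = -g,
    # used here in place of a constrained quadratic trust-region
    # subproblem (illustrative assumption, not the paper's exact system).
    return np.linalg.solve(B + mu * np.eye(g.size), -g)

def minimize_lc1(f, grad_f, gen_hess_f, x0, mu=1.0, tol=1e-8, max_iter=200):
    # Sketch of a trust-region-like iteration for an LC1 objective:
    #   f          objective value
    #   grad_f     gradient (locally Lipschitzian, so only a generalized
    #              Hessian is available, not a classical Hessian)
    #   gen_hess_f returns one element B_k of the generalized Hessian at x
    #   mu         regularization parameter playing the role of the
    #              trust-region control (hypothetical update rule below)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) <= tol:
            break
        B = gen_hess_f(x)
        d = step_by_linear_solve(g, B, mu)
        # Standard trust-region acceptance test: ratio of actual to
        # predicted reduction of the quadratic model.
        pred = -(g @ d + 0.5 * d @ B @ d)
        rho = (f(x) - f(x + d)) / pred if pred > 0 else -np.inf
        if rho > 0.25:                  # successful step: accept, relax mu
            x = x + d
            mu = max(0.5 * mu, 1e-12)
        else:                           # unsuccessful: reject, tighten mu
            mu = 4.0 * mu
    return x

Each iteration then costs a single linear solve (or factorization) instead of an iterative solve of a constrained quadratic subproblem, which is the reduction in computational effort the abstract refers to.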
Abstract: A new trust region algorithm for solving convex LC1 optimization problems is presented. It is proved that the algorithm is globally convergent and that its rate of convergence is superlinear under reasonable assumptions.
Funding: Supported by the National Science Foundation Grant (10871130) of China, the Ph.D. Foundation Grant (0527003), the Shanghai Leading Academic Discipline Project (T0401), and the Science Foundation Grant (05DZ11) of the Shanghai Education Committee.
Abstract: We extend the classical affine scaling interior trust region algorithm for linearly constrained smooth minimization to the nonsmooth case in which the gradient of the objective function is only locally Lipschitzian. We propose and analyze a new affine scaling trust-region method, combined with a nonmonotonic interior backtracking line search technique, for solving linearly constrained LC1 optimization problems, where only local Lipschitz continuity of the gradient of the objective function is required. The trust region subproblem in the proposed algorithm minimizes an augmented affine scaling quadratic model, which uses both first- and second-order information about the objective function, subject only to an affine scaling ellipsoidal constraint in a null subspace of the augmented equality constraints. Global convergence and a fast local convergence rate of the proposed algorithm are established under reasonable conditions that do not require twice smoothness of the objective function. Applications of the algorithm to some nonsmooth optimization problems are discussed.
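For the third record, the abstract gives no explicit formulas, but the subproblem it describes can be sketched roughly as follows, in assumed notation:

\begin{align*}
\min_{d \in \mathcal{N}(A)} \quad & \psi_k(d) = \nabla f(x_k)^{\top} d + \tfrac{1}{2}\, d^{\top} B_k d \\
\text{s.t.} \quad & \| D_k^{-1} d \| \le \Delta_k,
\end{align*}

where A collects the augmented linear equality constraints and \mathcal{N}(A) is their null space, D_k is the affine scaling matrix determined by the current interior iterate, B_k is an element of the generalized Hessian of f at x_k (only the gradient of f is assumed locally Lipschitzian), and \Delta_k is the trust region radius. The step produced by this subproblem is then combined with the nonmonotonic interior backtracking line search mentioned in the abstract so that the iterates remain strictly interior.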