Funding: Supported by the National Natural Science Foundation of P.R. China (No. 19971002) and the Subject of the Beijing Educational Committee.
Abstract: A new trust region algorithm for solving convex LC^1 optimization problems is presented. It is proved that the algorithm is globally convergent and that the rate of convergence is superlinear under some reasonable assumptions.
Abstract: In this paper, a new mixed quasi-Newton method for inequality-constrained optimization problems is proposed. The feature of the method is that only systems of linear equations, rather than quadratic programs, are solved at each iteration, which decreases the amount of computation and is also efficient for large-scale problems. Under some mild assumptions, and without the strict complementarity condition, the method is globally and superlinearly convergent.
Funding: Partly supported by the National Natural Science Foundation of China (No. 19731001), the National 973 Information Technology and High-Performance Software Program of China (No. G1998030401), and the K.C. Wong Education Foundation, Hong Kong.
Abstract: In this paper, the Broyden class of quasi-Newton methods for unconstrained optimization is investigated. A non-monotone line search procedure is introduced and combined with the Broyden class. Under a convexity assumption on the objective function, the global convergence of the Broyden class is proved.
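For context, non-monotone line searches of the kind mentioned in the abstract above usually follow the Grippo-Lampariello-Lucidi pattern: a step is accepted when the trial value lies below the maximum of the last few function values, not merely below the latest one. A minimal sketch (the constants, window handling, and test function are illustrative, not taken from the paper):

```python
import numpy as np

def nonmonotone_armijo(f, grad_f, x, d, history, c=1e-4, beta=0.5, max_iter=50):
    """Backtracking line search with a non-monotone (GLL-style) acceptance test:
    accept alpha once f(x + alpha*d) <= max(recent f-values) + c*alpha*g'd."""
    g_dot_d = grad_f(x) @ d          # directional derivative; must be negative
    f_ref = max(history)             # reference: max over a window of past values
    alpha = 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) <= f_ref + c * alpha * g_dot_d:
            return alpha
        alpha *= beta                # backtrack
    return alpha

# Usage on f(x) = ||x||^2 with the steepest-descent direction.
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
x = np.array([1.0, -2.0])
d = -grad(x)
alpha = nonmonotone_armijo(f, grad, x, d, history=[f(x)])
```

With a window of size one this reduces to the ordinary Armijo rule; the non-monotone variant simply keeps several recent function values in `history`.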
Abstract: In this paper, we establish a quasi-Newton method for solving the KKT system arising from variational inequalities. The subproblems of the proposed method are lower-dimensional mixed linear complementarity problems. A suitable line search is introduced. We show that, under suitable conditions, the proposed method converges globally and superlinearly.
Abstract: This paper presents a variant of Goldfarb's method for linearly constrained optimization problems. In the variant algorithm, we introduce a concept called conjugate projection, which differs from orthogonal projection. The variant algorithm has global convergence and a superlinear convergence rate.
Funding: The authors would like to thank Prof. Y.-X. Yuan for providing the source programs for ref. [16]. Zhang Xiangsun was supported by the National Natural Science Foundation of China (Grant No. 39830070) and Hong Kong Baptist University. Zhang Juliang was su
Abstract: In this paper, a new trust region subproblem is proposed. The trust radius in the new subproblem adjusts itself adaptively. As a result, an adaptive trust region method is constructed based on the new trust region subproblem. The local and global convergence results of the adaptive trust region method are proved. Numerical results indicate that the new method is very efficient.
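As a rough illustration of the mechanism behind trust region methods like the one above (not the paper's specific subproblem), each iteration minimizes the quadratic model m(p) = f + g'p + p'Bp/2 over ||p|| <= delta and then adjusts delta from the ratio of actual to predicted reduction. A sketch using the Cauchy point as an approximate subproblem solver; all constants are conventional defaults, not values from the paper:

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimizer of the quadratic model along -g, clipped to the trust region."""
    g_norm = np.linalg.norm(g)
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(g_norm ** 3 / (delta * gBg), 1.0)
    return -tau * (delta / g_norm) * g

def trust_region(f, grad, hess, x, delta=1.0, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess(x)
        p = cauchy_point(g, B, delta)
        pred = -(g @ p + 0.5 * p @ B @ p)    # predicted reduction (positive)
        rho = (f(x) - f(x + p)) / pred       # model-agreement ratio
        if rho > 0.75:
            delta = min(2.0 * delta, 10.0)   # expand when the model is good
        elif rho < 0.25:
            delta *= 0.25                    # shrink when the model is poor
        if rho > 0.1:                        # accept the step
            x = x + p
    return x

# Usage on the convex quadratic f(x) = x1^2 + 2*x2^2, minimized at the origin.
f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2
grad = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])
x_star = trust_region(f, grad, hess, np.array([2.0, 1.0]))
```

An "adaptive" method in the sense of the abstract builds the radius into the subproblem itself (e.g., tying delta to current gradient information) instead of updating it by the fixed expand/shrink rule shown here.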
Abstract: In this paper, the non-quasi-Newton family with inexact line search, applied to unconstrained optimization problems, is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm, with either a Wolfe-type or an Armijo-type line search, converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
Funding: This work is supported by the National Science Foundation of China (No. 10231060).
Abstract: Convergence properties of a class of multi-directional parallel quasi-Newton algorithms for the solution of unconstrained minimization problems are studied in this paper. At each iteration, these algorithms generate several different quasi-Newton directions and then apply line searches to determine step lengths along each direction simultaneously. The next iterate is chosen among these trial points as the one with the lowest function value. Different quasi-Newton updating formulas from the Broyden family are used to generate a main sequence of Hessian matrix approximations. Based on the BFGS and the modified BFGS updating formulas, global and superlinear convergence results are proved. It is observed that all the quasi-Newton directions asymptotically approach the Newton direction in both direction and length when the iterate sequence converges to a local minimum of the objective function, and hence the superlinear convergence result follows.
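The BFGS formula that several of the abstracts above build on updates the Hessian approximation B from the step s = x_{k+1} - x_k and gradient difference y = g_{k+1} - g_k, so that the new B satisfies the secant equation B+ s = y. A minimal sketch with the standard curvature safeguard:

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of a Hessian approximation:
    B+ = B - (B s s' B)/(s' B s) + (y y')/(y' s).
    Skips the update when the curvature condition s'y > 0 fails,
    which preserves positive definiteness of B."""
    sy = s @ y
    if sy <= 1e-12:
        return B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

# Usage: for a quadratic with Hessian A, y = A s, and the updated matrix
# satisfies the secant equation B_new @ s == y.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
s = np.array([1.0, 0.0])
y = A @ s
B_new = bfgs_update(np.eye(2), s, y)
```

The other Broyden-family members differ from this only by a scalar-parameterized rank-one correction term; the secant equation holds for all of them.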
Abstract: In this paper, a Gauss-Newton-based Broyden class method for estimating the parameters of regression problems is presented. The global convergence of the method is established under suitable conditions. Numerical results show that the proposed method is promising.
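The Gauss-Newton iteration underlying the abstract above solves the regression problem min (1/2)||r(theta)||^2 by repeatedly solving J'J d = -J'r, with J the Jacobian of the residuals. A sketch with a simple step-halving safeguard; the exponential model and data are an illustrative example, not the paper's:

```python
import numpy as np

def gauss_newton(residual, jacobian, theta, max_iter=50, tol=1e-10):
    """Damped Gauss-Newton for min 0.5*||r(theta)||^2:
    solve (J'J) d = -J'r, then halve the step until ||r|| decreases."""
    for _ in range(max_iter):
        r, J = residual(theta), jacobian(theta)
        d = np.linalg.solve(J.T @ J, -J.T @ r)
        alpha = 1.0
        while (np.linalg.norm(residual(theta + alpha * d)) > np.linalg.norm(r)
               and alpha > 1e-8):
            alpha *= 0.5                     # simple step-halving safeguard
        theta = theta + alpha * d
        if np.linalg.norm(alpha * d) < tol:
            break
    return theta

# Fit y = a * exp(b * t) to noise-free synthetic data with true (a, b) = (2, 0.5).
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * t)
residual = lambda th: th[0] * np.exp(th[1] * t) - y
jacobian = lambda th: np.column_stack([np.exp(th[1] * t),
                                       th[0] * t * np.exp(th[1] * t)])
theta = gauss_newton(residual, jacobian, np.array([1.0, 0.0]))
```

A Broyden-class variant replaces the J'J model Hessian with a quasi-Newton approximation that also accounts for the second-order residual term.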
Funding: Supported by the National Natural Science Foundation of China under Grant No. 10871226, the Natural Science Foundation of Shandong Province under Grant No. ZR2009AL006, the Development Project Foundation for Science Research of the Shandong Education Department under Grant No. J09LA05, and the Science Project Foundation of Liaocheng University under Grant No. X0810027.
Abstract: In this paper, a new modified BFGS method without line searches is proposed. Unlike the traditional BFGS method, the modified method is based on the so-called fixed-steplength strategy introduced by Sun and Zhang. Under some suitable assumptions, the global convergence and the superlinear convergence of the new algorithm are established. Some preliminary numerical experiments showing that the new algorithm is feasible are also reported.
Funding: Supported by the National Natural Science Foundation of China (No. 10261001), the Guangxi Science Foundation (Nos. 0236001, 0640001), and the Guangxi University Key Program for Science and Technology Research (No. 2005ZD02).
Abstract: In this paper, nonlinear minimax problems are discussed. By means of Sequential Quadratic Programming (SQP), a new descent algorithm for solving such problems is presented. At each iteration of the proposed algorithm, a main search direction is obtained by solving a Quadratic Programming (QP) subproblem, which always has a solution. In order to avoid the Maratos effect, a correction direction is obtained by updating the main direction with a simple explicit formula. Under mild conditions, without strict complementarity, the global and superlinear convergence of the algorithm is obtained. Finally, some numerical experiments are reported.
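The standard device behind SQP-based minimax algorithms like the one above is to rewrite min_x max_i f_i(x) as the smooth program min_{x,z} z subject to f_i(x) <= z. A sketch solving the epigraph form with SciPy's SLSQP (an off-the-shelf SQP code standing in for the paper's algorithm; the two quadratics and the starting point are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Minimax objective: minimize max(f1, f2) with
#   f1(x) = (x0 - 1)^2 + x1^2   and   f2(x) = (x0 + 1)^2 + x1^2.
# By symmetry the minimizer is x = (0, 0) with value 1.
fs = [lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,
      lambda x: (x[0] + 1.0) ** 2 + x[1] ** 2]

def objective(v):
    # v = (x0, x1, z); minimize the bound z on all the f_i.
    return v[2]

# Epigraph constraints z - f_i(x) >= 0; the default argument f=f pins
# each f_i inside its lambda (otherwise all closures would share the last f).
constraints = [{"type": "ineq", "fun": (lambda v, f=f: v[2] - f(v[:2]))}
               for f in fs]

res = minimize(objective, x0=np.array([0.5, 0.5, 5.0]),
               method="SLSQP", constraints=constraints)
```

The paper's contribution lies in how the QP subproblem and the Maratos-effect correction are built; the epigraph reformulation itself is the common starting point.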
Abstract: This paper presents a new algorithm for optimization problems with nonlinear inequality constraints. At each iteration, the algorithm generates the search direction by solving only one quadratic programming (QP) subproblem and then making a simple correction to the QP solution; moreover, the new algorithm requires no line search. Another advantage is that it may take any point in E^n as a starting point and avoids complicated penalty functions and parameters. Furthermore, the iteration points form a feasible descent sequence once some iterate enters the feasible region, so we call it a sub-feasible method. Under mild assumptions, the new algorithm is shown to possess global and two-step superlinear convergence.
Funding: This research is supported by the Ministry of Education of P.R. China and the Asia-Pacific Operations Research Center (APORC).
Abstract: Presents a study which proposed a type of globally convergent inexact generalized Newton method for solving unconstrained optimization problems. Includes theorems on the inexact generalized Newton algorithm with decreasing gradient norms, a discussion of the given assumptions, and applications of the algorithms with numerical tests.
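An inexact Newton method of the kind summarized above solves the Newton system H d = -g only approximately, with an inner tolerance (forcing term) that shrinks with the gradient norm, which is what yields decreasing gradient norms and fast local convergence. A sketch using conjugate gradients as the inner solver; the forcing sequence eta_k and the quadratic test problem are illustrative choices, not the paper's:

```python
import numpy as np

def cg(A, b, tol, max_iter=200):
    """Plain conjugate gradients for A x = b, stopped once ||r|| <= tol."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        if np.sqrt(rr) <= tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

def inexact_newton(grad, hess, x, tol=1e-8, max_iter=50):
    """Newton iteration whose inner system is solved only to the
    gradient-dependent tolerance eta_k * ||g_k||, here with the
    forcing term eta_k = min(0.5, sqrt(||g_k||))."""
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        eta = min(0.5, np.sqrt(gnorm))
        d = cg(hess(x), -g, tol=eta * gnorm)
        x = x + d
    return x

# Usage on the quadratic 0.5 x'Ax - b'x, whose gradient is Ax - b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_opt = inexact_newton(lambda x: A @ x - b, lambda x: A, np.zeros(2))
```

Because each outer step leaves a residual of at most eta_k * ||g_k||, the gradient norm contracts by at least eta_k per iteration, and the shrinking forcing term gives superlinear local convergence.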