Abstract: In this paper, we provide a counterexample for a successful ODE-based method for unconstrained optimization, the IMP-BOT method [6]. We also show that methods based on BDF and the general trapezoidal method for unconstrained optimization are inefficient, because these methods, although A-stable, are not L-stable.
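The stability distinction can be illustrated on the scalar gradient flow x' = -λx, the gradient flow of f(x) = λx²/2 with λ > 0: a one-step method damps this mode by its amplification factor R(z), z = hλ. The sketch below is an illustration of A- versus L-stability only, not the IMP-BOT scheme; it compares the trapezoidal rule with backward Euler (BDF1).

```python
# Amplification factors R(z), z = h*lambda, for one step applied to
# x' = -lambda*x. |R| < 1 means the mode decays; L-stability further
# requires R -> 0 as z -> infinity, so stiff modes die in one step.

def R_trapezoidal(z):
    # A-stable: |R| < 1 for all z > 0, but R -> -1 as z -> infinity,
    # so very stiff modes barely decay and oscillate in sign.
    return (1 - z / 2) / (1 + z / 2)

def R_backward_euler(z):
    # Backward Euler (BDF1): both A-stable and L-stable, R -> 0.
    return 1 / (1 + z)

for z in (0.1, 10.0, 1e6):
    print(f"z={z:g}: trapezoidal R={R_trapezoidal(z):+.4f}, "
          f"backward Euler R={R_backward_euler(z):+.4e}")
```

For the stiff mode z = 1e6, the trapezoidal factor sits near -1 while backward Euler's is near 0, which is the A-stable/L-stable gap the abstract refers to.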
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 60903088 and 11101115), the Natural Science Foundation of Hebei Province (Grant No. A2010000188), and the Doctoral Foundation of Hebei University (Grant No. 2008136).
Abstract: In this paper, we propose an improved trust region method for solving unconstrained optimization problems. Unlike traditional trust region methods, when the current iteration point lies on the boundary set, our algorithm does not solve the subproblem within the trust region centered at the current iteration point, but within an improved region centered at a point located along the negative gradient direction. We prove the global convergence of the new improved trust region algorithm and report computational results that demonstrate its effectiveness.
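For context, a minimal sketch of the classical trust-region subproblem the abstract builds on: minimize the quadratic model m(p) = g·p + ½p·Bp subject to ||p|| ≤ Δ, here solved approximately via the Cauchy point. The paper's recentered region is not reproduced; g, B, and Δ below are illustrative.

```python
import numpy as np

def cauchy_point(g, B, Delta):
    """Minimizer of the model m(p) = g@p + 0.5*p@B@p along the
    steepest-descent direction, restricted to ||p|| <= Delta."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    # Step length along -g: unconstrained minimizer if it fits,
    # otherwise the boundary of the region.
    tau = 1.0 if gBg <= 0 else min(1.0, gnorm**3 / (Delta * gBg))
    return -(tau * Delta / gnorm) * g

g = np.array([1.0, -2.0])   # illustrative gradient
B = np.eye(2)               # illustrative Hessian approximation
p = cauchy_point(g, B, Delta=1.0)
```

The returned step always stays inside the region and decreases the model, which is the minimum a trust-region subproblem solver must guarantee for global convergence.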
Funding: This research is supported by the National Natural Science Foundation of China (Grant No. 10171055).
Abstract: In this paper, a new Wolfe-type line search and a new Armijo-type line search are proposed, and some global convergence properties of a three-term conjugate gradient method with the two line searches are proved.
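As background, a sketch of the classical Armijo backtracking rule that such line searches refine (the paper's modified Armijo- and Wolfe-type conditions differ in detail; the test function below is illustrative):

```python
import numpy as np

def armijo_backtracking(f, grad_f, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink alpha until the Armijo sufficient-decrease condition
    f(x + alpha*d) <= f(x) + c*alpha*grad_f(x)@d holds, where d is
    assumed to be a descent direction (grad_f(x)@d < 0)."""
    fx = f(x)
    slope = grad_f(x) @ d          # directional derivative, negative
    alpha = alpha0
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

f = lambda x: x @ x                # simple convex test function
grad_f = lambda x: 2.0 * x
x = np.array([1.0, 0.0])
d = -grad_f(x)                     # steepest-descent direction
alpha = armijo_backtracking(f, grad_f, x, d)
```

Wolfe-type rules add a curvature condition on grad_f(x + alpha*d)@d on top of this decrease test, which is what makes them suitable for conjugate gradient methods.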
Funding: Supported by the National Science Foundation of China under Grant No. 70971076 and the Foundation of Shandong Provincial Education Department under Grant No. J10LA59.
Abstract: In this article, a new descent memory gradient method without restarts is proposed for solving large-scale unconstrained optimization problems. The method has the following attractive properties: 1) the search direction is a sufficient descent direction at every iteration, without any line search; 2) the search direction always satisfies the angle property, independently of the convexity of the objective function. Under mild conditions, the authors prove global convergence of the proposed method and also investigate its convergence rate. Numerical results show that the new descent memory method is efficient on the given test problems.
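The two claimed properties can be written as explicit inequalities and checked numerically; the constants c1, c2 below are illustrative, not the paper's, and the paper's direction formula is not reproduced:

```python
import numpy as np

def check_direction(g, d, c1=0.1, c2=0.1):
    """Check, for gradient g and direction d:
    sufficient descent:  g@d <= -c1*||g||^2
    angle property:      -g@d >= c2*||g||*||d||
    (illustrative constants c1, c2 in (0, 1))."""
    gd = g @ d
    sufficient_descent = gd <= -c1 * (g @ g)
    angle_property = -gd >= c2 * np.linalg.norm(g) * np.linalg.norm(d)
    return sufficient_descent and angle_property

g = np.array([3.0, 4.0])
ok = check_direction(g, -g)   # steepest descent satisfies both
```

A direction orthogonal to g fails both tests, which is exactly what the angle property rules out.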
Funding: The National Natural Science Foundation of China (No. 60472071) and the Science Foundation of Beijing Municipal Commission of Education (No. KM200710028001).
Abstract: In this paper, a new class of memoryless non-quasi-Newton methods for solving unconstrained optimization problems is proposed, and the global convergence of this method with inexact line search is proved. Furthermore, we propose a hybrid method that mixes the memoryless non-quasi-Newton method with the memoryless Perry-Shanno quasi-Newton method. The global convergence of this hybrid memoryless method is proved under mild assumptions. Initial results show that these new methods are efficient on the given test problems. In particular, the memoryless non-quasi-Newton method requires little storage and computation, so it can efficiently solve large-scale optimization problems.
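To see why memoryless methods need so little storage, here is a sketch of the textbook memoryless BFGS direction, where the quasi-Newton update is applied to the identity with only the latest step s and gradient change y kept; this is not the paper's non-quasi-Newton update nor the Perry-Shanno formula, and the vectors below are illustrative.

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    """Direction d = -H@g with H the BFGS update of the identity:
    H = (I - rho*s*y')(I - rho*y*s') + rho*s*s', rho = 1/(s@y).
    Only the vectors s and y are stored, so memory is O(n)."""
    sy = s @ y
    if sy <= 1e-12:                # curvature safeguard
        return -g                  # fall back to steepest descent
    rho = 1.0 / sy
    a, b = s @ g, y @ g
    # H@g expanded term by term from the rank-two update above:
    Hg = g - rho * a * y + (rho * a - rho * b + rho**2 * a * (y @ y)) * s
    return -Hg

g = np.array([1.0, 1.0])
s = np.array([1.0, 0.0])           # illustrative previous step
y = np.array([2.0, 1.0])           # illustrative gradient change
d = memoryless_bfgs_direction(g, s, y)
```

Because H is positive definite whenever s@y > 0, the resulting d is always a descent direction, and the safeguard branch keeps this true even when the curvature condition fails.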