In this paper we propose a new family of curve search methods for unconstrained optimization problems, which are based on searching for a new iterate along a curve through the current iterate at each iteration, whereas line search methods are based on finding a new iterate on a line starting from the current iterate at each iteration. The global convergence and linear convergence rate of these curve search methods are investigated under some mild conditions. Numerical results show that some curve search methods are stable and effective in solving some large-scale minimization problems.
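The abstract does not fix a particular search curve, so the following is only a minimal sketch of the general idea under stated assumptions: a parabolic curve x(t) = x + t·d + t²·c through the current iterate, with an Armijo-type acceptance test along the curve. The curvature term c used here is a hypothetical placeholder, not the paper's construction.

```python
import numpy as np

def curve_search_step(f, grad, x, c1=1e-4, shrink=0.5, max_iter=50):
    """One generic curve search iteration: backtrack along a parabolic
    curve x(t) = x + t*d + t**2*c instead of a straight line."""
    g = grad(x)
    d = -g                 # first-order (tangent) direction
    c = -0.5 * g           # hypothetical curvature term; method-specific in practice
    t, fx = 1.0, f(x)
    for _ in range(max_iter):
        trial = x + t * d + t**2 * c
        # Armijo-type sufficient decrease along the curve
        if f(trial) <= fx + c1 * t * g.dot(d):
            return trial
        t *= shrink
    return x

# toy usage on a convex quadratic
f = lambda x: 0.5 * x.dot(x)
grad = lambda x: x
x = np.array([3.0, -2.0])
for _ in range(20):
    x = curve_search_step(f, grad, x)
print(x)  # approaches the minimizer at the origin
```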
In this paper, the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the constituted algorithm with either Wolfe-type or Armijo-type line search converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
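The Wolfe-type and Armijo-type conditions referred to are standard. For reference, a sketch of a backtracking Armijo line search and a weak Wolfe-condition test; the constants c1 and c2 are conventional choices, not taken from the paper.

```python
import numpy as np

def armijo_backtracking(f, grad, x, d, c1=1e-4, shrink=0.5, alpha=1.0):
    """Backtracking line search: shrink alpha until the Armijo
    (sufficient decrease) condition holds along descent direction d."""
    fx, slope = f(x), grad(x).dot(d)
    assert slope < 0, "d must be a descent direction"
    while f(x + alpha * d) > fx + c1 * alpha * slope:
        alpha *= shrink
    return alpha

def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions at steplength alpha."""
    slope = grad(x).dot(d)
    armijo = f(x + alpha * d) <= f(x) + c1 * alpha * slope
    curvature = grad(x + alpha * d).dot(d) >= c2 * slope
    return armijo and curvature
```

The curvature test is what distinguishes a Wolfe-type search from a pure Armijo backtracking: it rules out steps that are too short to make progress.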
In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Further, we generalize the search direction to a more general form and also obtain the global convergence of the corresponding algorithm. The numerical results illustrate that the new algorithm is effective.
In this paper, we propose new variants of Newton’s method based on quadrature formulas and power means for solving nonlinear unconstrained optimization problems. It is proved that the order of convergence of the proposed family is three. Numerical comparisons are made to show the performance of the presented methods. Furthermore, numerical experiments demonstrate that the logarithmic mean Newton’s method outperforms the classical Newton’s method and other variants of Newton’s method. MSC: 65H05.
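For a one-dimensional objective, third-order variants of this type typically replace f''(x_k) in the Newton step with a mean of f'' evaluated at x_k and at an auxiliary Newton point. The following sketch assumes the logarithmic-mean version applied to f'; the paper's exact formulas may differ.

```python
import math

def log_mean(a, b):
    """Logarithmic mean; requires a, b > 0."""
    return a if a == b else (b - a) / (math.log(b) - math.log(a))

def log_mean_newton(df, d2f, x, tol=1e-12, max_iter=50):
    """Minimize a 1-D function with a logarithmic-mean Newton variant
    applied to f' (an assumed form of the paper's scheme)."""
    for _ in range(max_iter):
        z = x - df(x) / d2f(x)           # auxiliary classical Newton point
        m = log_mean(d2f(x), d2f(z))     # mean of the two second derivatives
        x_new = x - df(x) / m
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# minimize f(x) = x**4 + x**2 - 3x, i.e. solve f'(x) = 4x**3 + 2x - 3 = 0
df  = lambda x: 4 * x**3 + 2 * x - 3
d2f = lambda x: 12 * x**2 + 2
print(log_mean_newton(df, d2f, 1.0))
```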
In this paper, an efficient conjugate gradient method is given to solve general unconstrained optimization problems, which can guarantee the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation. (Supported by the Fund of Chongqing Education Committee (KJ091104).)
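For reference, the PRP+ baseline the authors compare against computes the conjugate gradient parameter as β_k = max(0, β_k^PRP), which restarts with steepest descent whenever the PRP value goes negative:

```python
import numpy as np

def prp_plus_direction(g_new, g_old, d_old):
    """PRP+ search direction: d = -g_new + beta * d_old with
    beta = max(0, g_new.(g_new - g_old) / ||g_old||**2)."""
    beta = max(0.0, g_new.dot(g_new - g_old) / g_old.dot(g_old))
    return -g_new + beta * d_old
```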
We present an improved quasi-Newton method. Assuming that the objective function is twice continuously differentiable and uniformly convex, we discuss the global and superlinear convergence of the improved quasi-Newton method.
This paper puts forward a two-parameter family of nonlinear conjugate gradient (CG) methods without line search for solving unconstrained optimization problems. The main feature of this family is that it does not rely on any line search and only requires a simple step size formula to always generate a sufficient descent direction. Under certain assumptions, the proposed method is proved to possess global convergence. Finally, our method is compared with other potential methods. A large number of numerical experiments show that our method is more competitive and effective. (Supported by the 2023 Inner Mongolia University of Finance and Economics General Scientific Research Fund for universities directly under Inner Mongolia, China (NCYWT23026), and the 2024 High-quality Research Achievements Cultivation Fund Project of Inner Mongolia University of Finance and Economics, China (GZCG2479).)
In this paper, a new conjugate gradient formula and its algorithm for solving unconstrained optimization problems are proposed. The given formula satisfies the descent condition. Under the Grippo-Lucidi line search, the global convergence property of the given method is discussed. The numerical results show that the new method is efficient for the given test problems.
In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a steplength technique which ensures that the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve it and give further analysis.
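For reference, the Zoutendijk condition invoked here is the standard summability condition
$$\sum_{k \ge 0} \cos^2\theta_k \, \|g_k\|^2 < \infty,$$
where $\theta_k$ is the angle between the search direction $d_k$ and the steepest descent direction $-g_k$; combined with a bound keeping $\cos\theta_k$ away from zero, it forces $\|g_k\| \to 0$ and hence global convergence.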
In this paper, we propose and analyze a non-monotone trust region method with a non-monotone line search strategy for unconstrained optimization problems. Unlike the traditional non-monotone trust region method, our algorithm utilizes a non-monotone Wolfe line search to get the next point if a trial step is not adopted. Thus, it can reduce the number of sub-problems to be solved. Theoretical analysis shows that the newly proposed method has global convergence under some mild conditions.
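A non-monotone line search of this kind accepts a step when the new function value improves on the maximum of the last few values rather than on the current one. A minimal Armijo-style sketch in the spirit of Grippo-Lampariello-Lucidi follows; the paper itself uses a non-monotone Wolfe variant.

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, grad, x, d, history, c1=1e-4, shrink=0.5):
    """Non-monotone test: compare against the max of the recent
    f-values stored in `history` instead of f(x) alone."""
    f_ref = max(history)            # non-monotone reference value
    slope = grad(x).dot(d)
    alpha = 1.0
    while f(x + alpha * d) > f_ref + c1 * alpha * slope:
        alpha *= shrink
    return alpha

# usage: keep a sliding window of the last 5 function values
f = lambda x: 0.5 * x.dot(x)
grad = lambda x: x
x = np.array([4.0, 1.0])
history = deque([f(x)], maxlen=5)
for _ in range(10):
    d = -grad(x)
    alpha = nonmonotone_armijo(f, grad, x, d, history)
    x = x + alpha * d
    history.append(f(x))
print(f(x))
```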
Using a predictor-corrector tactic, this paper derives new iteration schemes for unconstrained optimization. A scheme yields a point (predictor) by some line search from the current point; then, with the two points, it constructs a quadratic interpolation curve to approximate some ODE trajectory; finally, it determines a new point (corrector) by searching along the quadratic curve. In particular, this paper gives a global convergence analysis for schemes associated with the quasi-Newton updates. In our computational experiments, the new schemes using DFP and BFGS updates outperformed their conventional counterparts on a set of standard test problems.
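A sketch of one way to realize this description, under stated assumptions: the quadratic curve c(t) has c(0) equal to the current point, tangent c'(0) = -g matching the steepest-descent ODE x' = -g(x), and c(1) equal to the predictor. The line search, tangent choice, and corrector rule here are simplifications, not the paper's exact scheme.

```python
import numpy as np

def predictor_corrector_step(f, grad, x, c1=1e-4, shrink=0.5):
    """One simplified predictor-corrector iteration:
    1. predictor p by Armijo backtracking along -g;
    2. quadratic curve c(t) = x + t*v + t**2*w with c(0) = x,
       c'(0) = -grad(x) (ODE tangent), and c(1) = p;
    3. corrector by backtracking in t along the curve."""
    g, fx = grad(x), f(x)
    # --- predictor: Armijo backtracking along -g ---
    alpha = 1.0
    while f(x - alpha * g) > fx - c1 * alpha * g.dot(g):
        alpha *= shrink
    p = x - alpha * g
    # --- quadratic interpolation curve ---
    v = -g                      # tangent of the steepest-descent trajectory
    w = p - x - v               # forces c(1) = p
    curve = lambda t: x + t * v + t**2 * w
    # --- corrector: backtrack in t along the curve ---
    t = 1.0                     # slope of f(curve(t)) at t=0 is -g.g
    while f(curve(t)) > fx - c1 * t * g.dot(g) and t > 1e-8:
        t *= shrink
    return curve(t)
```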
In this paper, a new modified BFGS method without line searches is proposed. Unlike the traditional BFGS method, this modified BFGS method is based on the so-called fixed steplength strategy introduced by Sun and Zhang. Under some suitable assumptions, the global convergence and the superlinear convergence of the new algorithm are established, respectively. Some preliminary numerical experiments, which show that the new algorithm is feasible, are also reported. (Supported by the National Natural Science Foundation of China under Grant No. 10871226, the Natural Science Foundation of Shandong Province under Grant No. ZR2009AL006, the Development Project Foundation for Science Research of Shandong Education Department under Grant No. J09LA05, and the Science Project Foundation of Liaocheng University under Grant No. X0810027.)
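The underlying BFGS machinery is standard; for reference, the usual inverse-Hessian update that such modified methods build on (the fixed steplength rule of Sun and Zhang is not reproduced here):

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """Standard BFGS update of the inverse Hessian approximation H from
    the step s = x_new - x_old and gradient change y = g_new - g_old."""
    rho = 1.0 / y.dot(s)        # requires the curvature condition y.s > 0
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```

With a Wolfe line search the curvature condition y·s > 0 holds automatically; a method without line searches must guarantee it by other means, which is part of what such modifications address.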
Gradient methods are popular for solving large-scale problems. In this work, the cyclic gradient methods for quadratic function minimization are extended to general smooth unconstrained optimization problems. Combined with a nonmonotone line search, we prove their global convergence. Furthermore, the proposed algorithms have a sublinear convergence rate for general convex functions and an R-linear convergence rate for strongly convex problems. Numerical experiments show that the proposed methods are effective compared with the state of the art. (Supported by the National Natural Science Foundation of China (Nos. 12171051 and 11871115).)
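Cyclic gradient methods reuse one steplength for several consecutive iterations. A sketch assuming the Barzilai-Borwein (BB1) stepsize is refreshed once per cycle; the paper's exact rule and its nonmonotone safeguard may differ.

```python
import numpy as np

def cyclic_bb_gradient(f, grad, x, cycle=4, max_iter=200, tol=1e-8):
    """Cyclic gradient method: recompute the BB1 stepsize
    alpha = s.s / s.y once per cycle and reuse it in between."""
    alpha = 1e-3                        # conservative initial stepsize
    g = grad(x)
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        if (k + 1) % cycle == 0:        # refresh stepsize at the cycle boundary
            s, y = x_new - x, g_new - g
            if s.dot(y) > 0:            # keep alpha positive (safeguard)
                alpha = s.dot(s) / s.dot(y)
        x, g = x_new, g_new
    return x

# toy usage on an ill-conditioned quadratic
diag = np.array([1.0, 10.0])
x = cyclic_bb_gradient(lambda z: 0.5 * z.dot(diag * z),
                       lambda z: diag * z,
                       np.array([5.0, 1.0]))
print(x)
```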
The convergence of quasi-Newton methods for unconstrained optimization has attracted much attention. Powell proved a global convergence result for the BFGS algorithm using an inexact line search which satisfies the Wolfe conditions. Byrd, Nocedal and Yuan extended this result to the convex Broyden class of quasi-Newton methods except the DFP method. However, the global convergence of the DFP method, the first quasi-Newton method, using the same line search strategy, is still an open question (see ref. [2]).
This paper explores the convergence of a class of optimally conditioned self-scaling variable metric (OCSSVM) methods for unconstrained optimization. We show that this class of methods with Wolfe line search is globally convergent for general convex functions.
Trust region (TR) algorithms are a class of recently developed algorithms for nonlinear optimization. A new family of TR algorithms for unconstrained optimization, which is an extension of the usual TR method, is presented in this paper. When the objective function is bounded below and continuously differentiable, and the norm of the Hessian approximations increases at most linearly with the iteration number, we prove the global convergence of the algorithms. Limited numerical results are reported, which indicate that our new TR algorithm is competitive. (Research partly supported by Chinese NSF grants 19731001 and 19801033; the second author gratefully acknowledges the support of the National 973 Information Technology and High-Performance Software Program of China with grant No. G1998030401 and K. C. Wong E…)
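The "usual TR method" that this family extends updates the radius from the ratio of actual to predicted reduction. A minimal sketch using the Cauchy point as an approximate subproblem solution; the family's generalization is in the paper, and the shrink/expand thresholds below are conventional choices.

```python
import numpy as np

def trust_region_step(f, grad, hess, x, delta, eta=0.1):
    """One iteration of a basic trust-region method with a Cauchy-point
    step; assumes grad(x) is nonzero."""
    g, B = grad(x), hess(x)
    gBg = g.dot(B @ g)
    # Cauchy point: minimize the quadratic model along -g within radius delta
    tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g)**3 / (delta * gBg))
    p = -tau * delta / np.linalg.norm(g) * g
    pred = -(g.dot(p) + 0.5 * p.dot(B @ p))   # predicted model reduction
    ared = f(x) - f(x + p)                    # actual reduction
    rho = ared / pred
    if rho < 0.25:
        delta *= 0.25                         # poor agreement: shrink the region
    elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
        delta *= 2.0                          # good step at the boundary: expand
    if rho > eta:
        x = x + p                             # accept the trial step
    return x, delta
```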
In this paper, we propose an improved trust region method for solving unconstrained optimization problems. Different from traditional trust region methods, our algorithm does not solve the subproblem within the trust region centered at the current iteration point, but within an improved one centered at some point located in the direction of the negative gradient, while the current iteration point is on the boundary set. We prove the global convergence properties of the new improved trust region algorithm and give computational results which demonstrate the effectiveness of our algorithm. (Supported by the National Natural Science Foundation of China (Grant Nos. 60903088 and 11101115), the Natural Science Foundation of Hebei Province (Grant No. A2010000188), and the Doctoral Foundation of Hebei University (Grant No. 2008136).)
In this paper, we present a new adaptive trust-region method for solving nonlinear unconstrained optimization problems. More precisely, the trust-region radius is based on a nonmonotone technique and uses an adaptively chosen approximation of the Hessian. The new method produces a suitable trust-region radius, preserves global convergence to first-order critical points under classical assumptions, and improves practical performance compared with other existing variants. Moreover, the quadratic convergence rate is established under suitable conditions. Computational results on the CUTEst test collection of unconstrained problems are presented to show the effectiveness of the proposed algorithm compared with some existing methods.
The non-quasi-Newton methods for unconstrained optimization are investigated. A non-monotone line search procedure is introduced and combined with the non-quasi-Newton family. Under the uniform convexity assumption on the objective function, the global convergence of the non-quasi-Newton family is proved. Numerical experiments show that the non-monotone line search is more effective. (Sponsored by the Natural Science Foundation of Beijing Municipal Commission of Education (Grant No. KM200510028019).)