Funding: This work is supported by the National Natural Science Foundation of China.
Abstract: In this paper we consider the global convergence of any conjugate gradient method of the form d_1 = -g_1, d_{k+1} = -g_{k+1} + β_k d_k (k ≥ 1), with β_k satisfying some conditions and with the strong Wolfe line search conditions. Under a convexity assumption on the objective function, we prove the descent property and the global convergence of this method.
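The generic scheme above is easy to state in code. Below is a minimal sketch of the iteration d_1 = -g_1, d_{k+1} = -g_{k+1} + β_k d_k with a strong Wolfe line search; since the paper's specific conditions on β_k are not reproduced here, the Fletcher-Reeves value stands in as an illustrative choice, and SciPy's `line_search` (which enforces the strong Wolfe conditions) supplies the step size.

```python
import numpy as np
from scipy.optimize import line_search

def cg_strong_wolfe(f, grad, x, iters=200, tol=1e-8, c1=1e-4, c2=0.1):
    """Generic CG skeleton: d_1 = -g_1, d_{k+1} = -g_{k+1} + beta_k * d_k.
    beta_k below is Fletcher-Reeves, standing in for the paper's conditions."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # SciPy's line_search returns a step satisfying the strong Wolfe
        # conditions: f(x+a*d) <= f(x) + c1*a*g'd and |g(x+a*d)'d| <= c2*|g'd|.
        alpha = line_search(f, grad, x, d, gfk=g, c1=c1, c2=c2)[0]
        if alpha is None:                 # line search failed: restart along -g
            d, alpha = -g, 1e-4
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves (illustrative)
        d = -g_new + beta * d
        g = g_new
    return x
```

For example, `cg_strong_wolfe(rosen, rosen_der, np.zeros(2))`, with `rosen` and `rosen_der` from `scipy.optimize`, should drive the iterates toward the minimizer (1, 1).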
Funding: Supported by the National Science Foundation of China (10171055).
Abstract: In [3], Liu et al. investigated the global convergence of conjugate gradient methods. In that paper they allowed β_k to be selected in a wider range, and the global convergence of the corresponding algorithm without the sufficient decrease condition was proved. This paper investigates the global convergence of a nonmonotone conjugate gradient method under the same conditions.
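The nonmonotone condition referred to here is typically of the Grippo-Lampariello-Lucidi type, where the Armijo test compares the trial value against the maximum of the last few function values rather than against f(x_k) alone. A hedged sketch of that acceptance test (the precise condition used in the cited papers may differ):

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_history, delta=1e-4, rho=0.5, max_back=50):
    """GLL-style nonmonotone Armijo test: accept the first alpha with
    f(x + alpha*d) <= max(recent f values) + delta * alpha * g'd."""
    f_ref = max(f_history)   # reference value over a sliding window of past f's
    slope = g @ d            # requires a descent direction, i.e. slope < 0
    alpha = 1.0
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + delta * alpha * slope:
            return alpha
        alpha *= rho         # backtrack
    return None
```

The caller maintains `f_history` as the last M function values; with M = 1 the test reduces to the ordinary (monotone) Armijo rule.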
Funding: Supported by the National Natural Science Foundation of China (10161002) and the Guangxi Natural Science Foundation (0135004).
Abstract: This paper discusses the global convergence of a class of nonmonotone conjugate gradient methods (NM methods) for nonconvex objective functions. This class of methods includes the nonmonotone counterparts of the modified Polak-Ribière method and the modified Hestenes-Stiefel method as special cases.
Abstract: Recently, Gilbert and Nocedal ([3]) investigated the global convergence of conjugate gradient methods related to the Polak-Ribière formula; they restricted β_k to non-negative values. [5] discussed the same problem as [3] and allowed β_k to be negative when the objective function is convex. This paper allows β_k to be selected in a wider range than [5]. In particular, the global convergence of the corresponding algorithm without the sufficient decrease condition is proved.
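For reference, the restriction of Gilbert and Nocedal is the familiar PRP+ truncation of the Polak-Ribière parameter (standard notation, not quoted from [3] or [5]):

```python
import numpy as np

def beta_prp(g_new, g_old):
    """Polak-Ribiere(-Polyak): beta_k = g_{k+1}'(g_{k+1} - g_k) / ||g_k||^2."""
    return g_new @ (g_new - g_old) / (g_old @ g_old)

def beta_prp_plus(g_new, g_old):
    """PRP+: truncation at zero, i.e. Gilbert and Nocedal's non-negativity."""
    return max(0.0, beta_prp(g_new, g_old))
```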
Funding: Supported by the National Natural Science Foundation of China (No. 72071202) and the Key Laboratory of Mathematics and Engineering Applications, Ministry of Education.
Abstract: As a generalization of the two-term conjugate gradient method (CGM), the spectral CGM is one of the effective methods for solving unconstrained optimization problems. In this paper, we enhance the JJSL conjugate parameter, initially proposed by Jiang et al. (Computational and Applied Mathematics, 2021, 40:174), through a convex combination technique. This improvement allows for an adaptive search direction by integrating a newly constructed spectral gradient-type restart strategy. We then develop a new spectral CGM that employs an inexact line search to determine the step size. With the weak Wolfe line search, we establish the sufficient descent property of the proposed search direction. Moreover, under general assumptions, and with the strong Wolfe line search for the step size, we prove the global convergence of the new algorithm. Finally, results on the given unconstrained optimization test problems show that the new algorithm is effective.
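A spectral CGM replaces the unit coefficient on the gradient with a spectral parameter θ_{k+1}, giving d_{k+1} = -θ_{k+1} g_{k+1} + β_k d_k. The JJSL parameter and the restart strategy of this paper are not reproduced here; the sketch below uses the Barzilai-Borwein ratio as a stand-in spectral parameter.

```python
import numpy as np

def spectral_cg_direction(g_new, d_old, s, y, beta):
    """Generic spectral CG direction d_{k+1} = -theta * g_{k+1} + beta * d_k.
    theta here is the Barzilai-Borwein ratio s's / s'y (an illustrative choice;
    the paper instead builds theta from the JJSL parameter and a restart test)."""
    sy = s @ y
    theta = (s @ s) / sy if sy > 1e-12 else 1.0  # guard against tiny curvature
    return -theta * g_new + beta * d_old
```

Here s = x_{k+1} - x_k and y = g_{k+1} - g_k, as in the usual quasi-Newton notation.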
Funding: Supported by Chinese NSF grant 19801033 and a youth innovation fund of CAS.
Abstract: Presents a study on methods for unconstrained optimization: assumptions of the study; main results; convergence properties of the methods under a simplified Armijo-type line search.
Abstract: In this paper, we provide and analyze a new scaled conjugate gradient method, based on the modified secant equation of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and on a new modified nonmonotone line search technique. The method incorporates the modified BFGS secant equation in an effort to include second-order information about the objective function. The new secant equation uses both gradient and function-value information, and its update formula inherits the positive definiteness of the Hessian approximation for general convex functions. To improve the likelihood of finding a global optimal solution, we introduce a new modified nonmonotone line search technique. It is shown that, for nonsmooth convex problems, the proposed algorithm is globally convergent. Numerical results show that the new scaled conjugate gradient algorithm is promising and efficient for solving not only convex but also some large-scale nonsmooth nonconvex problems, in the sense of the Dolan-Moré performance profiles.
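A widely used modified secant equation with function-value information replaces y_k by ỹ_k = y_k + (θ_k/‖s_k‖²) s_k with θ_k = 2(f_k - f_{k+1}) + (g_k + g_{k+1})ᵀ s_k; whether this exact variant matches the paper's is an assumption, since several θ_k terms appear in the literature.

```python
import numpy as np

def modified_y(s, y, f_old, f_new, g_old, g_new):
    """One common function-value-based modified secant vector:
    y_tilde = y + (theta / ||s||^2) * s,
    theta   = 2*(f_k - f_{k+1}) + (g_k + g_{k+1})'s.
    (An assumption: the paper's variant may use a different theta.)"""
    theta = 2.0 * (f_old - f_new) + (g_old + g_new) @ s
    return y + (theta / (s @ s)) * s
```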
Funding: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 19525101, 19731010, 19801033 and 10171104), and also by an Innovation Fund of the Academy of Mathematics and System Sciences of CAS.
Abstract: Conjugate gradient methods are very important for solving nonlinear optimization problems, especially large-scale problems. However, unlike quasi-Newton methods, conjugate gradient methods were usually analyzed individually. In this paper, we propose a class of conjugate gradient methods which can be regarded as a convex combination of the Fletcher-Reeves method and the method proposed by Dai et al. To analyze this class of methods, we introduce some unified tools for a general method whose scalar β_k has the form φ_k/φ_{k-1}. Consequently, the whole class of conjugate gradient methods can be analyzed uniformly.
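The unified form β_k = φ_k/φ_{k-1} recovers familiar methods through the choice of φ_k; for instance, φ_k = ‖g_k‖² gives exactly the Fletcher-Reeves value β_k^FR = ‖g_k‖²/‖g_{k-1}‖². A small illustration:

```python
import numpy as np

def beta_unified(phi_new, phi_old):
    """Unified scalar beta_k = phi_k / phi_{k-1}."""
    return phi_new / phi_old

# Fletcher-Reeves recovered from the unified form with phi_k = ||g_k||^2.
g_old, g_new = np.array([1.0, 2.0]), np.array([0.5, -0.5])
beta_fr = beta_unified(g_new @ g_new, g_old @ g_old)  # = ||g_new||^2/||g_old||^2
```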
Abstract: In this paper, a new class of three-term memory gradient methods with a nonmonotone line search technique for unconstrained optimization is presented. Global convergence properties of the new methods are discussed. Combining the quasi-Newton method with the new method, the former is modified to have the global convergence property. Numerical results show that the new algorithm is efficient.
Funding: This work is supported by the National Science Foundation of China (10571106) and the Foundation of Qufu Normal University.
Abstract: In this paper, a new region of β_k with respect to β_k^PRP is given. With two Armijo-type line searches, the authors investigate the global convergence properties of the dependent PRP conjugate gradient methods, which extend the global convergence results for the PRP conjugate gradient method proved by Grippo and Lucidi (1997) and Dai and Yuan (2002).
Funding: Supported by the Fund of Chongqing Education Committee (KJ091104).
Abstract: In this paper, an efficient conjugate gradient method is given for general unconstrained optimization problems, which guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation.
Funding: This research was supported by a Chinese NNSF grant and an NSF grant of Jiangsu Province.
Abstract: A subspace projected conjugate gradient method is proposed for solving large bound-constrained quadratic programming problems. The conjugate gradient method is used to update the variables with indices outside the active set, while the projected gradient method is used to update the active variables. At every iteration, the search direction consists of two parts: one is a subspace truncated Newton direction, the other is a modified gradient direction. With the projected search, the algorithm is suitable for large problems. The convergence of the method is proved, and some numerical tests with dimensions ranging from 5000 to 20000 are given.
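The split the abstract describes, projected-gradient updates on the bound-active variables and conjugate gradient updates on the free ones, can be sketched as follows. This is a schematic of the general active-set idea under assumed bounds lo ≤ x ≤ hi, not the paper's algorithm.

```python
import numpy as np

def active_set_split(x, g, lo, hi, tol=1e-10):
    """A variable is active when it sits at a bound and the gradient pushes it
    outward; the remaining (free) variables get the CG/truncated-Newton step
    on the reduced subspace."""
    active = ((np.abs(x - lo) < tol) & (g > 0)) | ((np.abs(x - hi) < tol) & (g < 0))
    return active, ~active

def projected_gradient_step(x, g, lo, hi, alpha):
    """Projected-gradient update, clipped back into the box [lo, hi]."""
    return np.clip(x - alpha * g, lo, hi)
```

In a full solver the free block would be updated by (truncated) CG on the reduced system, e.g. the submatrix A[np.ix_(free, free)] for a quadratic objective (1/2)xᵀAx - bᵀx.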
Abstract: The conjugate gradient method is one of the most successful methods for solving unconstrained optimization problems. In this paper, combining the advantages of the FR and CD methods, a hybrid conjugate gradient method is proposed. Under general Wolfe line searches, the proposed method generates a sufficient descent direction at each iteration, and its global convergence can be established. Some preliminary numerical results show that the proposed method is effective and stable on the given test problems.
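For reference, the two ingredients are β_k^FR = ‖g_{k+1}‖²/‖g_k‖² and the conjugate descent value β_k^CD = ‖g_{k+1}‖²/(-d_kᵀ g_k). The paper's hybridization rule is not reproduced here; the combination below is only an illustrative possibility.

```python
import numpy as np

def beta_fr(g_new, g_old):
    """Fletcher-Reeves: ||g_{k+1}||^2 / ||g_k||^2."""
    return (g_new @ g_new) / (g_old @ g_old)

def beta_cd(g_new, g_old, d_old):
    """Conjugate descent: ||g_{k+1}||^2 / (-d_k' g_k); d_k must be descent."""
    return (g_new @ g_new) / (-(d_old @ g_old))

def beta_hybrid(g_new, g_old, d_old):
    """An illustrative FR/CD combination (an assumption, not the paper's rule):
    clamp beta between 0 and the smaller of the two classical values."""
    return max(0.0, min(beta_fr(g_new, g_old), beta_cd(g_new, g_old, d_old)))
```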
Abstract: Three PRP-type direct search methods for unconstrained optimization are presented. The methods adopt three recently developed descent conjugate gradient methods and the idea of frame-based direct search methods. Global convergence is shown for continuously differentiable functions. Data profiles and performance profiles are adopted to analyze the numerical experiments, and the results show that the proposed methods are effective.
Abstract: A hybrid of the Polak-Ribière-Polyak (PRP) method and the Wei-Yao-Liu (WYL) method is proposed for unconstrained optimization problems. It possesses the following properties: i) the method inherits an important property of the well-known PRP method, the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, preventing a sequence of tiny steps from happening; ii) the non-negativity of the scalar β_k holds automatically; iii) global convergence under some line search rule is established for nonconvex functions. Numerical results show that the method is effective on the test problems.
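Property ii) refers to the fact that the WYL parameter is non-negative by construction: β_k^WYL = g_{k+1}ᵀ(g_{k+1} - (‖g_{k+1}‖/‖g_k‖) g_k)/‖g_k‖² ≥ 0 by the Cauchy-Schwarz inequality. A sketch of the parameter (the paper's hybridization rule itself is not reproduced):

```python
import numpy as np

def beta_wyl(g_new, g_old):
    """Wei-Yao-Liu: g_{k+1}'(g_{k+1} - (||g_{k+1}||/||g_k||) g_k) / ||g_k||^2.
    Cauchy-Schwarz makes this >= 0, which is property ii) above."""
    ratio = np.linalg.norm(g_new) / np.linalg.norm(g_old)
    return g_new @ (g_new - ratio * g_old) / (g_old @ g_old)
```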
Funding: Supported by the National Natural Science Foundation of China (Nos. 19801033 and 10171104).
Abstract: Two Armijo-type line searches are proposed in this paper for nonlinear conjugate gradient methods. Under these line searches, global convergence results are established for several famous conjugate gradient methods, including the Fletcher-Reeves method, the Polak-Ribière-Polyak method, and the conjugate descent method.
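The two modified line searches proposed in the paper are not reproduced here; for context, the plain Armijo backtracking rule they build on accepts the first step α = α0 ρ^j satisfying the sufficient-decrease condition:

```python
import numpy as np

def armijo_backtracking(f, x, d, g, f_x, delta=1e-4, rho=0.5, alpha0=1.0,
                        max_back=60):
    """Standard Armijo rule: shrink alpha until
    f(x + alpha*d) <= f(x) + delta * alpha * g'd."""
    slope = g @ d            # descent direction assumed: slope < 0
    alpha = alpha0
    for _ in range(max_back):
        if f(x + alpha * d) <= f_x + delta * alpha * slope:
            return alpha
        alpha *= rho
    return None
```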