Abstract: In this paper, a rank-one update method for solving symmetric nonlinear equations is proposed. The method has the following features: 1) the updated matrix is positive definite regardless of the line search technique used; 2) the search direction is a descent direction for the norm function; 3) global convergence of the method is established under reasonable conditions. Numerical results show that the presented method is promising.
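For context, a minimal sketch of the setting (not the paper's specific update): for symmetric nonlinear equations F(x) = 0, whose Jacobian J(x) is symmetric, the norm (merit) function and the descent condition referred to above are

```latex
f(x) = \tfrac{1}{2}\|F(x)\|^2, \qquad \nabla f(x) = J(x)^{\top} F(x) = J(x) F(x),
```

and a direction d_k is descent for the norm function when \nabla f(x_k)^{\top} d_k < 0; positive definiteness of the updated matrix B_k guarantees this for d_k = -B_k^{-1}\nabla f(x_k).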
Abstract: In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Further, we generalize the search direction to a more general form and obtain the global convergence of the corresponding algorithm. The numerical results illustrate that the new algorithm is effective.
Abstract: In this paper we propose a new family of curve search methods for unconstrained optimization problems. These methods search for a new iterate along a curve through the current iterate at each iteration, whereas line search methods find the new iterate on a line starting from the current iterate. The global convergence and linear convergence rate of these curve search methods are investigated under mild conditions. Numerical results show that some curve search methods are stable and effective in solving large-scale minimization problems.
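As a schematic illustration (the specific curves used in the paper are not reproduced here), a curve search generates

```latex
x_{k+1} = x_k(\alpha_k), \qquad x_k(0) = x_k, \qquad x_k'(0) = d_k \ \text{ with } \ \nabla f(x_k)^{\top} d_k < 0,
```

where x_k(\cdot) is a smooth curve; choosing the straight line x_k(\alpha) = x_k + \alpha d_k recovers an ordinary line search.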
Funding: Supported by the Youth Project Foundation of Chongqing Three Gorges University (13QN17) and the Fund of Scientific Research in Southeast University (the Support Project of Fundamental Research).
Abstract: Y. Liu and C. Storey (1992) proposed the well-known LS conjugate gradient method, which has good numerical performance. However, the LS method has very weak convergence properties under Wolfe-type line searches. In this paper, we give a new descent gradient method based on the LS method. It guarantees the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments that show the efficiency of the proposed method in comparison with the well-known PRP+ method.
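For reference, the classical Liu–Storey (LS) conjugate gradient direction on which the proposed method builds is

```latex
d_k = -g_k + \beta_k^{LS} d_{k-1}, \qquad \beta_k^{LS} = \frac{g_k^{\top}(g_k - g_{k-1})}{-d_{k-1}^{\top} g_{k-1}},
```

with g_k = \nabla f(x_k) and d_0 = -g_0; the sufficient descent property mentioned above means g_k^{\top} d_k \le -c\|g_k\|^2 for some constant c > 0.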
Abstract: In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. Using a steplength technique that ensures the Zoutendijk condition holds, the method is proved to be globally convergent. Finally, we improve the method and carry out further analysis.
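For completeness, the Zoutendijk condition referred to here is the summability condition

```latex
\sum_{k \ge 0} \frac{(g_k^{\top} d_k)^2}{\|d_k\|^2} < \infty,
```

which, combined with the sufficient descent property g_k^{\top} d_k \le -c\|g_k\|^2 and a suitable bound on \|d_k\|, yields \liminf_{k\to\infty} \|g_k\| = 0 under standard assumptions.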
Funding: This research is supported by the Chinese Natural Science Foundation (Nos. 11631013, 11971372) and the National 973 Program of China (No. 2015CB856002). The authors are grateful for the valuable comments and suggestions of two anonymous referees, and thank Dr. Hui Zhang of the National University of Defense Technology for his many suggestions and comments on an early draft of this paper.
Abstract: The convergence rate of the gradient descent method is considered for unconstrained multi-objective optimization problems (MOP). Under standard assumptions, we prove that the gradient descent method with constant stepsizes converges sublinearly when the objective functions are convex, and that the convergence rate can be strengthened to linear when the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOP enjoys the same convergence properties as its scalar-optimization counterpart.
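As an illustration of the kind of scheme analyzed here (a standard multi-objective steepest-descent method with Armijo backtracking in the style of Fliege and Svaiter, not necessarily the exact variant studied in the paper), the following sketch computes the common descent direction by minimizing the norm of a convex combination of the gradients over the simplex; the objectives f1, f2 and their gradients are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def steepest_descent_direction(grads):
    """Common descent direction d = -G^T lam, where lam minimizes ||G^T lam||^2
    over the probability simplex (G stacks the objective gradients row-wise)."""
    G = np.asarray(grads)                      # shape (m, n)
    m = G.shape[0]
    res = minimize(lambda lam: 0.5 * float((G.T @ lam) @ (G.T @ lam)),
                   x0=np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
                   method="SLSQP")
    return -G.T @ res.x

def armijo_step(fs, grads, x, d, sigma=1e-4, beta=0.5, max_backtracks=50):
    """Largest t in {1, beta, beta^2, ...} with f_i(x + t d) <= f_i(x) + sigma*t*grad_i^T d for all i."""
    t = 1.0
    fx = np.array([f(x) for f in fs])
    slopes = np.array([g @ d for g in grads])
    for _ in range(max_backtracks):
        if np.all(np.array([f(x + t * d) for f in fs]) <= fx + sigma * t * slopes):
            return t
        t *= beta
    return t

# Toy bi-objective example (placeholder functions, used only to exercise the sketch).
f1 = lambda x: np.sum((x - 1.0) ** 2)
f2 = lambda x: np.sum((x + 1.0) ** 2)
g1 = lambda x: 2.0 * (x - 1.0)
g2 = lambda x: 2.0 * (x + 1.0)

x = np.array([3.0, -2.0])
for _ in range(50):
    grads = [g1(x), g2(x)]
    d = steepest_descent_direction(grads)
    if np.linalg.norm(d) < 1e-8:               # Pareto-critical point reached
        break
    x = x + armijo_step([f1, f2], grads, x, d) * d
print(x)
```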
Funding: Supported by the National Natural Science Foundation of China (Nos. 12171106 and 72071202), the Natural Science Foundation of Guangxi Province (No. 2020GXNSFDA238017), and the Key Laboratory of Mathematics and Engineering Applications, Ministry of Education.
Abstract: This work explores a family of two-block nonconvex optimization problems subject to linear constraints. We first introduce a simple but universal Bregman-style improved alternating direction method of multipliers (ADMM) based on the iteration framework of ADMM and the Bregman distance. Then, exploiting the smoothness of one of the component functions, we develop a linearized version of it. Compared with the traditional ADMM, both proposed methods integrate a convex combination strategy into the multiplier update step. For each proposed method, we demonstrate convergence of the entire iteration sequence to a unique critical point of the augmented Lagrangian function by means of the Kurdyka–Łojasiewicz property, and we also derive convergence rates for both the sequence of merit function values and the iteration sequence. Finally, numerical results show that the proposed methods are effective and encouraging for the Lasso model.
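For orientation, the classical two-block ADMM applied to the Lasso model (the baseline against which Bregman-style variants such as those proposed here are usually compared; this is not the paper's algorithm) is sketched below. The problem is min_x 0.5‖Ax − b‖² + μ‖z‖₁ subject to x − z = 0, with penalty parameter ρ.

```python
import numpy as np

def lasso_admm(A, b, mu, rho=1.0, iters=200):
    """Classical two-block ADMM for the Lasso: min 0.5||Ax-b||^2 + mu*||z||_1, s.t. x - z = 0."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                               # scaled dual variable (multiplier / rho)
    AtA_rhoI = A.T @ A + rho * np.eye(n)          # formed once, reused in every x-update
    Atb = A.T @ b
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of the l1 norm
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))   # x-update: ridge-type linear system
        z = soft(x + u, mu / rho)                            # z-update: soft-thresholding
        u = u + x - z                                        # (scaled) multiplier update
    return z

# Small synthetic example.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.count_nonzero(np.abs(lasso_admm(A, b, mu=0.1)) > 1e-6))
```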
Funding: Supported by the National Natural Science Foundation of China (No. 12171106) and the Natural Science Foundation of Guangxi Province (No. 2020GXNSFDA238017).
Abstract: The alternating direction method of multipliers (ADMM) is one of the most successful and powerful methods for separable minimization. Based on the idea of symmetric ADMM for two-block optimization, we add an updating formula for the Lagrange multiplier without restricting its position in the multi-block case. Combining this with the Bregman distance, a Bregman-style partially symmetric ADMM is presented for nonconvex multi-block optimization with linear constraints, in which the Lagrange multiplier is updated twice with different relaxation factors in the iteration scheme. Under suitable conditions, the global convergence, strong convergence, and convergence rate of the presented method are established. Finally, some preliminary numerical results are reported to support the theoretical assertions and show that the presented method is numerically effective.
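For background, the two-block symmetric ADMM that motivates this scheme updates the multiplier twice per iteration with relaxation factors r and s (a standard formulation; the paper's multi-block Bregman version modifies these steps):

```latex
\begin{aligned}
x^{k+1} &= \arg\min_{x}\; \mathcal{L}_\beta\big(x, y^{k}, \lambda^{k}\big),\\
\lambda^{k+\frac{1}{2}} &= \lambda^{k} - r\beta\,\big(Ax^{k+1} + By^{k} - b\big),\\
y^{k+1} &= \arg\min_{y}\; \mathcal{L}_\beta\big(x^{k+1}, y, \lambda^{k+\frac{1}{2}}\big),\\
\lambda^{k+1} &= \lambda^{k+\frac{1}{2}} - s\beta\,\big(Ax^{k+1} + By^{k+1} - b\big),
\end{aligned}
```

where \mathcal{L}_\beta(x,y,\lambda) = f(x) + g(y) - \lambda^{\top}(Ax + By - b) + \tfrac{\beta}{2}\|Ax + By - b\|^2 is the augmented Lagrangian with penalty parameter \beta.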
Funding: Supported by the National Natural Science Foundation of China (No. 61662036).
Abstract: The alternating direction method of multipliers (ADMM) has received much attention in recent years due to demands from machine learning and big-data-related optimization. In 2013, Ouyang et al. extended the ADMM to the stochastic setting for solving some stochastic optimization problems, inspired by the structural risk minimization principle. In this paper, we consider a stochastic variant of symmetric ADMM, named symmetric stochastic linearized ADMM (SSL-ADMM). In particular, using the framework of variational inequalities, we analyze the convergence properties of SSL-ADMM. Moreover, we show that, with high probability, SSL-ADMM has an O((ln N)·N^(-1/2)) constraint violation bound and objective error bound for convex problems, and an O((ln N)^2·N^(-1)) constraint violation bound and objective error bound for strongly convex problems, where N is the iteration number. Symmetric ADMM can improve algorithmic performance compared with classical ADMM, and numerical experiments on statistical machine learning problems show that this improvement is also present in the stochastic setting.
Funding: Supported by NSFC Grant 10601043, NCETXMU, and SRF for ROCS, SEM; by RGC 201508 and HKBU FRGs; and by the Hong Kong Research Grant Council.
Abstract: This paper presents a coordinate gradient descent approach for minimizing the sum of a smooth function and a nonseparable convex function. We find a search direction by solving a subproblem obtained from a second-order approximation of the smooth function with a separable convex function added. Under a local Lipschitzian error bound assumption, we show that the algorithm possesses global and local linear convergence properties. We also give some numerical tests (including image recovery examples) to illustrate the efficiency of the proposed method.
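As a simplified illustration of the coordinate framework (specialized, for readability, to an l1 regularizer and a diagonal second-order approximation rather than the nonseparable setting treated in the paper), each coordinate subproblem reduces to a scalar soft-thresholding step:

```python
import numpy as np

def coord_gradient_descent_l1(A, b, lam, iters=100):
    """Cyclic coordinate descent for min 0.5||Ax-b||^2 + lam*||x||_1.
    Each coordinate subproblem uses the exact diagonal curvature h_j = ||A[:, j]||^2
    and is solved in closed form by soft-thresholding."""
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                                  # residual, kept up to date
    h = np.sum(A * A, axis=0)                      # per-coordinate curvature
    for _ in range(iters):
        for j in range(n):
            g_j = A[:, j] @ r                      # partial derivative of the smooth part
            z = x[j] - g_j / h[j]                  # unregularized coordinate minimizer
            x_new = np.sign(z) * max(abs(z) - lam / h[j], 0.0)   # soft-threshold
            r += A[:, j] * (x_new - x[j])          # incremental residual update
            x[j] = x_new
    return x

# Tiny example.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
print(coord_gradient_descent_l1(A, b, lam=0.5))
```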
Funding: Supported by the Research Council of Semnan University.
Abstract: A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak and Ribière, and by Polyak, is suggested. Based on an eigenvalue analysis, it is shown that the search directions of the proposed method satisfy the sufficient descent condition, independently of the line search and of the convexity of the objective function. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
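For reference, the three-term PRP direction of Zhang et al. that such hybrids build on can be written as (a commonly used form; the hybrid parameters of this particular paper are not reproduced):

```latex
d_k = -g_k + \beta_k^{PRP} d_{k-1} - \theta_k y_{k-1}, \qquad
\beta_k^{PRP} = \frac{g_k^{\top} y_{k-1}}{\|g_{k-1}\|^2}, \qquad
\theta_k = \frac{g_k^{\top} d_{k-1}}{\|g_{k-1}\|^2},
```

with y_{k-1} = g_k - g_{k-1}; a direct computation gives g_k^{\top} d_k = -\|g_k\|^2, so the sufficient descent condition holds independently of the line search.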
Funding: Supported by the Fund of the Chongqing Education Committee (KJ091104).
Abstract: In this paper, an efficient conjugate gradient method is given for general unconstrained optimization problems; it guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation.
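For completeness, the strong Wolfe line search conditions referred to here require the steplength \alpha_k to satisfy

```latex
f(x_k + \alpha_k d_k) \le f(x_k) + \delta\,\alpha_k\, g_k^{\top} d_k, \qquad
|g(x_k + \alpha_k d_k)^{\top} d_k| \le \sigma\, |g_k^{\top} d_k|,
```

with constants 0 < \delta < \sigma < 1.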
Abstract: In this paper, three new hybrid nonlinear conjugate gradient methods are presented, each of which produces a sufficient descent search direction at every iteration. This property is independent of the line search and of the convexity of the objective function. Under suitable conditions, we prove that the proposed methods converge globally for general nonconvex functions. The numerical results show that all three new hybrid methods are efficient for the given test problems.
Abstract: Three PRP-type direct search methods for unconstrained optimization are presented. The methods combine three kinds of recently developed descent conjugate gradient methods with the idea of frame-based direct search methods. Global convergence is shown for continuously differentiable functions. Data profiles and performance profiles are adopted to analyze the numerical experiments, and the results show that the proposed methods are effective.
Funding: The research of S.-Q. Ma was supported in part by the Hong Kong Research Grants Council General Research Fund Early Career Scheme (No. CUHK 439513). The research of S.-Z. Zhang was supported in part by the National Natural Science Foundation (No. CMMI 1161242).
Abstract: The alternating direction method of multipliers (ADMM) is widely used for solving structured convex optimization problems. Despite its success in practice, the convergence of the standard ADMM for minimizing the sum of N (N ≥ 3) convex functions, whose variables are linked by linear constraints, remained unclear for a very long time. Recently, Chen et al. (Math Program, doi:10.1007/s10107-014-0826-5, 2014) provided a counter-example showing that the ADMM for N ≥ 3 may fail to converge without further conditions. Since the ADMM for N ≥ 3 has been very successful when applied to many problems arising in practice, it is worth investigating under what sufficient conditions its convergence can be guaranteed. In this paper, we present such sufficient conditions that guarantee a sublinear convergence rate for the ADMM with N ≥ 3. Specifically, we show that if one of the functions is convex (not necessarily strongly convex), the other N − 1 functions are strongly convex, and the penalty parameter lies in a certain region, then the ADMM converges with rate O(1/t) in a certain ergodic sense and o(1/t) in a certain non-ergodic sense, where t denotes the number of iterations. As a by-product, we also provide a simple proof of the O(1/t) convergence rate of the two-block ADMM in terms of both objective error and constraint violation, without assuming any condition on the penalty parameter or strong convexity of the functions.
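For reference, the standard (Gauss–Seidel) N-block ADMM analyzed here, applied to min \sum_i f_i(x_i) subject to \sum_i A_i x_i = b, performs the sweep

```latex
\begin{aligned}
x_i^{k+1} &= \arg\min_{x_i}\; \mathcal{L}_\beta\big(x_1^{k+1},\dots,x_{i-1}^{k+1}, x_i, x_{i+1}^{k},\dots,x_N^{k}; \lambda^{k}\big), \qquad i = 1,\dots,N,\\
\lambda^{k+1} &= \lambda^{k} - \beta\Big(\sum_{i=1}^{N} A_i x_i^{k+1} - b\Big),
\end{aligned}
```

where \mathcal{L}_\beta(x_1,\dots,x_N;\lambda) = \sum_i f_i(x_i) - \lambda^{\top}\big(\sum_i A_i x_i - b\big) + \tfrac{\beta}{2}\big\|\sum_i A_i x_i - b\big\|^2 is the augmented Lagrangian with penalty parameter \beta.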
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 71061002 and 10771040, the Guangxi Science Foundation under Grant No. 0832052, and the Science Foundation of the Guangxi Education Department under Grant No. 200911MS202.
Abstract: In this paper, nonlinear optimization problems with inequality constraints are discussed. Combining the ideas of the strongly sub-feasible directions method and the s-generalized projection technique, a new algorithm that can start from an arbitrary initial iteration point is presented for the problems under discussion. At each iteration, the search direction is generated by a new s-generalized projection explicit formula, and the step length is obtained by a new Armijo line search. Under some necessary assumptions, the algorithm possesses global and strong convergence, and the iterates enter the feasible set after finitely many iterations. Finally, some preliminary numerical results are reported.
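For orientation, the classical Armijo backtracking rule that such modified line searches build on accepts the largest steplength of the form \alpha = \beta^{j}, j = 0, 1, 2, \dots, satisfying

```latex
f(x_k + \alpha d_k) \le f(x_k) + \sigma\,\alpha\, \nabla f(x_k)^{\top} d_k,
```

with constants \beta, \sigma \in (0, 1); the paper uses a modified Armijo rule adapted to its sub-feasible framework, which is not reproduced here.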