Abstract: In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Further, we generalize the search direction to a more general form and also obtain the global convergence of the corresponding algorithm. Numerical results illustrate that the new algorithm is effective.
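The abstract gives no formula, so as a hedged illustration of what "descent without line search" can mean in its simplest form, here is a gradient step with a precomputed step size derived from an assumed Lipschitz bound L on the gradient; the paper's actual step-size rule and search directions may differ.

```python
import numpy as np

def fixed_step_descent(grad, x0, L, tol=1e-8, max_iter=10_000):
    """Gradient descent with the precomputed step size 1/L (L is an assumed upper
    bound on the Lipschitz constant of grad); no per-iteration line search is done.
    A step of 1/L decreases the objective by at least ||grad||^2 / (2L) per step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        x = x - g / L
    return x
```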
Funding: Sponsored by the Natural Science Foundation of Beijing Municipal Commission of Education (Grant No. KM200510028019).
Abstract: The non-quasi-Newton methods for unconstrained optimization were investigated. A non-monotone line search procedure is introduced and combined with the non-quasi-Newton family. Under the uniform convexity assumption on the objective function, the global convergence of the non-quasi-Newton family is proved. Numerical experiments show that the non-monotone line search is more effective.
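The abstract does not spell out the line search, so the sketch below shows one standard non-monotone Armijo test (the Grippo-Lampariello-Lucidi form), where the sufficient-decrease test compares against the maximum of recent function values instead of the current one. The names f_hist, sigma, and rho are illustrative, and the procedure actually combined with the non-quasi-Newton family may differ.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, sigma=1e-4, rho=0.5, max_backtracks=50):
    """Non-monotone Armijo test: accept alpha once
    f(x + alpha*d) <= max(f_hist) + sigma*alpha*g'd,
    where f_hist holds the last M objective values (M is the memory length)."""
    f_ref = max(f_hist)              # non-monotone reference value
    alpha = 1.0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f_ref + sigma * alpha * np.dot(g, d):
            break
        alpha *= rho                 # backtrack
    return alpha
```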
Funding: This work is supported by the National Natural Science Foundation of China.
Abstract: In this paper we consider the global convergence of any conjugate gradient method of the form d1 = -g1, dk+1 = -gk+1 + βk dk (k ≥ 1), with any βk satisfying some conditions, under the strong Wolfe line search conditions. Under a convexity assumption on the objective function, we prove the descent property and the global convergence of this method.
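As a minimal sketch of the two ingredients named in the abstract, the direction recursion and a strong Wolfe acceptance check (c1 and c2 are typical illustrative values; the admissibility conditions on βk are not reproduced):

```python
import numpy as np

def cg_direction(g_new, d_prev, beta):
    """Conjugate gradient recursion: d_{k+1} = -g_{k+1} + beta_k * d_k."""
    return -g_new + beta * d_prev

def satisfies_strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    """Strong Wolfe test for a trial step alpha; assumes d is a descent
    direction, i.e. grad(x)'d < 0."""
    gd = np.dot(grad(x), d)
    sufficient_decrease = f(x + alpha * d) <= f(x) + c1 * alpha * gd
    curvature = abs(np.dot(grad(x + alpha * d), d)) <= -c2 * gd
    return sufficient_decrease and curvature
```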
Funding: Supported by the Youth Project Foundation of Chongqing Three Gorges University (13QN17) and by the Fund of Scientific Research in Southeast University (the Support Project of Fundamental Research).
Abstract: Y. Liu and C. Storey (1992) proposed the well-known LS conjugate gradient method, which has good numerical results. However, the LS method has very weak convergence under a Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It guarantees the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments that show the efficiency of the proposed method in comparison with the well-known PRP+ method.
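For reference, a sketch of the classical Liu-Storey parameter on which the new method is based (the paper's modified descent variant is not reproduced here):

```python
import numpy as np

def beta_ls(g_new, g_old, d_old):
    """Liu-Storey (LS) parameter:
    beta = g_{k+1}'(g_{k+1} - g_k) / (-d_k' g_k)."""
    y = g_new - g_old
    return np.dot(g_new, y) / (-np.dot(d_old, g_old))
```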
Funding: Supported by NSFC Grants 10001031 and 70472074, and by NSERC Grant 283103.
Abstract: The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possible large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
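One common self-scaling update is the Oren-Luenberger scaled BFGS formula sketched below; whether the paper uses this exact scaling factor is an assumption, and its nonmonotone safeguards are not reproduced.

```python
import numpy as np

def self_scaling_bfgs_update(B, s, y):
    """Oren-Luenberger self-scaling BFGS update of the Hessian approximation B:
    B+ = tau * (B - B s s' B / (s' B s)) + y y' / (y' s),
    with tau = y's / (s' B s) damping overly large eigenvalues of B.
    Requires the curvature condition y's > 0."""
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    tau = ys / sBs
    return tau * (B - np.outer(Bs, Bs) / sBs) + np.outer(y, y) / ys
```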
Funding: This research is supported by the National Natural Science Foundation of China (10171055).
Abstract: In this paper, a new Wolfe-type line search and a new Armijo-type line search are proposed, and some global convergence properties of a three-term conjugate gradient method with the two line searches are proved.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11671122 and 11371253), the Key Scientific Research Project for Colleges and Universities in He'nan Province (Grant No. 15A110031), the Key Scientific and Technological Project of He'nan Province (Grant No. 162102210069), the Natural Science Foundation of He'nan Normal University (Grant No. 2014QK04), and the Ph.D. Research Foundation of He'nan Normal University (Grant Nos. QD13041 and QD14155).
Abstract: A trust-region sequential quadratic programming (SQP) method is developed and analyzed for the solution of smooth equality constrained optimization problems. The trust-region SQP algorithm is based on a filter line search technique and a composite-step approach, which decomposes the overall step into the sum of a vertical step and a horizontal step. The algorithm includes critical modifications of the horizontal step computation. An orthogonal projection matrix of the Jacobian of the constraint functions is employed in the trust-region subproblems; the orthogonal projection gives the null space of the transpose of the Jacobian of the constraint function. Theoretical analysis shows that the new algorithm retains global convergence to first-order critical points under rather general conditions. Preliminary numerical results are reported.
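A minimal sketch of the kind of orthogonal projector used in composite-step methods: it maps any vector into the null space of the constraint Jacobian A, so a horizontal step leaves the linearized constraints unchanged. Whether the paper builds its projector exactly this way is an assumption.

```python
import numpy as np

def nullspace_projector(A):
    """Orthogonal projector P = I - A'(A A')^{-1} A onto the null space of A
    (A is the m-by-n constraint Jacobian, assumed full row rank), so that
    A @ (P @ v) = 0 for every v."""
    n = A.shape[1]
    return np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)
```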
Funding: The project "large-scale scientific computing" of the State Commission of Science and Technology, China.
Abstract: The DFP method is one of the most famous numerical algorithms for unconstrained optimization. For uniformly convex objective functions, convergence properties of the DFP method are studied. Several conditions that ensure the global convergence of the DFP method are given.
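For reference, a sketch of the classic DFP update of the inverse Hessian approximation (the paper studies conditions on the iteration rather than modifying this formula):

```python
import numpy as np

def dfp_update(H, s, y):
    """DFP update of the inverse Hessian approximation H:
    H+ = H - (H y y' H) / (y' H y) + (s s') / (y' s).
    Requires the curvature condition y's > 0."""
    Hy = H @ y
    return H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (y @ s)
```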
Abstract: Evolutionary computation is a kind of adaptive non-numerical computation method designed to simulate the evolution of nature. In this paper, evolutionary algorithm behavior is described in terms of the construction and evolution of sampling distributions over the space of candidate solutions. Iterative construction of the sampling distributions is based on the idea of the global random search of generational methods. Under this framework, proportional selection is characterized as a global search operator, and recombination is characterized as a search process that exploits similarities. It is shown that, by properly constraining the search breadth of recombination operators, weak convergence of evolutionary algorithms to a global optimum can be ensured.
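A minimal sketch of proportional selection viewed as a sampling operator over candidate solutions (it assumes positive fitness values; the paper's distribution-level analysis is more general):

```python
import numpy as np

def proportional_selection(population, fitness, rng=None):
    """Roulette-wheel (proportional) selection: sample a new generation with
    each candidate drawn with probability proportional to its positive fitness."""
    if rng is None:
        rng = np.random.default_rng()
    p = np.asarray(fitness, dtype=float)
    p = p / p.sum()
    idx = rng.choice(len(population), size=len(population), p=p)
    return [population[i] for i in idx]
```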
Abstract: It is well known that line search methods play a very important role in optimization. In this paper a new line search method is proposed for solving unconstrained optimization problems. Under weak conditions, this method possesses global convergence for nonconvex functions and R-linear convergence for convex functions. Moreover, the given search direction has the sufficient descent property and belongs to a trust region without carrying out any line search rule. Numerical results show that the new method is effective.
Abstract: In this paper, we propose several new line search rules for solving unconstrained minimization problems. These new line search rules extend the accepted scope of step sizes beyond that of the corresponding original rules and give an adequate initial step size at each iteration. It is proved that the resulting line search algorithms have global convergence under some mild conditions. It is also shown that the search direction plays an important role in line search methods and that the step-size rules mainly guarantee global convergence in general cases. The convergence rate of these methods is also investigated. Some numerical results show that these new line search algorithms are effective in practical computation.
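A backtracking sketch of the two ingredients the abstract emphasizes, an adequate initial step size and an acceptance test; the widened acceptance rules proposed in the paper are not reproduced, and alpha0, sigma, and rho are illustrative parameters.

```python
import numpy as np

def armijo_from_initial_step(f, grad, x, d, alpha0=1.0, sigma=1e-4, rho=0.5,
                             max_backtracks=50):
    """Armijo backtracking started from a problem-adapted initial step alpha0:
    shrink alpha until f(x + alpha*d) <= f(x) + sigma*alpha*grad(x)'d.
    Assumes d is a descent direction, i.e. grad(x)'d < 0."""
    gd = np.dot(grad(x), d)
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f(x) + sigma * alpha * gd:
            break
        alpha *= rho
    return alpha
```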
Funding: Supported by the Key Project of 2010 Chongqing Higher Education Teaching Reform (Grant No. 102104).
Abstract: In this paper, we present a new nonlinear modified spectral CD conjugate gradient method for solving large-scale unconstrained optimization problems. The direction generated by the method is a descent direction for the objective function, and this property depends neither on the line search rule nor on the convexity of the objective function. Moreover, the modified method reduces to the standard CD method if the line search is exact. Under some mild conditions, we prove that the modified method with line search is globally convergent even if the objective function is nonconvex. Preliminary numerical results show that the proposed method is very promising.
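For reference, a sketch of the standard conjugate descent (CD) parameter of Fletcher, to which the modified method reduces under exact line searches (the spectral modification itself is not reproduced):

```python
import numpy as np

def beta_cd(g_new, g_old, d_old):
    """Fletcher's conjugate descent (CD) parameter:
    beta = ||g_{k+1}||^2 / (-d_k' g_k)."""
    return np.dot(g_new, g_new) / (-np.dot(d_old, g_old))
```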
Abstract: In this paper, the principle of the Cuckoo algorithm is introduced, and the traditional Cuckoo algorithm is improved to establish a mathematical model of multi-objective optimization scheduling. Based on the improved algorithm, the model is optimized to a certain extent. The analysis proves that the improved algorithm has higher computational accuracy and can effectively improve global convergence.
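A sketch of the Lévy-flight move that standard cuckoo search uses to generate new candidate nests (Mantegna's algorithm); the paper's multi-objective improvements are not reproduced here.

```python
from math import gamma, pi, sin

import numpy as np

def levy_flight_step(dim, beta=1.5, rng=None):
    """One Levy-flight step via Mantegna's algorithm: heavy-tailed random
    moves that let cuckoo search mix local steps with occasional long jumps."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)
```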
Funding: The National Natural Science Foundation of China (No. 60472071) and the Science Foundation of Beijing Municipal Commission of Education (No. KM200710028001).
Abstract: In this paper, a new class of memoryless non-quasi-Newton methods for solving unconstrained optimization problems is proposed, and the global convergence of this method with inexact line search is proved. Furthermore, we propose a hybrid method that mixes both the memoryless non-quasi-Newton method and the memoryless Perry-Shanno quasi-Newton method. The global convergence of this hybrid memoryless method is proved under mild assumptions. Initial results show that these new methods are efficient on the given test problems. In particular, the memoryless non-quasi-Newton method requires little storage and computation, so it can efficiently solve large-scale optimization problems.
Abstract: This paper presents a new conjugate gradient method for unconstrained optimization. This method reduces to the Polak-Ribière-Polyak method when line searches are exact, but their performances are different in the case of inexact line searches. By a simple example, we show that the Wolfe conditions do not ensure that the present method and the Polak-Ribière-Polyak method will produce descent directions, even under the assumption that the objective function is strictly convex. This result contradicts the folk axiom that the Polak-Ribière-Polyak method with the Wolfe line search should find the minimizer of a strictly convex objective function. Finally, we show that there are two ways to improve the new method such that it is globally convergent.
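For reference, a sketch of the Polak-Ribière-Polyak parameter to which the new method reduces under exact line searches (the new method itself and the counterexample are not reproduced):

```python
import numpy as np

def beta_prp(g_new, g_old):
    """Polak-Ribiere-Polyak (PRP) parameter:
    beta = g_{k+1}'(g_{k+1} - g_k) / ||g_k||^2."""
    return np.dot(g_new, g_new - g_old) / np.dot(g_old, g_old)
```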
Abstract: The convergence of quasi-Newton methods for unconstrained optimization has attracted much attention. Powell proved a global convergence result for the BFGS algorithm using an inexact line search satisfying the Wolfe conditions. Byrd, Nocedal and Yuan extended this result to the convex Broyden class of quasi-Newton methods except the DFP method. However, the global convergence of the DFP method, the first quasi-Newton method, using the same line search strategy, is still an open question (see ref. [2]).
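For comparison with the DFP sketch above, a minimal sketch of the standard BFGS update whose global convergence Powell established under Wolfe line searches:

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS update of the Hessian approximation B:
    B+ = B - (B s s' B) / (s' B s) + (y y') / (y' s).
    The Wolfe conditions guarantee y's > 0, preserving positive definiteness."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```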
Abstract: In this paper, we propose and analyze a non-monotone trust region method with a non-monotone line search strategy for unconstrained optimization problems. Unlike the traditional non-monotone trust region method, our algorithm utilizes a non-monotone Wolfe line search to get the next point if a trial step is not adopted; thus it can reduce the number of trust-region subproblems solved. Theoretical analysis shows that the newly proposed method has global convergence under some mild conditions.
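A structural sketch of the acceptance logic described in the abstract: test the trial step with a non-monotone reduction ratio, and on rejection signal the caller to fall back to a line search instead of re-solving the subproblem. The names pred, f_ref, and eta are illustrative; the paper's exact ratio and line search are not reproduced.

```python
def accept_or_fallback(f, x, step, pred, f_ref, eta=0.1):
    """Non-monotone trust-region acceptance test.
    pred  : model-predicted reduction for the trial step (assumed > 0)
    f_ref : non-monotone reference value, e.g. max of recent objective values
    Returns the new iterate and whether the trial step was accepted; on
    rejection the caller runs a non-monotone Wolfe line search along the step."""
    ared = f_ref - f(x + step)       # non-monotone actual reduction
    if ared / pred >= eta:
        return x + step, True
    return x, False
```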