Journal Articles
567 articles found
1. Global Convergence of an Extended Descent Algorithm without Line Search for Unconstrained Optimization
Authors: Cuiling Chen, Liling Luo, Caihong Han, Yu Chen. Journal of Applied Mathematics and Physics, 2018, No. 1, pp. 130-137.
Abstract: In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Further, we generalize the search direction to a more general form and also obtain the global convergence of the corresponding algorithm. The numerical results illustrate that the new algorithm is effective.
Keywords: unconstrained optimization; descent method; line search; global convergence
2. GLOBAL CONVERGENCE OF THE GENERAL THREE TERM CONJUGATE GRADIENT METHODS WITH THE RELAXED STRONG WOLFE LINE SEARCH
Authors: Xu Zeshui, Yue Zhenjun (Institute of Sciences, PLA University of Science and Technology, Nanjing 210016). Applied Mathematics (A Journal of Chinese Universities), 2001, No. 1, pp. 58-62.
Abstract: The global convergence of the general three-term conjugate gradient methods with the relaxed strong Wolfe line search is proved.
Keywords: conjugate gradient method; inexact line search; global convergence
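For reference, the strong Wolfe conditions underlying results like the one above combine a sufficient-decrease test with a two-sided curvature bound. A minimal Python sketch of the standard form (the paper's "relaxed" variant loosens the curvature bound; the constants c1 and c2 here are illustrative):

```python
def satisfies_strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    """Check the (standard) strong Wolfe conditions for step size alpha.

    f, grad: objective and its gradient (callables on a list x);
    d: search direction; 0 < c1 < c2 < 1.
    """
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x_new = [xi + alpha * di for xi, di in zip(x, d)]
    g0_d = dot(grad(x), d)  # directional derivative at x; < 0 for a descent direction
    sufficient_decrease = f(x_new) <= f(x) + c1 * alpha * g0_d
    curvature = abs(dot(grad(x_new), d)) <= c2 * abs(g0_d)
    return sufficient_decrease and curvature
```

For example, for f(x) = x^2 at x = 1 with d = -2 (steepest descent), the exact minimizing step alpha = 0.5 satisfies both conditions, while the overshooting step alpha = 2 fails the decrease test.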
3. The global convergence of the non-quasi-Newton methods with non-monotone line search
Authors: Jiao Baocong, Liu Hongwei. Journal of Harbin Institute of Technology (New Series), 2006, No. 6, pp. 758-762.
Abstract: The non-quasi-Newton methods for unconstrained optimization are investigated. A non-monotone line search procedure is introduced and combined with the non-quasi-Newton family. Under the uniform convexity assumption on the objective function, the global convergence of the non-quasi-Newton family is proved. Numerical experiments show that the non-monotone line search is more effective.
Keywords: non-quasi-Newton method; non-monotone line search; global convergence; unconstrained optimization
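A non-monotone line search of the kind combined with the non-quasi-Newton family above accepts a step when it decreases the objective relative to the maximum of several recent function values rather than the current one. A rough backtracking sketch (the paper's exact acceptance rule may differ; constants are illustrative):

```python
def nonmonotone_backtracking(f, fx_history, x, d, g, c1=1e-4, rho=0.5, max_iter=50):
    """Backtracking line search with a nonmonotone (Grippo-Lampariello-Lucidi
    style) acceptance test: compare against the max of recent objective values.

    fx_history: list of recent f(x_k) values; d: descent direction; g: grad at x.
    """
    f_ref = max(fx_history)                      # nonmonotone reference value
    g_d = sum(gi * di for gi, di in zip(g, d))   # directional derivative, < 0
    alpha = 1.0
    for _ in range(max_iter):
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        if f(x_new) <= f_ref + c1 * alpha * g_d:
            return alpha, x_new
        alpha *= rho                             # shrink the step and retry
    return alpha, x_new
```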
4. ON THE GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITH INEXACT LINESEARCH
Authors: Liu Guanghui, Han Jiye. Numerical Mathematics: A Journal of Chinese Universities (English Series), 1995, No. 2, pp. 147-153.
Abstract: In this paper we consider the global convergence of any conjugate gradient method of the form d1 = -g1, dk+1 = -gk+1 + βk dk (k ≥ 1), with any βk satisfying some conditions, under the strong Wolfe line search conditions. Under a convexity assumption on the objective function, we prove the descent property and the global convergence of this method.
Keywords: conjugate gradient method; strong Wolfe line search; global convergence
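The generic direction recursion d1 = -g1, dk+1 = -gk+1 + βk dk in the abstract can be sketched as follows, using the Fletcher-Reeves choice as one concrete instance of a βk "satisfying some conditions":

```python
def cg_direction(g_new, g_old, d_old):
    """Next conjugate-gradient direction d_{k+1} = -g_{k+1} + beta_k * d_k,
    with the Fletcher-Reeves parameter beta_FR = ||g_{k+1}||^2 / ||g_k||^2
    (one of many beta_k formulas covered by results of this type)."""
    beta = sum(gi * gi for gi in g_new) / sum(gi * gi for gi in g_old)
    return [-gn + beta * do for gn, do in zip(g_new, d_old)]
```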
5. A New Class of Nonlinear Conjugate Gradient Methods with Global Convergence Properties (cited 1 time)
Author: Chen Zhong. Journal of Yangtze University (Natural Science Edition), 2014, No. 3, pp. I0001-I0003.
Abstract: Because their iterations are simple, their storage requirements are small, and their search directions need not satisfy the secant condition, nonlinear conjugate gradient methods occupy an important place in solving large-scale unconstrained optimization problems. A new class of conjugate gradient methods is proposed whose search direction is a descent direction of the objective function. Assuming that the objective function is continuously differentiable, that its gradient satisfies a Lipschitz condition, and that the line search satisfies the Wolfe conditions, the global convergence of the proposed algorithm is discussed.
6. A Descent Gradient Method and Its Global Convergence
Author: Liu Jinkui. Chinese Quarterly Journal of Mathematics, 2014, No. 1, pp. 142-150.
Abstract: Y. Liu and C. Storey (1992) proposed the famous LS conjugate gradient method, which has good numerical results. However, the LS method has very weak convergence under the Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It can guarantee the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments showing the efficiency of the proposed method in comparison with the famous PRP+ method.
Keywords: unconstrained optimization; conjugate gradient method; strong Wolfe line search; sufficient descent property; global convergence
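The PRP+ method used as the comparison baseline above truncates the Polak-Ribiere-Polyak parameter at zero, a standard globalization device. A one-function sketch:

```python
def beta_prp_plus(g_new, g_old):
    """PRP+ conjugate-gradient parameter: the Polak-Ribiere-Polyak value
    beta_PRP = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2, truncated at zero."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    y = [a - b for a, b in zip(g_new, g_old)]   # gradient difference
    beta_prp = dot(g_new, y) / dot(g_old, g_old)
    return max(0.0, beta_prp)                   # the "+" truncation
```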
7. The Global Convergence of Self-Scaling BFGS Algorithm with Nonmonotone Line Search for Unconstrained Nonconvex Optimization Problems
Authors: Hong Xia Yin, Dong Lei Du. Acta Mathematica Sinica, English Series, 2007, No. 7, pp. 1233-1240.
Abstract: The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a weaker condition than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
Keywords: nonmonotone line search; self-scaling BFGS method; global convergence
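The self-scaling idea described above multiplies the inverse-Hessian approximation by a scaling factor before the usual BFGS update. A sketch using the common Oren-Luenberger style factor tau = (y.s)/(y.H.y); the paper's precise scaling choice may differ:

```python
import numpy as np

def self_scaling_bfgs_update(H, s, y):
    """One self-scaling BFGS update of the inverse-Hessian approximation H,
    where s = x_{k+1} - x_k and y = g_{k+1} - g_k with s^T y > 0.

    H is scaled by tau = (y^T s)/(y^T H y) before the standard BFGS formula
    H+ = (I - rho s y^T)(tau H)(I - rho y s^T) + rho s s^T, rho = 1/(s^T y).
    """
    s, y = np.asarray(s, float), np.asarray(y, float)
    sy = float(s @ y)                    # curvature, assumed positive
    tau = sy / float(y @ H @ y)          # self-scaling factor
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ (tau * H) @ V.T + rho * np.outer(s, s)
```

Regardless of the scaling factor, the updated matrix still satisfies the secant condition H+ y = s, which makes for an easy sanity check.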
8. SUPERLINEAR CONVERGENCE OF THE DFP ALGORITHM WITHOUT EXACT LINE SEARCH
Author: Pu Dingguo. Acta Mathematicae Applicatae Sinica, 2001, No. 3, pp. 430-432.
Keywords: DFP algorithm; superlinear convergence; line search
9. GLOBAL CONVERGENCE PROPERTIES OF THREE-TERM CONJUGATE GRADIENT METHOD WITH NEW-TYPE LINE SEARCH (cited 13 times)
Authors: Wang Changyu, Du Shouqiang, Chen Yuanyuan. Journal of Systems Science & Complexity, 2004, No. 3, pp. 412-420.
Abstract: In this paper, a new Wolfe-type line search and a new Armijo-type line search are proposed, and some global convergence properties of a three-term conjugate gradient method with the two line searches are proved.
Keywords: unconstrained optimization; line search; three-term conjugate gradient method; global convergence
10. On the Global Convergence of a Projective Trust Region Algorithm for Nonlinear Equality Constrained Optimization
Authors: Yong Gang Pei, De Tong Zhu. Acta Mathematica Sinica, English Series, 2018, No. 12, pp. 1804-1828.
Abstract: A trust-region sequential quadratic programming (SQP) method is developed and analyzed for the solution of smooth equality constrained optimization problems. The trust-region SQP algorithm is based on a filter line search technique and a composite-step approach, which decomposes the overall step as the sum of a vertical step and a horizontal step. The algorithm includes critical modifications of the horizontal step computation. An orthogonal projection matrix of the Jacobian of the constraint functions is employed in the trust-region subproblems; the orthogonal projection gives the null space of the transposition of the Jacobian of the constraint function. Theoretical analysis shows that the new algorithm retains global convergence to first-order critical points under rather general conditions. Preliminary numerical results are reported.
Keywords: sequential quadratic programming; trust region; filter line search; projection; global convergence
11. Convergence of DFP algorithm (cited 1 time)
Author: Yuan Yaxiang. Science China Mathematics, 1995, No. 11, pp. 1281-1294.
Abstract: The DFP method is one of the most famous numerical algorithms for unconstrained optimization. For uniformly convex objective functions, convergence properties of the DFP method are studied, and several conditions that can ensure its global convergence are given.
Keywords: global convergence; DFP algorithm; line search
12. AN ANALYSIS ABOUT BEHAVIOR OF EVOLUTIONARY ALGORITHMS: A KIND OF THEORETICAL DESCRIPTION BASED ON GLOBAL RANDOM SEARCH METHODS (cited 1 time)
Authors: Ding Lixin, Kang Lishan, Chen Yupin, Zhou Shaoquan. Wuhan University Journal of Natural Sciences, 1998, No. 1, p. 31.
Abstract: Evolutionary computation is a kind of adaptive non-numerical computation method designed to simulate the evolution of nature. In this paper, evolutionary algorithm behavior is described in terms of the construction and evolution of sampling distributions over the space of candidate solutions. Iterative construction of the sampling distributions is based on the idea of the global random search of generational methods. Under this framework, proportional selection is characterized as a global search operator, and recombination is characterized as a search process that exploits similarities. It is shown that, by properly constraining the search breadth of recombination operators, weak convergence of evolutionary algorithms to a global optimum can be ensured.
Keywords: global random search; evolutionary algorithms; weak convergence; genetic algorithms
13. A Line Search Algorithm for Unconstrained Optimization (cited 1 time)
Authors: Gonglin Yuan, Sha Lu, Zengxin Wei. Journal of Software Engineering and Applications, 2010, No. 5, pp. 503-509.
Abstract: It is well known that line search methods play a very important role in optimization. In this paper a new line search method is proposed for solving unconstrained optimization problems. Under weak conditions, this method possesses global convergence for nonconvex functions and R-linear convergence for convex functions. Moreover, the given search direction has the sufficient descent property and belongs to a trust region without carrying out any line search rule. Numerical results show that the new method is effective.
Keywords: line search; unconstrained optimization; global convergence; R-linear convergence
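As a concrete illustration of the line-search framework these papers study, here is a minimal steepest-descent driver with Armijo backtracking. This is not the paper's specific method, which uses a more refined direction and step-size rule; it only shows the overall iteration structure:

```python
def minimize_with_line_search(f, grad, x0, tol=1e-8, max_iter=500):
    """Minimal line-search method: steepest-descent direction plus Armijo
    backtracking.  Returns an approximate stationary point of f."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break                                  # gradient small: stop
        d = [-gi for gi in g]                      # descent direction
        g_d = sum(gi * di for gi, di in zip(g, d))
        alpha = 1.0
        # backtrack until the Armijo sufficient-decrease condition holds
        while alpha > 1e-12 and \
                f([xi + alpha * di for xi, di in zip(x, d)]) > f(x) + 1e-4 * alpha * g_d:
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
    return x
```

On the convex quadratic f(x) = (x0 - 1)^2 + x1^2 started at the origin, the iteration converges to the minimizer (1, 0).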
14. Several New Line Search Methods and Their Convergence
Authors: Zhenjun Shi, Kimberly Kendricks, Zhiwei Xu, Yongning Tang. American Journal of Operations Research, 2013, No. 5, pp. 421-430.
Abstract: In this paper, we propose several new line search rules for solving unconstrained minimization problems. These new rules extend the accepted range of step sizes beyond that of the corresponding original rules and give an adequate initial step size at each iteration. It is proved that the resulting line search algorithms are globally convergent under some mild conditions. It is also shown that the search direction plays an important role in line search methods, while the step-size rule mainly guarantees global convergence in general cases. The convergence rate of these methods is also investigated. Some numerical results show that the new line search algorithms are effective in practical computation.
Keywords: unconstrained minimization; line search method; global convergence; convergence rate
15. Global Convergence of a Modified Spectral CD Conjugate Gradient Method (cited 7 times)
Authors: Wei Cao, Kai Rong Wang, Yi Li Wang. Journal of Mathematical Research and Exposition, 2011, No. 2, pp. 261-268.
Abstract: In this paper, we present a new nonlinear modified spectral CD conjugate gradient method for solving large-scale unconstrained optimization problems. The direction generated by the method is a descent direction for the objective function, and this property depends neither on the line search rule nor on the convexity of the objective function. Moreover, the modified method reduces to the standard CD method when the line search is exact. Under some mild conditions, we prove that the modified method with the line search is globally convergent even if the objective function is nonconvex. Preliminary numerical results show that the proposed method is very promising.
Keywords: unconstrained optimization; conjugate gradient method; Armijo-type line search; global convergence
16. A Novel Cuckoo Search Algorithm and Its Application (cited 1 time)
Authors: Ping Liu, Shengjiang Zhang. Open Journal of Applied Sciences, 2021, No. 9, pp. 1071-1081.
Abstract: In this paper, the principle of the cuckoo search algorithm is introduced, and the traditional cuckoo algorithm is improved to establish a mathematical model of multi-objective optimization scheduling. Based on the improved algorithm, the model is optimized to a certain extent. Analysis shows that the improved algorithm has higher computational accuracy and can effectively improve global convergence.
Keywords: cuckoo search algorithm; feature selection; infrared spectrum; global convergence
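For context, the traditional cuckoo search that the paper improves upon combines Levy-flight perturbations around the best nest with random abandonment of a fraction pa of nests. A bare-bones sketch of the standard Yang-Deb scheme (not the paper's improved variant; parameter values are illustrative):

```python
import math
import random

def cuckoo_search(f, dim, bounds, n_nests=15, pa=0.25, iters=200, seed=0):
    """Minimize f over [lo, hi]^dim with a bare-bones cuckoo search:
    Levy-flight moves plus abandoning a fraction pa of nests per generation."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: min(max(v, lo), hi)
    # Mantegna's algorithm for Levy-distributed step lengths (beta = 1.5)
    beta = 1.5
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    levy = lambda: rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i in range(n_nests):
            # Levy flight biased toward the current best nest; keep if better
            trial = [clip(x + 0.01 * levy() * (x - b)) for x, b in zip(nests[i], best)]
            if f(trial) < f(nests[i]):
                nests[i] = trial
        # abandon a fraction pa of nests and rebuild them by a random difference move
        for i in range(n_nests):
            if rng.random() < pa:
                j, k = rng.randrange(n_nests), rng.randrange(n_nests)
                nests[i] = [clip(x + rng.random() * (a - b))
                            for x, a, b in zip(nests[i], nests[j], nests[k])]
        best = min(nests + [best], key=f)   # keep the elite solution
    return best
```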
17. Derivation and Global Convergence for Memoryless Non-quasi-Newton Method
Authors: Jiao Baocong, Yu Jingjing, Chen Lanping. Journal of Mathematical Research and Exposition, 2009, No. 3, pp. 423-433.
Abstract: In this paper, a new class of memoryless non-quasi-Newton methods for solving unconstrained optimization problems is proposed, and the global convergence of this method with inexact line search is proved. Furthermore, we propose a hybrid method that mixes the memoryless non-quasi-Newton method with the memoryless Perry-Shanno quasi-Newton method. The global convergence of this hybrid memoryless method is proved under mild assumptions. Initial results show that these new methods are efficient on the given test problems. In particular, the memoryless non-quasi-Newton method requires little storage and computation, so it can efficiently solve large-scale optimization problems.
Keywords: memoryless non-quasi-Newton method; Wolfe line search; global convergence
18. A NEW CONJUGATE GRADIENT METHOD AND ITS GLOBAL CONVERGENCE PROPERTIES (cited 2 times)
Authors: Li Zhengfeng, Chen Jing, Deng Naiyang (Division of Basic Sciences, China Agricultural University East Campus, Beijing 100083, China). Systems Science and Mathematical Sciences, 1998, No. 1, pp. 53-60.
Abstract: This paper presents a new conjugate gradient method for unconstrained optimization. The method reduces to the Polak-Ribiere-Polyak method when line searches are exact, but the two methods perform differently under inexact line search. By a simple example, we show that the Wolfe conditions do not ensure that the present method and the Polak-Ribiere-Polyak method will produce descent directions, even under the assumption that the objective function is strictly convex. This result contradicts the folk axiom that the Polak-Ribiere-Polyak method with the Wolfe line search should find the minimizer of a strictly convex objective function. Finally, we show that there are two ways to improve the new method so that it is globally convergent.
Keywords: conjugate gradient method; global convergence; unconstrained optimization; line searches
19. Global convergence of quasi-Newton methods for unconstrained optimization
Authors: Han Lixing, Liu Guanghui. Chinese Science Bulletin, 1996, No. 7, pp. 529-533.
Abstract: The convergence of quasi-Newton methods for unconstrained optimization has attracted much attention. Powell proved a global convergence result for the BFGS algorithm using an inexact line search that satisfies the Wolfe conditions. Byrd, Nocedal and Yuan extended this result to the convex Broyden class of quasi-Newton methods except the DFP method. However, the global convergence of the DFP method, the first quasi-Newton method, under the same line search strategy is still an open question (see ref. [2]).
Keywords: quasi-Newton methods; unconstrained optimization; DFP algorithm; global convergence
20. A Non-Monotone Trust Region Method with Non-Monotone Wolfe-Type Line Search Strategy for Unconstrained Optimization
Authors: Changyuan Li, Qinghua Zhou, Xiao Wu. Journal of Applied Mathematics and Physics, 2015, No. 6, pp. 707-712.
Abstract: In this paper, we propose and analyze a non-monotone trust region method with a non-monotone line search strategy for unconstrained optimization problems. Unlike the traditional non-monotone trust region method, our algorithm uses a non-monotone Wolfe line search to obtain the next iterate when a trial step is not accepted, which reduces the number of subproblems to be solved. Theoretical analysis shows that the proposed method is globally convergent under some mild conditions.
Keywords: unconstrained optimization; non-monotone trust region method; non-monotone line search; global convergence