Journal Articles
448 articles found
1. Global Convergence of Curve Search Methods for Unconstrained Optimization
Authors: Zhiwei Xu, Yongning Tang, Zhen-Jun Shi. Applied Mathematics, 2016, Issue 7, pp. 721-735 (15 pages).
Abstract: In this paper we propose a new family of curve search methods for unconstrained optimization problems, which search for a new iterate along a curve through the current iterate at each iteration, whereas line search methods find the new iterate on a line starting from the current iterate. The global convergence and linear convergence rate of these curve search methods are investigated under mild conditions. Numerical results show that some curve search methods are stable and effective in solving large-scale minimization problems.
Keywords: unconstrained optimization, curve search method, global convergence, convergence rate
2. GLOBAL CONVERGENCE OF THE NON-QUASI-NEWTON METHOD FOR UNCONSTRAINED OPTIMIZATION PROBLEMS (Cited: 6)
Authors: Liu Hongwei, Wang Mingjie, Li Jinshan, Zhang Xiangsun. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2006, Issue 3, pp. 276-288 (13 pages).
Abstract: In this paper, the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm with either Wolfe-type or Armijo-type line search converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
Keywords: non-quasi-Newton method, inexact line search, global convergence, unconstrained optimization, superlinear convergence
3. Global Convergence of an Extended Descent Algorithm without Line Search for Unconstrained Optimization (Cited: 1)
Authors: Cuiling Chen, Liling Luo, Caihong Han, Yu Chen. Journal of Applied Mathematics and Physics, 2018, Issue 1, pp. 130-137 (8 pages).
Abstract: In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Further, we generalize the search direction to a more general form and obtain the global convergence of the corresponding algorithm. Numerical results illustrate that the new algorithm is effective.
Keywords: unconstrained optimization, descent method, line search, global convergence
4. New Variants of Newton's Method for Nonlinear Unconstrained Optimization Problems
Authors: V. Kanwar, Kapil K. Sharma, Ramandeep Behl. Intelligent Information Management, 2010, Issue 1, pp. 40-45 (6 pages).
Abstract: In this paper, we propose new variants of Newton's method based on quadrature formulas and the power mean for solving nonlinear unconstrained optimization problems. It is proved that the order of convergence of the proposed family is three. Numerical comparisons are made to show the performance of the presented methods. Furthermore, numerical experiments demonstrate that the logarithmic-mean Newton's method outperforms the classical Newton's method and other variants. MSC: 65H05.
Keywords: unconstrained optimization, Newton's method, order of convergence, power means, initial guess
5. A New Nonlinear Conjugate Gradient Method for Unconstrained Optimization Problems (Cited: 1)
Authors: Liu Jin-kui, Wang Kai-rong, Song Xiao-qian, Du Xiang-lin. Chinese Quarterly Journal of Mathematics (CSCD), 2010, Issue 3, pp. 444-450 (7 pages).
Abstract: In this paper, an efficient conjugate gradient method is given for solving general unconstrained optimization problems, which guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation.
Keywords: unconstrained optimization, conjugate gradient method, strong Wolfe line search, sufficient descent property, global convergence
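Since the entry above benchmarks against the PRP+ method, a minimal sketch of a PRP+ nonlinear conjugate gradient iteration may be useful context. Hedges: the paper's theory uses the strong Wolfe line search, whereas this sketch substitutes simple Armijo backtracking (with a steepest-descent fallback) for brevity; the function names and test problem are assumptions, not from the paper.

```python
import numpy as np

def prp_plus_cg(f, grad, x0, tol=1e-8, max_iter=2000):
    """PRP+ nonlinear conjugate gradient sketch.
    beta is truncated at max(beta_PRP, 0) -- the '+' safeguard."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:
            d = -g  # safeguard: fall back to steepest descent
        # Armijo backtracking (stand-in for the strong Wolfe search)
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)  # PRP+ coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Hypothetical test problem: f(x, y) = (x - 3)^2 + 5 y^2, minimum at (3, 0)
f = lambda v: (v[0] - 3) ** 2 + 5 * v[1] ** 2
grad = lambda v: np.array([2 * (v[0] - 3), 10 * v[1]])
x_star = prp_plus_cg(f, grad, [0.0, 1.0])
```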
6. An Improved Quasi-Newton Method for Unconstrained Optimization
Authors: Fei Pusheng, Chen Zhong (Department of Mathematics, Wuhan University, Wuhan 430072, China). Wuhan University Journal of Natural Sciences (CAS), 1996, Issue 1, pp. 35-37 (3 pages).
Abstract: We present an improved quasi-Newton method. Assuming that the objective function is twice continuously differentiable and uniformly convex, we discuss the global and superlinear convergence of the improved quasi-Newton method.
Keywords: quasi-Newton method, superlinear convergence, unconstrained optimization
7. A New Two-Parameter Family of Nonlinear Conjugate Gradient Methods Without Line Search for Unconstrained Optimization Problems
Author: Zhu Tiefeng. Wuhan University Journal of Natural Sciences (CAS), 2024, Issue 5, pp. 403-411 (9 pages).
Abstract: This paper puts forward a two-parameter family of nonlinear conjugate gradient (CG) methods without line search for solving unconstrained optimization problems. The main feature of these methods is that they do not rely on any line search: a simple step-size formula suffices, and a sufficient descent direction is always generated. Under certain assumptions, the proposed method is proved to possess global convergence. Finally, our method is compared with other potential methods. A large number of numerical experiments show that our method is competitive and effective.
Keywords: unconstrained optimization, conjugate gradient method without line search, global convergence
8. Modified LS Method for Unconstrained Optimization
Authors: Jinkui Liu, Li Zheng. Applied Mathematics, 2011, Issue 6, pp. 779-782 (4 pages).
Abstract: In this paper, a new conjugate gradient formula and its algorithm for solving unconstrained optimization problems are proposed. The given formula satisfies the descent condition. Under the Grippo-Lucidi line search, the global convergence property of the given method is discussed. Numerical results show that the new method is efficient on the given test problems.
Keywords: unconstrained optimization, conjugate gradient method, Grippo-Lucidi line search, global convergence
9. A New Descent Nonlinear Conjugate Gradient Method for Unconstrained Optimization
Authors: Hao Fan, Zhibin Zhu, Anwa Zhou. Applied Mathematics, 2011, Issue 9, pp. 1119-1123 (5 pages).
Abstract: In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a step-length technique that ensures the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve it and carry out further analysis.
Keywords: large-scale unconstrained optimization, conjugate gradient method, sufficient descent property, global convergence
10. A Non-Monotone Trust Region Method with Non-Monotone Wolfe-Type Line Search Strategy for Unconstrained Optimization
Authors: Changyuan Li, Qinghua Zhou, Xiao Wu. Journal of Applied Mathematics and Physics, 2015, Issue 6, pp. 707-712 (6 pages).
Abstract: In this paper, we propose and analyze a non-monotone trust region method with a non-monotone line search strategy for unconstrained optimization problems. Unlike the traditional non-monotone trust region method, our algorithm uses a non-monotone Wolfe line search to obtain the next point when a trial step is not accepted, which reduces the number of subproblems to be solved. Theoretical analysis shows that the proposed method is globally convergent under mild conditions.
Keywords: unconstrained optimization, non-monotone trust region method, non-monotone line search, global convergence
11. Higher Order Iteration Schemes for Unconstrained Optimization
Authors: Yangyang Shi, Pingqi Pan. American Journal of Operations Research, 2011, Issue 3, pp. 73-83 (11 pages).
Abstract: Using a predictor-corrector tactic, this paper derives new iteration schemes for unconstrained optimization. Each iteration yields a point (the predictor) by some line search from the current point; with these two points it constructs a quadratic interpolation curve to approximate an ODE trajectory; it finally determines a new point (the corrector) by searching along the quadratic curve. In particular, this paper gives a global convergence analysis for schemes associated with quasi-Newton updates. In our computational experiments, the new schemes using DFP and BFGS updates outperformed their conventional counterparts on a set of standard test problems.
Keywords: unconstrained optimization, iteration scheme, ODE method, quasi-Newton update, convergence analysis
12. THE CONVERGENCE OF A NEW MODIFIED BFGS METHOD WITHOUT LINE SEARCHES FOR UNCONSTRAINED OPTIMIZATION OR COMPLEXITY SYSTEMS
Authors: Liying Liu, Zengxin Wei, Xiaoping Wu. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2010, Issue 4, pp. 861-872 (12 pages).
Abstract: In this paper, a new modified BFGS method without line searches is proposed. Unlike the traditional BFGS method, this modified BFGS method is based on the so-called fixed-step-length strategy introduced by Sun and Zhang. Under suitable assumptions, the global convergence and the superlinear convergence of the new algorithm are established. Some preliminary numerical experiments, which show that the new algorithm is feasible, are also reported.
Keywords: BFGS method, complexity systems, global convergence, superlinear convergence, unconstrained optimization
13. Cyclic Gradient Methods for Unconstrained Optimization
Authors: Ya Zhang, Cong Sun. Journal of the Operations Research Society of China (EI, CSCD), 2024, Issue 3, pp. 809-828 (20 pages).
Abstract: The gradient method is popular for solving large-scale problems. In this work, the cyclic gradient methods for quadratic function minimization are extended to general smooth unconstrained optimization problems. Combined with a nonmonotonic line search, we prove their global convergence. Furthermore, the proposed algorithms have a sublinear convergence rate for general convex functions and an R-linear convergence rate for strongly convex problems. Numerical experiments show that the proposed methods are effective compared to the state of the art.
Keywords: gradient method, unconstrained optimization, nonmonotonic line search, global convergence
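A nonmonotonic line search of the kind mentioned in the entry above (in the style of Grippo, Lampariello, and Lucidi) accepts a step when sufficient decrease holds relative to the maximum of the last few function values rather than the current one, which tolerates occasional increases. A hedged sketch using plain steepest-descent directions (the cyclic gradient directions of the paper are not reproduced here; all names, parameters, and the test problem are assumptions):

```python
import numpy as np
from collections import deque

def nonmonotone_gradient(f, grad, x0, memory=10, tol=1e-8, max_iter=1000):
    """Gradient descent with a GLL-style nonmonotone Armijo test:
    f(x + t d) <= max(last `memory` values) + c * t * g'd."""
    x = np.asarray(x0, dtype=float)
    hist = deque([f(x)], maxlen=memory)  # sliding window of recent f-values
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        t, f_ref = 1.0, max(hist)  # reference value for the nonmonotone test
        while f(x + t * d) > f_ref + 1e-4 * t * g.dot(d):
            t *= 0.5
        x = x + t * d
        hist.append(f(x))
    return x

# Hypothetical test problem: f(x, y) = x^2 + 4 y^2, minimum at the origin
f = lambda v: v[0] ** 2 + 4 * v[1] ** 2
grad = lambda v: np.array([2 * v[0], 8 * v[1]])
x_star = nonmonotone_gradient(f, grad, [3.0, 1.0])
```

With `memory=1` the test reduces to the ordinary monotone Armijo condition, so the window length controls how nonmonotone the accepted iterates may be.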
14. Global convergence of quasi-Newton methods for unconstrained optimization
Authors: 韩立兴, 刘光辉. Chinese Science Bulletin (SCIE, EI, CAS), 1996, Issue 7, pp. 529-533 (5 pages).
Abstract: The convergence of quasi-Newton methods for unconstrained optimization has attracted much attention. Powell proved a global convergence result for the BFGS algorithm using an inexact line search which satisfies the Wolfe conditions. Byrd, Nocedal and Yuan extended this result to the convex Broyden class of quasi-Newton methods except the DFP method. However, the global convergence of the DFP method, the first quasi-Newton method, using the same line search strategy is still an open question (see ref. [2]).
Keywords: quasi-Newton methods, unconstrained optimization, DFP algorithm, global convergence
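The BFGS algorithm discussed in this entry (and several others in the list) maintains an inverse-Hessian approximation via a rank-two update, H+ = (I - rho s y')H(I - rho y s') + rho s s' with rho = 1/(y's). A minimal sketch follows; note the hedges: it substitutes Armijo backtracking for the Wolfe line search assumed by the convergence theory (skipping the update when the curvature condition y's > 0 fails), and all names and the Rosenbrock test setup are assumptions for illustration.

```python
import numpy as np

def bfgs_minimize(f, grad, x0, tol=1e-8, max_iter=500):
    """BFGS sketch with an inverse-Hessian approximation H."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g  # quasi-Newton direction; H SPD => descent
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d):
            t *= 0.5  # Armijo backtracking (stand-in for Wolfe)
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        ys = y.dot(s)
        if ys > 1e-12:  # curvature condition; skip update otherwise
            rho = 1.0 / ys
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Rosenbrock function, minimum at (1, 1)
f = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
grad = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0] ** 2),
                           200 * (v[1] - v[0] ** 2)])
x_star = bfgs_minimize(f, grad, [-1.2, 1.0])
```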
15. GLOBAL CONVERGENCE OF A CLASS OF OPTIMALLY CONDITIONED SSVM METHODS
Authors: 杨正方, 夏爱生, 韩立兴, 刘光辉. Transactions of Tianjin University (EI, CAS), 1997, Issue 1, pp. 73-76 (4 pages).
Abstract: This paper explores the convergence of a class of optimally conditioned self-scaling variable metric (OCSSVM) methods for unconstrained optimization. We show that this class of methods with Wolfe line search is globally convergent for general convex functions.
Keywords: optimally conditioned self-scaling variable metric methods, global convergence, unconstrained optimization
16. A NEW FAMILY OF TRUST REGION ALGORITHMS FOR UNCONSTRAINED OPTIMIZATION (Cited: 5)
Authors: Yuhong Dai, Dachuan Xu (State Key Laboratory of Scientific/Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and System Sciences, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100080, China). Journal of Computational Mathematics (SCIE, CSCD), 2003, Issue 2, pp. 221-228 (8 pages).
Abstract: Trust region (TR) algorithms are a class of recently developed algorithms for nonlinear optimization. A new family of TR algorithms for unconstrained optimization, which extends the usual TR method, is presented in this paper. When the objective function is bounded below and continuously differentiable, and the norm of the Hessian approximations increases at most linearly with the iteration number, we prove the global convergence of the algorithms. Limited numerical results are reported, which indicate that our new TR algorithm is competitive.
Keywords: trust region method, global convergence, quasi-Newton method, unconstrained optimization, nonlinear programming
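The usual trust region framework that this entry extends can be illustrated with a textbook skeleton: minimize a quadratic model within a radius (here via the Cauchy point, the model minimizer along the negative gradient inside the region), then compare actual versus predicted reduction to accept the step and adjust the radius. This is a generic sketch, not the new family of the paper; all identifiers, thresholds, and the test problem are assumptions.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimizer of the quadratic model along -g within radius delta."""
    gBg = g @ B @ g
    gn = np.linalg.norm(g)
    tau = 1.0 if gBg <= 0 else min(gn ** 3 / (delta * gBg), 1.0)
    return -tau * (delta / gn) * g

def trust_region(f, grad, hess, x0, delta0=1.0, tol=1e-8, max_iter=200):
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess(x)
        p = cauchy_point(g, B, delta)
        pred = -(g @ p + 0.5 * p @ B @ p)  # predicted model reduction
        ared = f(x) - f(x + p)             # actual reduction
        rho = ared / pred if pred > 0 else -1.0
        if rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta *= 2.0   # good step that hit the boundary: expand
        elif rho < 0.25:
            delta *= 0.25  # poor model agreement: shrink
        if rho > 0.1:
            x = x + p      # accept the trial step
    return x

# Hypothetical test problem: f(x, y) = (x - 2)^2 + (y + 1)^2, minimum at (2, -1)
f = lambda v: (v[0] - 2) ** 2 + (v[1] + 1) ** 2
grad = lambda v: np.array([2 * (v[0] - 2), 2 * (v[1] + 1)])
hess = lambda v: 2.0 * np.eye(2)
x_star = trust_region(f, grad, hess, [0.0, 0.0])
```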
17. An improved trust region method for unconstrained optimization (Cited: 5)
Authors: Zhou Qinghua, Zhang Yarui, Xu Fengxia, Geng Yan, Sun Xiaodian. Science China Mathematics (SCIE), 2013, Issue 2, pp. 425-434 (10 pages).
Abstract: In this paper, we propose an improved trust region method for solving unconstrained optimization problems. Unlike traditional trust region methods, when the current iterate lies on the boundary set, our algorithm solves the subproblem not within the trust region centered at the current iterate, but within an improved region centered at a point located in the direction of the negative gradient. We prove the global convergence properties of the new improved trust region algorithm and give computational results which demonstrate the effectiveness of our algorithm.
Keywords: unconstrained optimization, trust region methods, global convergence, negative gradient direction, iterative
18. A New Restarting Adaptive Trust-Region Method for Unconstrained Optimization (Cited: 1)
Authors: Morteza Kimiaei, Susan Ghaderi. Journal of the Operations Research Society of China (EI, CSCD), 2017, Issue 4, pp. 487-507 (21 pages).
Abstract: In this paper, we present a new adaptive trust-region method for solving nonlinear unconstrained optimization problems. More precisely, the trust-region radius is based on a nonmonotone technique and uses an adaptively chosen approximation of the Hessian. We produce a suitable trust-region radius; preserve global convergence to first-order critical points under classical assumptions; and improve the practical performance of the new algorithm compared with other existing variants. Moreover, the quadratic convergence rate is established under suitable conditions. Computational results on the CUTEst test collection of unconstrained problems are presented to show the effectiveness of the proposed algorithm compared with some existing methods.
Keywords: unconstrained optimization, trust-region methods, nonmonotone technique, adaptive radius, theoretical convergence
19. A New Class of Nonlinear Conjugate Gradient Methods with Global Convergence Properties (Cited: 1)
Author: 陈忠. Journal of Yangtze University (Natural Science Edition) (CAS), 2014, Issue 3, pp. I0001-I0003 (3 pages).
Abstract: Nonlinear conjugate gradient methods, thanks to their simple iterations, low storage requirements, and search directions that need not satisfy the secant condition, play a very important role in solving large-scale unconstrained optimization problems. A new class of conjugate gradient methods is proposed whose search directions are descent directions of the objective function. Assuming the objective function is continuously differentiable with a gradient satisfying the Lipschitz condition and a line search satisfying the Wolfe conditions, the global convergence of the proposed algorithm is discussed.
20. The global convergence of the non-quasi-Newton methods with non-monotone line search
Authors: 焦宝聪, 刘洪伟. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2006, Issue 6, pp. 758-762 (5 pages).
Abstract: The non-quasi-Newton methods for unconstrained optimization are investigated. A non-monotone line search procedure is introduced and combined with the non-quasi-Newton family. Under the uniform convexity assumption on the objective function, the global convergence of the non-quasi-Newton family is proved. Numerical experiments show that the non-monotone line search is more effective.
Keywords: non-quasi-Newton method, non-monotone line search, global convergence, unconstrained optimization