Journal Articles
11 articles found
LIMITED MEMORY BFGS METHOD BY USING LINEAR INDEPENDENT SEARCH DIRECTIONS
1
Authors: 倪勤. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2001, Issue 2, pp. 236-239.
The degree of numerical linear independence is proposed and discussed. Based on this linear independence theory, a modified limited memory BFGS method is developed. Similar to the standard limited memory method, this new method determines the new update by applying the updating formula m times to an initial positive diagonal matrix, using the m previous pairs of the change in iterate and gradient. Besides the most recent pair of the change, which guarantees quadratic termination, the choice of the other (m-1) pairs in the new method depends on the degree of numerical linear independence of previous search directions. In addition, the numerical linear independence theory is further discussed and the computation of the degree of linear independence is simplified. Theoretical and numerical results show that this modified method efficiently improves on the standard limited memory method.
Keywords: unconstrained optimization; limited memory method; BFGS method; degree of linear independence
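The limited memory update described above, applying the BFGS formula m times to an initial positive diagonal matrix using the m stored pairs, is usually implemented with the two-loop recursion. A minimal sketch in Python (the function name and NumPy usage are illustrative, not from the paper):

```python
import numpy as np

def lbfgs_direction(g, pairs, gamma=1.0):
    """Compute the L-BFGS search direction -H_k g by the two-loop
    recursion: the BFGS updating formula is applied implicitly, once
    per stored pair (s_i, y_i) of iterate and gradient changes, to the
    initial diagonal matrix gamma * I. `pairs` is ordered oldest first."""
    alphas, rhos = [], []
    q = g.astype(float).copy()
    for s, y in reversed(pairs):          # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        rhos.append(rho)
        alphas.append(a)
    r = gamma * q                         # apply the initial diagonal matrix
    for (s, y), rho, a in zip(pairs, reversed(rhos), reversed(alphas)):
        b = rho * (y @ r)
        r += (a - b) * s
    return -r
```

With a single stored pair the recursion reproduces one explicit BFGS update of the identity, which is an easy way to check an implementation.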
BFGS Method for Linear Programming
2
Authors: A. Amir, A. Yassine. Journal of Mathematics and System Science, 2015, Issue 12, pp. 537-543.
In this paper we propose an algorithm based on the BFGS quasi-Newton method to solve a linear program. The choice of this method is justified by its theoretical efficiency, the ease of determining a descent direction, and its fast convergence towards an optimal solution. Our proposed method is compared with Newton's method for linear programming, named lpnew, which is widely used as an optimization algorithm for classification problems.
Keywords: linear programming; BFGS method; least norm solution; exterior penalty
LIMITED MEMORY BFGS METHOD FOR NONLINEAR MONOTONE EQUATIONS (Cited: 4)
3
Authors: Weijun Zhou, Donghui Li. Journal of Computational Mathematics (SCIE, EI, CSCD), 2007, Issue 1, pp. 89-96.
In this paper, we propose an algorithm for solving nonlinear monotone equations by combining the limited memory BFGS method (L-BFGS) with a projection method. We show that the method is globally convergent if the equation involves a Lipschitz continuous monotone function. We also present some preliminary numerical results.
Keywords: limited memory BFGS method; monotone function; projection method
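The projection step that globalizes the L-BFGS direction for monotone equations can be sketched as follows: once a line search finds a trial point z off the solution set, the current iterate is projected onto a separating hyperplane. This is the standard hyperplane projection used in projection methods for monotone equations, not necessarily the paper's exact formulation, and the function name is illustrative:

```python
import numpy as np

def hyperplane_projection(x, z, Fz):
    """Project the iterate x onto the hyperplane {u : Fz @ (u - z) = 0}.
    For a monotone mapping F with Fz = F(z), every solution x* satisfies
    Fz @ (x* - z) <= 0, so this hyperplane separates x from the solution
    set and the projection moves x closer to it."""
    beta = (Fz @ (x - z)) / (Fz @ Fz)
    return x - beta * Fz
```

The projected point always lies on the hyperplane, which makes the step easy to verify numerically.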
THE CONVERGENCE OF A NEW MODIFIED BFGS METHOD WITHOUT LINE SEARCHES FOR UNCONSTRAINED OPTIMIZATION OR COMPLEXITY SYSTEMS
4
Authors: Liying LIU, Zengxin WEI, Xiaoping WU. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2010, Issue 4, pp. 861-872.
In this paper, a new modified BFGS method without line searches is proposed. Unlike the traditional BFGS method, this modified BFGS method is based on the so-called fixed steplength strategy introduced by Sun and Zhang. Under some suitable assumptions, the global convergence and the superlinear convergence of the new algorithm are established. Some preliminary numerical experiments, which show that the new algorithm is feasible, are also reported.
Keywords: BFGS method; complexity systems; global convergence; superlinear convergence; unconstrained optimization
Global Convergence of a Modified Limited Memory BFGS Method for Non-convex Minimization
5
Authors: Yun-hai XIAO, Ting-feng, Zeng-xin WEI. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2013, Issue 3, pp. 555-566.
In this paper, a modified limited memory BFGS method for solving large-scale unconstrained optimization problems is proposed. A remarkable feature of the proposed method is that it possesses a global convergence property without any convexity assumption on the objective function. Under some suitable conditions, the global convergence of the proposed method is proved. Some numerical results are reported which illustrate that the proposed method is efficient.
Keywords: non-convex minimization; secant equation; limited memory BFGS method; global convergence
Improved hybrid iterative optimization method for seismic full waveform inversion
6
Authors: 王义, 董良国, 刘玉柱. Applied Geophysics (SCIE, CSCD), 2013, Issue 3, pp. 265-277, 357-358.
In full waveform inversion (FWI), Hessian information of the misfit function is of vital importance for accelerating the convergence of the inversion; however, it is usually not feasible to directly calculate the Hessian matrix and its inverse. Although the limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) and Hessian-free inexact Newton (HFN) methods are able to use approximate Hessian information, the information they collect is limited. The two methods can be interlaced because they are able to provide Hessian information for each other; however, the performance of the hybrid iterative method depends on an effective switch between the two methods. We have designed a new scheme to realize a dynamic switch between the two methods based on the decrease ratio (DR) of the misfit (objective) function, and we propose a modified hybrid iterative optimization method. In the new scheme, we compare the DR of the two methods for a given computational cost and choose the method with the faster DR, so the modified method always runs the more efficient of the two. Tests on the Marmousi and overthrust models indicate that convergence with our modified method is significantly faster than with the L-BFGS method, with no loss of inversion quality. Moreover, our modified method slightly outperforms the enriched method in convergence speed, and it is also more efficient than the HFN method.
Keywords: full waveform inversion; Hessian information; limited memory BFGS method; Hessian-free inexact Newton method; decrease ratio
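The dynamic switch, comparing the decrease ratio of the misfit for a given computational cost and keeping whichever of L-BFGS and HFN is currently faster, can be sketched as below. The abstract does not give the paper's exact DR formula, so the cost-normalized decrease used here is an assumption:

```python
def choose_method(f0, f_lbfgs, cost_lbfgs, f_hfn, cost_hfn):
    """Pick the method with the larger decrease ratio (DR) of the
    misfit function per unit computational cost. Normalizing the
    decrease by cost is a hypothetical concrete choice, not from
    the paper; f0 is the misfit before the trial steps."""
    dr_lbfgs = (f0 - f_lbfgs) / cost_lbfgs
    dr_hfn = (f0 - f_hfn) / cost_hfn
    return "L-BFGS" if dr_lbfgs >= dr_hfn else "HFN"
```

In an actual hybrid loop this comparison would be re-evaluated periodically, so the scheme can switch back when the other method becomes faster.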
BFGS quasi-Newton location algorithm using TDOAs and GROAs (Cited: 6)
7
Authors: Benjian Hao, Zan Li. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2013, Issue 3, pp. 341-348.
With the emergence of location-based applications in various fields, higher positioning accuracy is demanded. By utilizing the time differences of arrival (TDOAs) and gain ratios of arrival (GROAs), an efficient algorithm for estimating the position is proposed, which exploits the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method to solve the nonlinear equations at the source location under additive measurement error. Although the two-step weighted least squares (WLS) method based on TDOAs and GROAs is very accurate, it has a high computational complexity. The proposed approach achieves the same accuracy and bias with lower computational complexity when the signal-to-noise ratio (SNR) is high, and it achieves better accuracy and smaller bias at a lower SNR. The proposed algorithm can be applied in real environments due to its real-time property and good robustness. Simulation results show that, with a good initial guess, the proposed estimator converges to the true solution and attains the Cramer-Rao lower bound (CRLB) accuracy for both near-field and far-field sources.
Keywords: time difference of arrival (TDOA); gain ratio of arrival (GROA); source location; Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method
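The core of any BFGS-based solver, including the localization algorithm above, is the rank-two update of the inverse Hessian approximation. A minimal sketch of the generic update (not the paper's TDOA/GROA-specific equations):

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of the inverse Hessian approximation H,
    where s is the step x_{k+1} - x_k and y is the gradient change
    g_{k+1} - g_k. Requires the curvature condition y @ s > 0 for
    the update to preserve positive definiteness."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(y, s)
    return V.T @ H @ V + rho * np.outer(s, s)
```

A quick sanity check: the updated matrix satisfies the secant condition H_new @ y == s exactly.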
The Superlinear Convergence Analysis of a Nonmonotone BFGS Algorithm on Convex Objective Functions (Cited: 15)
8
Authors: Gong Lin YUAN, Zeng Xin WEI. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2008, Issue 1, pp. 35-42.
We prove the superlinear convergence of a nonmonotone BFGS algorithm on convex objective functions under suitable conditions.
Keywords: BFGS method; superlinear convergence; nonmonotone line search
CONVERGENCE PROPERTIES OF A MODIFIED BFGS ALGORITHM FOR MINIMIZATION WITH ARMIJO-GOLDSTEIN STEPLENGTHS (Cited: 2)
9
Authors: Nai-yang Deng, Zheng-feng Li (Department of Mathematics, Beijing Agricultural Engineering University, Beijing 100083, China). Journal of Computational Mathematics (SCIE, CSCD), 1999, Issue 6, pp. 645-652.
The line search strategy is crucial for an efficient unconstrained optimization algorithm. One of the reasons the Wolfe line search is recommended is that it ensures positive definiteness of BFGS updates. When gradient information is costly to obtain, the Armijo-Goldstein line search may be preferred. To maintain positive definiteness of BFGS updates under the Armijo-Goldstein line search, a slightly modified form of the BFGS update was proposed by I.D. Coope and C.J. Price (Journal of Computational Mathematics, 13 (1995), 156-160), but its convergence properties have remained open until now. This paper shows that the modified BFGS algorithm is globally and superlinearly convergent with Armijo-Goldstein line searches.
Keywords: BFGS methods; convergence; superlinear convergence
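The Armijo-Goldstein conditions discussed above bracket the steplength: the new function value must lie between two linear decrease bounds. A minimal acceptance test, with constant c in (0, 1/2); the function name and signature are illustrative:

```python
def armijo_goldstein_ok(f, x, d, gTd, alpha, c=0.25):
    """Armijo-Goldstein test for steplength alpha along a descent
    direction d, where gTd = grad(f)(x) @ d < 0. Accepts alpha iff
        f(x) + (1 - c) * alpha * gTd <= f(x + alpha * d)
                                     <= f(x) + c * alpha * gTd,
    i.e. the decrease is sufficient (upper bound) but the step is
    not too short (lower bound)."""
    fx, fnew = f(x), f(x + alpha * d)
    return (fx + (1.0 - c) * alpha * gTd <= fnew
            <= fx + c * alpha * gTd)
```

For f(t) = t^2 at x = 1 with d = -1, a moderate step like alpha = 0.5 is accepted while an overshooting alpha = 2 violates the sufficient-decrease bound.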
The Global Convergence of Self-Scaling BFGS Algorithm with Nonmonotone Line Search for Unconstrained Nonconvex Optimization Problems
10
Authors: Hong Xia YIN, Dong Lei DU. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2007, Issue 7, pp. 1233-1240.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a weaker condition than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
Keywords: nonmonotone line search; self-scaling BFGS method; global convergence
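The self-scaling step, rescaling the approximation before each update to damp overly large eigenvalues, can be sketched on the inverse Hessian approximation with the classical Oren-Luenberger factor. The abstract does not specify the paper's scaling, so this particular factor is an assumption:

```python
import numpy as np

def self_scaling_bfgs_update(H, s, y):
    """Rescale the inverse Hessian approximation H by the
    Oren-Luenberger factor tau = (s @ y) / (y @ H @ y) before
    applying the standard BFGS update; s is the step and y the
    gradient change, with y @ s > 0 assumed."""
    tau = (s @ y) / (y @ H @ y)
    H = tau * H
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(y, s)
    return V.T @ H @ V + rho * np.outer(s, s)
```

Note that the scaling does not disturb the secant condition: the updated matrix still maps y to s exactly.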
A PERTURBED QUASI-NEWTON ALGORITHM FOR BOUND-CONSTRAINED GLOBAL OPTIMIZATION
11
Authors: Raouf Ziadi, Abdelatif Bencherif-Madani. Journal of Computational Mathematics, 2025, Issue 1, pp. 143-173.
This paper presents a stochastic modification of a limited memory BFGS method to solve bound-constrained global minimization problems with a differentiable cost function with no further smoothness. The approach is a stochastic descent method in which the deterministic sequence generated by a limited memory BFGS method is replaced by a sequence of random variables. To enhance the performance of the proposed algorithm and make sure the perturbations lie within the feasible domain, we have developed a novel perturbation technique based on truncating a multivariate double exponential distribution to deal with bound-constrained problems; the theoretical study and the simulation of the developed truncated distribution are also presented. Theoretical results ensure that the proposed method converges almost surely to the global minimum. The performance of the algorithm is demonstrated through numerical experiments on some typical test functions as well as on some further engineering problems. The numerical comparisons with stochastic and meta-heuristic methods indicate that the suggested algorithm is promising.
Keywords: global optimization; limited memory BFGS method; stochastic perturbation; truncated multivariate double exponential distribution
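The perturbation idea, drawing random steps from a double exponential (Laplace) distribution truncated to the feasible box, can be sketched per coordinate by inverse-CDF sampling. This is a one-dimensional sketch under an assumed parameterization, not the paper's multivariate construction:

```python
import numpy as np

def laplace_cdf(x, mu, b):
    """CDF of the Laplace (double exponential) distribution."""
    z = (x - mu) / b
    return np.where(z < 0, 0.5 * np.exp(z), 1.0 - 0.5 * np.exp(-z))

def laplace_ppf(p, mu, b):
    """Inverse CDF (quantile function) of the Laplace distribution."""
    q = p - 0.5
    return mu - b * np.sign(q) * np.log(1.0 - 2.0 * np.abs(q))

def truncated_laplace(rng, mu, b, lo, hi, size):
    """Sample Laplace(mu, b) truncated to [lo, hi]: draw uniforms on
    [F(lo), F(hi)] and map them back through the inverse CDF, so every
    sample respects the bound constraints by construction."""
    u = rng.uniform(laplace_cdf(lo, mu, b), laplace_cdf(hi, mu, b), size)
    return laplace_ppf(u, mu, b)
```

In a bound-constrained setting, mu would be the current iterate's coordinate and [lo, hi] the corresponding box bounds, so every perturbed point stays feasible.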