Abstract: In this paper, the non-quasi-Newton family of methods with inexact line search, applied to unconstrained optimization problems, is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm, with either a Wolfe-type or an Armijo-type line search, converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
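The paper's specific non-quasi-Newton update formula is not reproduced in the abstract. As a minimal sketch of the setting it studies, the loop below pairs a Wolfe-type inexact line search (SciPy's line_search) with the standard BFGS inverse update as a stand-in for the proposed formula; the tolerances and the fallback step are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import line_search  # inexact search enforcing the Wolfe conditions

def quasi_newton(f, grad, x0, max_iter=200, tol=1e-8):
    """Generic quasi-Newton loop with a Wolfe-type inexact line search.

    H approximates the inverse Hessian. The standard BFGS formula below is a
    placeholder: the paper studies a different (non-quasi-Newton) update
    inside this same loop structure.
    """
    n = x0.size
    x, H = np.asarray(x0, dtype=float), np.eye(n)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                              # quasi-Newton search direction
        alpha = line_search(f, grad, x, p)[0]   # Wolfe-type step length
        if alpha is None:                       # search failed: tiny fallback step
            alpha = 1e-4
        s = alpha * p
        x_new = x + s
        y = grad(x_new) - g
        if s @ y > 1e-12:                       # curvature condition keeps H positive definite
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x = x_new
    return x
```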
Abstract: We present an improved quasi-Newton method. Assuming that the objective function is twice continuously differentiable and uniformly convex, we establish the global and superlinear convergence of the improved method.
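For concreteness, a strongly convex quadratic meets both assumptions (it is twice continuously differentiable and uniformly convex, since its Hessian A is constant and positive definite). A quick check using the illustrative quasi_newton sketch above:

```python
import numpy as np

# f(x) = 0.5 x'Ax - b'x with A symmetric positive definite is twice
# continuously differentiable and uniformly convex, as assumed here.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x_star = quasi_newton(f, grad, np.zeros(2))    # sketch from above
print(np.allclose(A @ x_star, b, atol=1e-6))   # True: A x* = b at the minimizer
```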
Funding: This work is supported by the Key Project of the National Natural Science Foundation of China (grant 10231060).
Abstract: In this paper, a switching method for unconstrained minimization is proposed. The method is based on the modified BFGS method and the modified SR1 method. The eigenvalues and condition numbers of both modified updates are evaluated and used in the switching rule: when the condition number of the modified SR1 update is better than that of the modified BFGS update, the step in the proposed quasi-Newton method is the modified SR1 step; otherwise it is the modified BFGS step. The efficiency of the proposed method is tested by numerical experiments on small-, medium-, and large-scale optimization problems, and the results are reported and analyzed to show the superiority of the proposed method.
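The modified BFGS and SR1 formulas themselves are not given in the abstract. The sketch below illustrates the switching idea with the standard (unmodified) updates of a Hessian approximation B, choosing whichever candidate is better conditioned; the curvature safeguard and the SR1 skipping threshold are conventional assumptions, not the paper's rule.

```python
import numpy as np

def switching_update(B, s, y, eps=1e-8):
    """Pick between BFGS and SR1 updates of the Hessian approximation B by
    comparing candidate condition numbers (illustrative stand-in for the
    paper's rule, which uses *modified* BFGS/SR1 formulas).

    Assumes the curvature condition s'y > 0 holds, so BFGS is well defined.
    """
    Bs = B @ s
    # Standard BFGS update of B.
    B_bfgs = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
    # Standard SR1 update, skipped when its denominator is tiny (customary safeguard).
    r = y - Bs
    B_sr1 = None
    if abs(r @ s) > eps * np.linalg.norm(r) * np.linalg.norm(s):
        B_sr1 = B + np.outer(r, r) / (r @ s)
    # Switching rule: take the better-conditioned candidate.
    if B_sr1 is not None and np.linalg.cond(B_sr1) < np.linalg.cond(B_bfgs):
        return B_sr1
    return B_bfgs
```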
Abstract: In this paper, an optimal self-scaling strategy for the modified symmetric rank one (HSR1) update, which satisfies the modified quasi-Newton equation, is derived to improve the condition number of the updates. The scaling factors are obtained by minimizing an estimate of the upper bound on the condition number of the updating matrix. Theoretical analysis, numerical experiments, and comparisons show that introducing the optimal scaling factor into the modified symmetric rank one update preserves the positive definiteness of the updates and greatly improves the stability and numerical performance of the modified symmetric rank one algorithm.
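The closed form of the paper's optimal scaling factor is not given in the abstract. As an illustration of the self-scaling idea it builds on, the sketch below applies the classical Oren-Luenberger factor to an inverse-Hessian SR1 update: the current matrix is rescaled before the rank-one correction is added. The factor, the skipping threshold, and the use of the unmodified SR1 formula are all assumptions for illustration.

```python
import numpy as np

def scaled_sr1_inverse_update(H, s, y, eps=1e-8):
    """Self-scaled SR1 update of the inverse-Hessian approximation H.

    gamma is the classical Oren-Luenberger scaling factor, used here to
    illustrate self-scaling; the paper instead derives an optimal factor by
    minimizing a bound on the condition number of the (modified) update.
    """
    gamma = (s @ y) / (y @ H @ y)   # rescale H before the rank-one correction
    Hs = gamma * H
    r = s - Hs @ y                  # SR1 residual in inverse form
    denom = r @ y
    if abs(denom) > eps * np.linalg.norm(r) * np.linalg.norm(y):
        Hs = Hs + np.outer(r, r) / denom
    return Hs
```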
Abstract: Using a predictor-corrector strategy, this paper derives new iteration schemes for unconstrained optimization. A scheme first obtains a point (the predictor) by some line search from the current point; with the two points it then constructs a quadratic interpolation curve approximating an ODE trajectory; finally, it determines a new point (the corrector) by searching along the quadratic curve. In particular, the paper gives a global convergence analysis for schemes associated with quasi-Newton updates. In our computational experiments, the new schemes using DFP and BFGS updates outperformed their conventional counterparts on a set of standard test problems.
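The abstract specifies neither the line searches nor the ODE being tracked. The sketch below assumes the gradient-flow ODE x'(t) = -grad f(x(t)) and fills in a crude backtracking predictor and a sampled corrector search; every such choice is an illustrative assumption, not the paper's scheme.

```python
import numpy as np

def predictor_corrector_step(f, grad, x):
    """One illustrative predictor-corrector iteration for minimizing f.

    Predictor: backtracking search along the gradient-flow direction at x.
    Corrector: sampled search along a quadratic curve whose tangent at the
    start matches the flow at x and whose tangent at the predictor parameter
    matches the flow at the predictor (a Heun-type trajectory approximation).
    """
    g0 = -grad(x)                               # flow direction at x
    t = 1.0
    while f(x + t * g0) > f(x) and t > 1e-10:   # predictor line search
        t *= 0.5
    xp = x + t * g0                             # predictor point
    g1 = -grad(xp)                              # flow direction at predictor
    w = (g1 - g0) / (2.0 * t)                   # curvature of the quadratic
    curve = lambda a: x + a * g0 + a * a * w    # c(0)=x, c'(0)=g0, c'(t)=g1
    grid = np.linspace(0.0, 2.0 * t, 41)        # corrector search along curve
    vals = [f(curve(a)) for a in grid]
    return curve(grid[int(np.argmin(vals))])
```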
Abstract: The convergence of quasi-Newton methods for unconstrained optimization has attracted much attention. Powell proved a global convergence result for the BFGS algorithm using an inexact line search satisfying the Wolfe conditions. Byrd, Nocedal, and Yuan extended this result to the convex Broyden class of quasi-Newton methods, except for the DFP method. However, the global convergence of the DFP method, the first quasi-Newton method, under the same line search strategy is still an open question (see ref. [2]).
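For reference, the Wolfe conditions invoked throughout these results can be checked directly. A minimal checker, with the conventional constants c1 = 1e-4 and c2 = 0.9 as illustrative defaults:

```python
import numpy as np

def satisfies_wolfe(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for step length alpha along p:
    sufficient decrease (Armijo) plus the curvature condition."""
    g0 = grad(x) @ p                    # directional derivative at x
    armijo = f(x + alpha * p) <= f(x) + c1 * alpha * g0
    curvature = grad(x + alpha * p) @ p >= c2 * g0
    return armijo and curvature
```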