Abstract
The quasi-Newton (QN) equation plays a central role in contemporary nonlinear optimization. The traditional QN equation employs only gradients and ignores function value information, which seems unreasonable. In this paper, we consider a class of DFP methods with new QN equations that use both gradient and function value information and require very little additional computation. We give conditions for convergence and superlinear convergence of these methods. We also prove that, under certain line search conditions, the DFP method with the new QN equations is convergent and superlinearly convergent.
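To make the idea concrete, the sketch below shows the standard DFP inverse-Hessian update together with one well-known function-value-based modification of the difference vector y (the correction term `theta` and the helpers `dfp_update`/`modified_y` are illustrative assumptions; the paper's exact new QN equations may differ):

```python
import numpy as np

def dfp_update(H, s, y):
    # DFP inverse-Hessian update:
    #   H+ = H - (H y yT H)/(yT H y) + (s sT)/(yT s)
    Hy = H @ y
    return H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (y @ s)

def modified_y(s, y, f_old, f_new, g_old, g_new):
    # A common function-value-based correction (illustrative only):
    #   theta = 6 (f_old - f_new) + 3 (g_old + g_new)T s
    #   y* = y + (theta / sT s) s
    # For exact quadratics theta vanishes, so y* reduces to y.
    theta = 6.0 * (f_old - f_new) + 3.0 * (g_old + g_new) @ s
    return y + (theta / (s @ s)) * s

# Demo: minimize the convex quadratic f(x) = 0.5 xT A x - bT x
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x = np.zeros(2)
H = np.eye(2)                       # initial inverse-Hessian guess
for _ in range(20):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    d = -H @ g                      # quasi-Newton direction
    alpha = -(g @ d) / (d @ A @ d)  # exact line search (quadratic case)
    x_new = x + alpha * d
    s, y = x_new - x, grad(x_new) - g
    y_star = modified_y(s, y, f(x), f(x_new), g, grad(x_new))
    H = dfp_update(H, s, y_star)
    x = x_new

print(np.linalg.norm(grad(x)))      # near zero at the minimizer
```

Note that the modification costs only the two function values f_old and f_new, which a line search computes anyway, which is why the abstract can claim "very little additional computation."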
Funding
This research is supported by the Research and Development Foundation of Shanghai Education Commission and Asia-Pacific Operatio