Funding: This research is supported by the Research and Development Foundation of Shanghai Education Commission and Asia-Pacific Operatio
Abstract: The quasi-Newton (QN) equation plays a core role in contemporary nonlinear optimization. The traditional QN equation employs only gradient information and ignores function values, which seems unreasonable. In this paper, we consider a class of DFP methods with new QN equations that use both gradient and function value information and require very little additional computation. We give conditions for convergence and superlinear convergence of these methods. We also prove that, under some line search conditions, the DFP method with the new QN equations is convergent and superlinearly convergent.
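As a baseline for the modification described in the abstract, the classical DFP update of the inverse-Hessian approximation can be sketched as follows (a minimal sketch using only the traditional, gradient-based secant equation; the paper's new equations additionally use function values):

```python
import numpy as np

def dfp_update(H, s, y):
    """Classical DFP update of the inverse-Hessian approximation.

    s = x_{k+1} - x_k and y = g_{k+1} - g_k; the updated matrix
    satisfies the traditional QN (secant) equation H_new @ y = s,
    which uses gradient information only.
    """
    Hy = H @ y
    return H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (s @ y)

s = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 1.0, 4.0])          # curvature condition s @ y > 0
H_new = dfp_update(np.eye(3), s, y)
print(np.allclose(H_new @ y, s))       # True: secant equation holds
```

The rank-two correction cancels `H @ y` exactly and replaces it with `s`, which is why the secant equation holds identically after every update.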
Funding: Project 10371017 supported by the National Natural Science Foundation of China.
Abstract: The quasi-Newton equation has played a central role in quasi-Newton methods for solving systems of nonlinear equations and unconstrained optimization problems. Pan instead suggested a new equation and showed that it is of second order, while the traditional one is of first order, in a certain approximation sense [12]. In this paper, we generalize the two equations so as to include both as special cases. The generalized equation is analyzed, and new updates are derived from it. A DFP-like new update outperformed the traditional DFP update in computational experiments on a set of standard test problems.
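The "second order vs. first order" comparison can be made concrete with one well-known function-value-based secant modification (shown in the spirit of the generalized equations above; the exact form used in the paper may differ). On a quadratic, the correction term vanishes identically, so the modified equation reduces to the traditional one:

```python
import numpy as np

# A common function-value-based modification replaces y by
#     y_bar = y + (theta / (s @ s)) * s,
#     theta = 6*(f(x_k) - f(x_k1)) + 3*(g(x_k) + g(x_k1)) @ s.
# On a quadratic, theta = 0, so the modified secant equation
# coincides with the traditional one.

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])        # symmetric positive definite
b = np.array([1.0, -1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x + b @ x
g = lambda x: A @ x + b

x0 = np.array([0.0, 0.0, 0.0])
x1 = np.array([1.0, 1.0, -1.0])
s = x1 - x0
theta = 6.0 * (f(x0) - f(x1)) + 3.0 * (g(x0) + g(x1)) @ s
print(abs(theta) < 1e-12)              # True on any quadratic
```

For non-quadratic objectives theta is generally nonzero, which is where the extra function-value information changes the update.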
Abstract: In this paper, the optimal self-scaling strategy for the modified symmetric rank one (HSR1) update, which satisfies the modified quasi-Newton equation, is derived to improve the condition number of the updates. The scaling factors are obtained by minimizing an estimate of the upper bound on the condition number of the updating matrix. Theoretical analysis and numerical experiments show that introducing the optimal scaling factor into the modified symmetric rank one update preserves the positive definiteness of the updates and greatly improves the stability and numerical performance of the modified symmetric rank one algorithm.
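The conditioning issue motivating the scaling strategy is visible already in the classical SR1 update, sketched below (this sketch omits the paper's modified QN equation and scaling factor; it only shows the base update and the standard denominator safeguard):

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Classical symmetric rank-one (SR1) update.

    Satisfies the secant equation B_new @ s = y, but the denominator
    (y - B@s) @ s can be arbitrarily small -- the ill-conditioning
    that an optimal self-scaling strategy is designed to control --
    so a standard safeguard skips numerically unsafe updates.
    """
    r = y - B @ s
    d = r @ s
    if abs(d) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B                       # skip a numerically unsafe update
    return B + np.outer(r, r) / d

s = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
B_new = sr1_update(np.eye(5), s, y)
print(np.allclose(B_new @ s, y))       # True: secant equation holds
```

Unlike DFP or BFGS, SR1 need not preserve positive definiteness, which is one property the paper's scaled modification recovers.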
Funding: This research is partially supported by the National Natural Science Foundation of China (No. 69774012).
Abstract: A quasi-Newton method (QNM) for solving unconstrained optimization problems in infinite-dimensional spaces is presented in this paper. We apply the QNM algorithm to an identification problem for a nonlinear system of differential equations, that is, to identify the parameter vector q = q(t) appearing in the system of differential equations based on a measurement of the state obtained through a measurement operator. We give two examples to illustrate the algorithm.
Abstract: A quasi-Newton method in infinite-dimensional spaces (QNIS) for solving operator equations is presented, and the convergence of a sequence generated by QNIS is proved in the paper. Next, we suggest a finite-dimensional implementation of QNIS and prove that the sequence defined by the finite-dimensional algorithm converges to the root of the original operator equation, provided that the latter exists and that the Fréchet derivative of the governing operator is invertible. Finally, we apply QNIS to an inverse problem for a parabolic differential equation to illustrate the efficiency of the finite-dimensional algorithm.
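A standard finite-dimensional analogue of such a quasi-Newton scheme for operator equations is Broyden's method, sketched below (illustrative only; the paper's algorithm and its convergence proof live in function spaces, and its update may differ):

```python
import numpy as np

def broyden(F, x0, J0, tol=1e-10, max_iter=50):
    """Broyden's 'good' quasi-Newton method for F(x) = 0.

    The Jacobian approximation J is corrected by a rank-one update so
    that the secant equation J_new @ s = dF holds at every iteration.
    """
    x = np.asarray(x0, dtype=float)
    J = np.asarray(J0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(J, -Fx)    # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        if np.linalg.norm(F_new) < tol:
            return x_new
        dF = F_new - Fx
        J = J + np.outer(dF - J @ s, s) / (s @ s)  # rank-one secant update
        x, Fx = x_new, F_new
    return x

# Example: x0^2 + x1^2 = 2 and x0 = x1, with a root at (1, 1).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J0 = np.array([[2.0, 2.0], [1.0, -1.0]])   # Jacobian near the root
root = broyden(F, [1.2, 0.8], J0)
print(np.allclose(F(root), 0.0, atol=1e-8))  # True: a root was found
```

Only one evaluation of F is needed per iteration and no derivatives are formed, which is the usual appeal of quasi-Newton schemes for operator equations.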
Abstract: This paper presents a new class of quasi-Newton methods for solving unconstrained minimization problems. The methods can be regarded as a generalization of the Huang class of quasi-Newton methods. We prove that the directions and iterates generated by the methods of the new class depend only on the parameter p if exact line searches are performed at each step.
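For orientation, the one-parameter Broyden family contained in the Huang class can be sketched as follows (shown only as a familiar reference point, not as the paper's new class, whose parameterization may differ):

```python
import numpy as np

def broyden_family_update(H, s, y, phi):
    """One-parameter Broyden family of inverse-Hessian updates
    (phi = 0 gives DFP, phi = 1 gives BFGS).

    Every member satisfies the secant equation H_new @ y = s, since
    the correction direction v is orthogonal to y.
    """
    Hy = H @ y
    sy, yHy = s @ y, y @ Hy
    v = s / sy - Hy / yHy              # note v @ y = 0
    H_dfp = H - np.outer(Hy, Hy) / yHy + np.outer(s, s) / sy
    return H_dfp + phi * yHy * np.outer(v, v)

s = np.array([1.0, 2.0, 3.0])
y = np.array([3.0, 2.0, 1.0])
ok = all(np.allclose(broyden_family_update(np.eye(3), s, y, phi) @ y, s)
         for phi in (0.0, 0.5, 1.0))
print(ok)                              # True for every phi
```

The fact that the family parameter drops out of the iterates under exact line searches is the classical analogue of the p-dependence result stated in the abstract.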
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11571178, 11401308, 11371197 and 11471145), the National Science Foundation of USA (Grant No. 1522654), and a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Abstract: In this paper, a new trust region method with a simple model for solving large-scale unconstrained nonlinear optimization problems is proposed. By employing generalized weak quasi-Newton equations, we derive several schemes to construct scalar-matrix variants of the Hessian approximation used in the trust region subproblem. Under some reasonable conditions, global convergence of the proposed algorithm is established in the trust region framework. Numerical experiments on test problems with dimensions from 50 to 20,000 in the CUTEr library are reported to show the efficiency of the algorithm.
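A key convenience of a scalar-matrix model B_k = gamma * I is that the trust region subproblem has a closed-form solution. The sketch below illustrates this (a sketch in the spirit of the paper's simple model; the paper derives gamma from generalized weak QN equations, while here we use the familiar Barzilai-Borwein-like ratio as a stand-in):

```python
import numpy as np

def scalar_tr_step(g, s, y, delta):
    """Trust-region step with the scalar-matrix model B = gamma * I.

    With B = gamma*I, the subproblem
        min_d  g @ d + 0.5 * gamma * (d @ d)   s.t.  ||d|| <= delta
    is solved in closed form: take the unconstrained minimizer -g/gamma
    if it is inside the region, otherwise step to the boundary along -g.
    """
    gamma = (y @ y) / (s @ y)          # Barzilai-Borwein-like scalar
    gn = np.linalg.norm(g)
    if gamma > 0 and gn / gamma <= delta:
        return -g / gamma              # unconstrained minimizer is interior
    return -(delta / gn) * g           # otherwise step to the boundary

g = np.array([3.0, 4.0])
s = np.array([1.0, 0.0])
y = np.array([2.0, 0.0])               # gamma = 4/2 = 2, ||g||/gamma = 2.5
print(np.allclose(scalar_tr_step(g, s, y, 3.0), [-1.5, -2.0]))  # interior
print(np.allclose(scalar_tr_step(g, s, y, 1.0), [-0.6, -0.8]))  # boundary
```

Since no matrix is stored or factorized, the per-iteration cost is O(n), which is what makes such simple models attractive for the large-scale problems the abstract targets.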