Abstract
A new method for unconstrained optimization problems is presented. It belongs to the class of trust region methods, in which the descent direction is obtained from trust region steps computed within a restricted subspace. Because this subspace can be specified to include information from previous steps, the method is also related to supermemory descent methods, but without performing multidimensional searches. Trust region methods have an attractive global convergence property, and supermemory information has a good scale-independence property. Since the proposed method combines the characteristics of trust region methods and supermemory descent methods, it converges rapidly. Numerical tests illustrate this point.
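The central computation sketched above is a trust region subproblem solved within a low-dimensional subspace spanned by, for example, the current gradient and a few previous steps (the supermemory information). The following Python sketch is only an illustration of that idea under simplifying assumptions: it assumes the reduced model Hessian is positive definite, ignores the so-called hard case, and uses a crude update of the regularization parameter; all names are hypothetical and are not taken from the paper.

```python
import numpy as np

def subspace_trust_region_step(g, B, S, delta):
    """Illustrative sketch (not the paper's algorithm): minimize the quadratic
    model  g^T d + 0.5 d^T B d  subject to ||d|| <= delta and d in range(S),
    where the columns of S span the restricted subspace (e.g. the current
    gradient and a few previous steps)."""
    # Orthonormalize the subspace basis so the reduced norm equals the full norm.
    Q, _ = np.linalg.qr(S)
    gr = Q.T @ g          # reduced gradient
    Br = Q.T @ B @ Q      # reduced model Hessian (assumed positive definite here)
    # Crude Levenberg-style search: increase mu until the step fits in the region.
    mu = 0.0
    z = np.linalg.solve(Br, -gr)
    while np.linalg.norm(z) > delta:
        mu = 2.0 * mu + 1e-3
        z = np.linalg.solve(Br + mu * np.eye(Br.shape[0]), -gr)
    return Q @ z          # lift the reduced step back to the full space

# Hypothetical usage: subspace built from the gradient and two previous steps.
n = 10
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = A @ A.T + np.eye(n)            # a positive definite model Hessian
g = rng.standard_normal(n)
s_prev1, s_prev2 = rng.standard_normal(n), rng.standard_normal(n)
S = np.column_stack([g, s_prev1, s_prev2])
d = subspace_trust_region_step(g, B, S, delta=1.0)
```

Because the subproblem is solved in only a few dimensions, each iteration is cheap regardless of the problem dimension, which is one motivation for restricting the trust region step to such a subspace.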