Abstract
The memory gradient method can solve large-scale unconstrained optimization problems and has the advantage of avoiding heavy storage and large-scale matrix operations. The fundamental difficulty with the traditional memory gradient method is the two-dimensional search that must be solved at each iteration. To avoid this two-dimensional search and accelerate convergence, an improvement to the memory gradient method is made and an improved memory gradient algorithm is given. The improved algorithm handles the two-dimensional search problem effectively, with a small amount of computation and small storage requirements, so that under the inexact Wolfe line-search conditions the memory gradient method has greater practical value. Its global convergence is also proved.
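A memory gradient iteration combines the current negative gradient with the previous search direction, d_k = -g_k + β_k d_{k-1}, with the step length chosen by a line search satisfying the Wolfe conditions. The sketch below illustrates this general scheme on a toy quadratic; the particular memory parameter β_k, the descent safeguard, and the test function are illustrative assumptions, not the specific formulas of this paper.

```python
import math

def f(x):
    # Toy objective: f(x, y) = (x - 3)^2 + 2*(y + 1)^2, minimized at (3, -1).
    return (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2

def grad_f(x):
    return [2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def wolfe_line_search(f, grad_f, x, d, c1=1e-4, c2=0.9, alpha=1.0, max_iter=50):
    """Bisection-style search for a step satisfying the (weak) Wolfe conditions."""
    lo, hi = 0.0, float("inf")
    fx, slope = f(x), dot(grad_f(x), d)  # slope < 0 for a descent direction
    for _ in range(max_iter):
        xa = [xi + alpha * di for xi, di in zip(x, d)]
        if f(xa) > fx + c1 * alpha * slope:        # Armijo condition fails: shrink
            hi = alpha
            alpha = 0.5 * (lo + hi)
        elif dot(grad_f(xa), d) < c2 * slope:      # curvature condition fails: grow
            lo = alpha
            alpha = 2.0 * lo if hi == float("inf") else 0.5 * (lo + hi)
        else:
            return alpha
    return alpha

def memory_gradient(f, grad_f, x0, tol=1e-6, max_iter=200):
    x, d_prev = list(x0), None
    for _ in range(max_iter):
        g = grad_f(x)
        if norm(g) < tol:
            break
        if d_prev is None:
            d = [-gi for gi in g]                  # first step: steepest descent
        else:
            # Hypothetical memory parameter, bounded so that g·d <= -0.5*||g||^2
            # and d stays a descent direction (not the paper's specific choice).
            beta = 0.5 * norm(g) / max(norm(d_prev), 1e-12)
            d = [-gi + beta * dpi for gi, dpi in zip(g, d_prev)]
            if dot(g, d) >= 0.0:                   # safeguard: revert to -g
                d = [-gi for gi in g]
        alpha = wolfe_line_search(f, grad_f, x, d)
        x = [xi + alpha * di for xi, di in zip(x, d)]
        d_prev = d
    return x

print(memory_gradient(f, grad_f, [0.0, 0.0]))  # approaches the minimizer (3, -1)
```

Because only the scalar β_k and the previous direction are kept, the method stores no matrices, which is the storage advantage the abstract refers to; the one-dimensional Wolfe search replaces the two-dimensional search over (α_k, β_k) jointly.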
Source
《河北北方学院学报(自然科学版)》
2009, Issue 3, pp. 4-5, 10 (3 pages)
Journal of Hebei North University: Natural Science Edition
Keywords
unconstrained optimization
memory gradient method
global convergence