Abstract
A new class of memory gradient methods for unconstrained optimization problems is presented. Global convergence and a linear convergence rate are proved under mild conditions. At each iteration the methods use information from the current and previous iterates to generate a descent direction, without computing or storing any matrices, which makes them suitable for large-scale optimization problems. Preliminary numerical experiments show that the new methods are more efficient than the FR, PRP, and HS conjugate gradient methods and the steepest descent method under the Wolfe line search.
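The abstract only sketches the idea (a descent direction built from the current gradient and the previous direction, with no matrix storage). The paper's exact update rule is not reproduced here, so the following is a minimal illustrative sketch of a *generic* memory gradient iteration, d_k = -g_k + β·d_{k-1}, with an Armijo backtracking line search standing in for the Wolfe search; the function names, the fixed β, and the descent safeguard are all assumptions, not the authors' method.

```python
import numpy as np

def memory_gradient(f, grad, x0, beta=0.3, tol=1e-8, max_iter=5000):
    """Generic memory-gradient sketch (NOT the paper's exact update):
        d_k = -g_k + beta * d_{k-1},
    with Armijo backtracking. Only two vectors (g, d) are stored,
    so no matrices are computed or kept, as in the abstract."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                      # first step: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search (a Wolfe-type search is
        # used in the paper; Armijo keeps this sketch short)
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d):
            alpha *= 0.5
        x = x + alpha * d
        g = grad(x)
        # memory term: mix the new gradient with the previous direction;
        # fall back to steepest descent if descent is lost (safeguard)
        d_new = -g + beta * d
        if g.dot(d_new) >= 0:
            d_new = -g
        d = d_new
    return x

# Usage on a simple strictly convex quadratic with minimizer (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
x_star = memory_gradient(f, grad, x0=[0.0, 0.0])
```

The only per-iteration state is two n-vectors, which is what makes such methods attractive for large-scale problems compared with quasi-Newton methods that store an n-by-n matrix.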
Source
Journal of Shandong University (Natural Science) (《山东大学学报(理学版)》)
Indexed in: CAS, CSCD, PKU Core
2009, No. 7, pp. 33-37 (5 pages)
Funding
Supported by the Youth Foundation of Xinyang Normal University (20080208, 20070207)
Keywords
unconstrained optimization
memory gradient method
global convergence
linear convergence rate