Abstract
Based on a fixed step-length technique, this paper presents a supermemory gradient method for unconstrained optimization problems that avoids performing a line search at every iteration. Under certain conditions, the algorithm is shown to be globally convergent with a local linear convergence rate. Since the method requires no matrix computation or storage, it is suitable for large-scale optimization problems. Preliminary numerical results indicate that the algorithm is effective.
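The abstract describes a supermemory gradient iteration whose step length is fixed by a formula rather than a line search. The paper's exact update rules are not reproduced in this record, so the sketch below uses a common textbook form of the method: the direction combines the negative gradient with the last `m` directions, and the step size comes from a closed-form rule using a Lipschitz-type constant. The memory weight `beta`, damping factor `delta`, and constant `L` are illustrative assumptions, not the authors' choices.

```python
import numpy as np

def supermemory_gradient(grad, x0, m=3, beta=0.05, delta=0.5,
                         L=1.0, tol=1e-6, max_iter=5000):
    """Generic supermemory gradient sketch with a fixed step-length rule.

    Direction:  d_k = -g_k + sum_{i=1}^{m} beta * d_{k-i}
    Step size:  alpha_k = -delta * (g_k . d_k) / (L * ||d_k||^2),
                computed in closed form -- no line search is performed.
    """
    x = np.asarray(x0, dtype=float)
    history = []                      # last m search directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g + sum(beta * dp for dp in history)
        if g @ d >= 0:                # safeguard: keep d a descent direction
            d = -g
        alpha = -delta * (g @ d) / (L * (d @ d))   # fixed-step formula
        x = x + alpha * d
        history = ([d] + history)[:m]
    return x
```

For any 0 < `delta` < 2 and `L` at least the gradient's Lipschitz constant, the standard descent lemma guarantees each step decreases the objective, which is the mechanism that replaces the line search.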
Source
Mathematica Applicata (《应用数学》)
CSCD
Peking University Core Journals (北大核心)
2015, No. 1, pp. 74-82 (9 pages)
Funding
National Natural Science Foundation of China (11261015)
Hainan Provincial Natural Science Foundation (111001)
Keywords
Unconstrained optimization
Fixed step-length technique
Supermemory gradient method
Numerical experiment