Abstract
A new class of memory gradient methods for unconstrained optimization problems is presented. Its global convergence and linear convergence rate are proved under mild conditions. The method possesses the sufficient descent property without any line search, and the search direction automatically lies within a trust region. Moreover, the method inherits an important property of the well-known Polak-Ribière-Polyak (PRP) method: the search direction tends toward the steepest descent direction when a small step is generated, which prevents a sequence of tiny steps. Preliminary numerical results show that the method is efficient.
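The paper's exact update formula is not given in this record, but the PRP property it cites can be illustrated with the classical PRP-type direction d_k = -g_k + β_k d_{k-1}. The function below is a minimal sketch under that assumption, not the authors' method: when the step is small, g_k ≈ g_{k-1}, so β_k ≈ 0 and the direction collapses to steepest descent -g_k.

```python
import numpy as np

def prp_direction(g_k, g_prev, d_prev):
    """Illustrative PRP-type search direction (assumed form, not the
    paper's exact rule): d_k = -g_k + beta_k * d_{k-1}, with the
    Polak-Ribiere-Polyak parameter
    beta_k = g_k^T (g_k - g_prev) / ||g_prev||^2."""
    beta = float(g_k @ (g_k - g_prev)) / float(g_prev @ g_prev)
    return -g_k + beta * d_prev

# After a tiny step the gradient barely changes, so beta_k is nearly
# zero and d_k is essentially the steepest descent direction -g_k.
g_prev = np.array([1.0, 2.0])
g_k = g_prev + 1e-8          # small step => small gradient change
d_prev = np.array([5.0, -3.0])
d_k = prp_direction(g_k, g_prev, d_prev)
print(np.allclose(d_k, -g_k, atol=1e-6))  # True
```

This automatic "restart" toward -g_k is what the abstract means by the method avoiding a sequence of tiny steps.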
Source
Journal of Northwest Normal University (Natural Science)
Indexed in: CAS; Peking University Chinese core journal (北大核心)
2010, No. 4, pp. 32–36 (5 pages)
Funding
National Natural Science Foundation of China (10761001)
Scientific Research Foundation of Guangxi University (XGL090035)
Keywords
unconstrained optimization
memory gradient method
global convergence
linear convergence rate