
Convergence analysis of the memory gradient method based on a new step-size search (cited by: 1)
Abstract: This paper studies the unconstrained optimization problem. The steepest descent method, the quasi-Newton method, the FR conjugate gradient method, and the PRP conjugate gradient method, all effective algorithms for large-scale unconstrained optimization, are reviewed, with a focus on the conditions of the exact line search and the Wolfe line search. Emphasis is placed on the memory gradient method, one of the more computationally efficient methods for solving unconstrained optimization problems. A new step-size search based on the inexact Wolfe line search is proposed and used to improve the memory gradient algorithm. Finally, the improved algorithm is proved to be globally convergent under weaker conditions.
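The record gives only the abstract, not the paper's step-size rule, but the two ingredients it names can be sketched generically: a step size satisfying the (weak) Wolfe conditions, and the memory gradient direction d_k = -g_k + beta_k * d_(k-1). The bisection line search, the damped FR-style choice of beta, and the descent safeguard below are standard placeholders, not the authors' method:

```python
import numpy as np

def wolfe_step(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Bisection search for a step size satisfying the weak Wolfe conditions."""
    lo, hi = 0.0, np.inf
    alpha = 1.0
    fx = f(x)
    slope = grad(x) @ d  # directional derivative; negative for a descent direction
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * slope:
            hi = alpha                     # sufficient decrease fails: shrink
        elif grad(x + alpha * d) @ d < c2 * slope:
            lo = alpha                     # curvature condition fails: grow
        else:
            return alpha
        alpha = 0.5 * (lo + hi) if hi < np.inf else 2.0 * lo
    return alpha

def memory_gradient(f, grad, x0, tol=1e-8, max_iter=500):
    """Memory gradient iteration: d_k = -g_k + beta_k * d_{k-1}."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = wolfe_step(f, grad, x, d)
        x = x + alpha * d
        g_new = grad(x)
        # Placeholder memory parameter (not the paper's rule): a damped
        # FR-style ratio that usually keeps d a descent direction.
        beta = 0.5 * (g_new @ g_new) / max(g @ g, 1e-16)
        d_new = -g_new + beta * d
        if g_new @ d_new >= 0:  # safeguard: fall back to steepest descent
            d_new = -g_new
        g, d = g_new, d_new
    return x
```

On a convex quadratic f(x) = x'Ax/2 - b'x, the iteration drives the gradient norm below the tolerance in a handful of steps; the safeguard guarantees every search direction is a descent direction, which is the kind of condition the global convergence proof relies on.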
Source: Journal of Guilin University of Electronic Technology, 2007, No. 6, pp. 498-500.
Funding: National Natural Science Foundation of China (10501009); Guangxi Natural Science Foundation (0728206); China Postdoctoral Science Foundation (20070410227).
Keywords: unconstrained optimization; memory gradient method; global convergence; step-size search