Gradient Descent Method (梯度下降法)

Citations: 41
Abstract: This paper proposes a new algorithm for unconstrained optimization, the gradient descent method. Using information from the iterates already obtained, and subject to the requirement |g_(k+1)| ≤ |g_k|, the next iterate x_(k+1) is found by solving the system of nonlinear equations g = g_(k+1). The method is characterized as follows: it requires no line search; it converges in two iterations for positive definite quadratic functions; for continuously differentiable convex functions it is guaranteed to converge to the global minimizer; its convergence region is larger than that of Newton's method; its convergence rate is slower than that of Newton's method, but faster than those of the well-known BFGS variable-metric method and the FR conjugate gradient method.
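The abstract gives only the outline of the scheme, so the following is a minimal illustrative sketch, not the authors' exact algorithm: at each step a target gradient with smaller norm is chosen (here simply g_(k+1) = c·g_k with an assumed shrink factor 0 < c < 1), and the system g(x) = g_(k+1) is solved for the next iterate. For a quadratic f(x) = ½xᵀAx − bᵀx the gradient is g(x) = Ax − b, so that system reduces to a linear solve.

```python
import numpy as np

def gradient_target_step(A, b, x, c=0.5):
    """One sketch step: choose target g_{k+1} = c * g_k (so that
    |g_{k+1}| <= |g_k| as the abstract requires), then solve
    g(x_{k+1}) = g_{k+1}, i.e. A x_{k+1} - b = c * g_k.
    The shrink rule c * g_k is an illustrative assumption,
    not taken from the paper."""
    g = A @ x - b                        # current gradient g_k
    g_next = c * g                       # target gradient, smaller norm
    x_next = np.linalg.solve(A, b + g_next)
    return x_next

# Positive definite quadratic test problem (arbitrary example data).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

x = np.zeros(2)
for _ in range(20):
    x = gradient_target_step(A, b, x)

# The gradient norm shrinks by the factor c each step,
# driving x toward the minimizer A^{-1} b.
final_grad_norm = np.linalg.norm(A @ x - b)
```

For a general (non-quadratic) g the inner system g(x) = g_(k+1) would itself need a nonlinear solver, which is where the method's cost lies; on the quadratic case above the linear solve makes each step exact.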
Source: Journal of East China Institute of Technology (《华东工学院学报》), CSCD, 1993, No. 2, pp. 12-16, 22 (6 pages)
Keywords: optimization, constraints, gradient descent method

