Abstract
This paper proposes a new algorithm for unconstrained optimization, termed the gradient descent method. Using information from the iterative points already obtained, the gradient g_(k+1) of the next iterative point is chosen subject to the requirement |g_(k+1)| ≤ |g_k|; the next iterative point x_(k+1) is then found by solving the system of nonlinear equations g = g_(k+1). The method has the following features: it requires no line search; it converges in two iterations for positive definite quadratic functions; it is guaranteed to reach the global minimizer for continuously differentiable convex functions; its convergence region is larger than that of Newton's method; and its convergence rate, while slower than that of Newton's method, is faster than those of the well-known BFGS variable metric method and the FR conjugate gradient method.
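The two-stage idea in the abstract (choose a target gradient g_(k+1) with |g_(k+1)| ≤ |g_k|, then solve g(x) = g_(k+1) for x_(k+1)) can be sketched in a minimal form. The abstract does not state the paper's rule for choosing g_(k+1), so this hypothetical sketch simply shrinks the current gradient by a factor c in (0, 1); for the quadratic f(x) = ½xᵀAx − bᵀx the equation g(x) = t reduces to the linear system Ax = b + t, solved here by Cramer's rule for the 2×2 case.

```python
# Hypothetical sketch of the gradient-target idea from the abstract.
# Assumptions (not from the paper): the shrink factor c = 0.5 as the
# rule for picking g_(k+1), and a 2x2 positive definite quadratic test
# problem f(x) = 0.5 x^T A x - b^T x with gradient g(x) = A x - b.

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite (assumption)
b = [1.0, 2.0]

def grad(x):
    # g(x) = A x - b
    return [A[0][0]*x[0] + A[0][1]*x[1] - b[0],
            A[1][0]*x[0] + A[1][1]*x[1] - b[1]]

def solve2(t):
    # Solve the nonlinear system g(x) = t; for a quadratic objective
    # this is the linear system A x = b + t (Cramer's rule, 2x2 only).
    r0, r1 = b[0] + t[0], b[1] + t[1]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(r0*A[1][1] - A[0][1]*r1) / det,
            (A[0][0]*r1 - r0*A[1][0]) / det]

def gradient_target_step(x, c=0.5):
    # Target gradient: shrink the current one, so |g_(k+1)| <= |g_k|.
    t = [c * g for g in grad(x)]
    return solve2(t)

x = [0.0, 0.0]
for _ in range(20):
    x = gradient_target_step(x)
norm = (grad(x)[0]**2 + grad(x)[1]**2) ** 0.5
# For this linear-gradient case the gradient norm shrinks by the
# factor c at every step, so it decays geometrically toward 0.
```

For a general nonlinear g, the inner solve would itself require an iterative nonlinear-equations solver; the quadratic case is used here only because the solve is exact and the geometric decay of |g_k| is easy to verify.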
Keywords
optimization method
constraints
gradient descent method
gradients