Abstract
This paper studies the unconstrained optimization problem. The steepest descent method, the quasi-Newton method, the FR conjugate gradient method, and the PRP conjugate gradient method, which are effective for large-scale unconstrained optimization problems, are described, with a focus on the exact line search and the Wolfe line search conditions. Emphasis is placed on the memory gradient method, one of the more computationally efficient methods for solving unconstrained optimization problems. A new step-size search method based on the inexact Wolfe line search is proposed and used to improve the memory gradient algorithm. Finally, the improved algorithm is proved to be globally convergent under weaker conditions.
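The abstract refers to the standard Wolfe line search conditions and the memory gradient iteration. As a minimal illustrative sketch (not the paper's improved step-size rule, which is not reproduced here), the following Python code checks the weak Wolfe conditions, builds a memory gradient direction that mixes the negative gradient with the previous direction using a hypothetical fixed weight `eta`, and finds a Wolfe step by bisection on a small quadratic test problem:

```python
import numpy as np

def memory_gradient_direction(g, d_prev, eta=0.2):
    """Illustrative memory gradient direction: negative gradient plus a
    weighted copy of the previous search direction (eta is an assumed
    fixed weight, not the parameter choice from the paper)."""
    if d_prev is None:
        return -g
    return -g + eta * d_prev

def wolfe_step(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Find a step size satisfying the weak Wolfe conditions by
    bracketing and bisection (a standard textbook scheme)."""
    g_d = grad(x).dot(d)          # directional derivative at x (must be < 0)
    lo, hi, alpha = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) > f(x) + c1 * alpha * g_d:
            # Sufficient-decrease (Armijo) condition violated: shrink.
            hi = alpha
            alpha = 0.5 * (lo + hi)
        elif grad(x + alpha * d).dot(d) < c2 * g_d:
            # Curvature condition violated: enlarge the step.
            lo = alpha
            alpha = 2.0 * alpha if hi == np.inf else 0.5 * (lo + hi)
        else:
            return alpha          # both Wolfe conditions hold
    return alpha

# Demo: minimize a strongly convex quadratic f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x.dot(A @ x)
grad = lambda x: A @ x

x = np.array([3.0, 1.0])
d_prev = None
for _ in range(200):
    d = memory_gradient_direction(grad(x), d_prev)
    if grad(x).dot(d) >= 0:       # safeguard: fall back to steepest descent
        d = -grad(x)
    alpha = wolfe_step(f, grad, x, d)
    x = x + alpha * d
    d_prev = d
```

After the loop, `x` is close to the minimizer at the origin. The safeguard reset to the steepest descent direction is one common way to keep every search direction a descent direction, which is what the Wolfe conditions require.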
Source
Journal of Guilin University of Electronic Technology
2007, No. 6, pp. 498-500 (3 pages)
Funding
National Natural Science Foundation of China (10501009)
Guangxi Natural Science Foundation (0728206)
China Postdoctoral Science Foundation (20070410227)
Keywords
unconstrained optimization
memory gradient method
global convergence
step-size search