Abstract
In this paper, we first propose a memory gradient algorithm and discuss its descent property and global convergence under the Wolfe line search. We then extend the algorithm to a more general form. Finally, we test the numerical performance of this class of memory gradient methods and compare it with the PRP, FR, HS, LS, DY and CD conjugate gradient methods; the results indicate that the proposed algorithms are effective.
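To make the ideas in the abstract concrete, below is a minimal sketch of a *generic* memory gradient iteration d_k = -g_k + beta_k * d_{k-1} combined with a weak Wolfe line search. This is not the specific algorithm of the paper: the coefficient `beta` here is an illustrative FR-like choice, and the descent-direction safeguard is an assumption added for robustness.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Find a step size satisfying the weak Wolfe conditions
    by simple bracketing and bisection."""
    alpha, lo, hi = 1.0, 0.0, np.inf
    fx = f(x)
    gTd = grad(x) @ d  # directional derivative at x along d
    for _ in range(max_iter):
        x_new = x + alpha * d
        if f(x_new) > fx + c1 * alpha * gTd:
            # Armijo (sufficient decrease) fails: shrink the step.
            hi = alpha
            alpha = 0.5 * (lo + hi)
        elif grad(x_new) @ d < c2 * gTd:
            # Curvature condition fails: enlarge the step.
            lo = alpha
            alpha = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
        else:
            return alpha
    return alpha

def memory_gradient(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic memory gradient method: d_k = -g_k + beta_k * d_{k-1}.
    beta_k below is an illustrative FR-like ratio, NOT the paper's formula."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = wolfe_line_search(f, grad, x, d)
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / max(g @ g, 1e-16)
        d = -g_new + beta * d
        # Safeguard (assumption): restart with steepest descent
        # whenever d fails to be a descent direction.
        if g_new @ d >= 0:
            d = -g_new
        g = g_new
    return x
```

For example, on the convex quadratic f(x) = ||x||^2 the iteration drives the gradient norm below the tolerance in a few steps.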
Authors
陈翠玲
韩彩虹
罗荔龄
陈玉
CHEN Cuiling; HAN Caihong; LUO Liling; CHEN Yu (College of Mathematics and Statistics, Guangxi Normal University, Guilin 541004, China; School of Computing and Information, University of Pittsburgh, Pittsburgh 15238, USA)
Source
《应用数学》
CSCD
Peking University Core Journals (PKU Core)
2018, No. 4, pp. 884-889 (6 pages)
Mathematica Applicata
Funding
Supported by the National Natural Science Foundation of China (11761014)
Guangxi Natural Science Foundation (2017GXNSFAA198243)
Guangxi Basic Ability Improvement Project for Young and Middle-aged Teachers of Colleges and Universities (2017KY0068, KY2016YB069)
Guangxi Higher Education Undergraduate Course Teaching Reform Project (2017JGB147)