Abstract
In this paper, we present a modified nonmonotone line search strategy. Based on this strategy, a supermemory gradient method for unconstrained optimization problems is proposed. An attractive property of the proposed method is that the search direction it generates satisfies the sufficient descent condition at every iteration. This property is independent of the convexity of the objective function and of the line search strategy employed. Under mild assumptions, the global convergence and local R-linear convergence of the proposed algorithm are established. Numerical results are reported to show that the proposed method is effective.
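To illustrate the kind of scheme the abstract describes, here is a minimal sketch of a supermemory gradient step combined with a nonmonotone (GLL-style, max-of-recent-values) Armijo test. This is an assumed, generic construction for illustration only, not the authors' modified strategy: the memory coefficient (0.4), the descent safeguard constant (0.5), the memory depth `m`, and the Armijo parameter `sigma` are all hypothetical choices.

```python
import numpy as np

def nonmonotone_supermemory_gradient(f, grad, x0, m=5, sigma=1e-4,
                                     beta=0.5, max_iter=500, tol=1e-6):
    """Illustrative supermemory gradient method with a nonmonotone
    line search; a sketch, not the paper's exact algorithm."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d_prev = np.zeros_like(x)      # memory of the previous search direction
    f_hist = [f(x)]                # recent f-values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Supermemory direction: steepest descent plus a damped memory term.
        d = -g + 0.4 * d_prev
        # Safeguard so the sufficient descent condition g^T d <= -c ||g||^2
        # holds regardless of convexity; fall back to -g otherwise.
        if g @ d > -0.5 * (g @ g):
            d = -g
        # Nonmonotone Armijo test against the max of the last m values.
        f_ref = max(f_hist[-m:])
        alpha = 1.0
        while f(x + alpha * d) > f_ref + sigma * alpha * (g @ d):
            alpha *= beta
        x = x + alpha * d
        g = grad(x)
        d_prev = d
        f_hist.append(f(x))
    return x
```

Accepting steps against the maximum of the last `m` function values, rather than the current one, is what makes the search nonmonotone: individual iterations may increase `f`, which can help escape narrow curved valleys where a monotone Armijo rule forces very short steps.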
Authors
LIN Haichan, LI Jingya, OU Yigui (Faculty of Science, Hainan University, Haikou 570228, China)
Source
Mathematica Applicata (《应用数学》)
Indexed in: CSCD, Peking University Core Journal List (北大核心)
2020, No. 1, pp. 116-125 (10 pages)
Funding
National Natural Science Foundation of China (11961018, 11761025)
Keywords
Unconstrained optimization
Nonmonotone technique
Supermemory gradient method
Convergence analysis
Numerical experiment