
On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization

Abstract: In this work, we present probabilistic local convergence results for a stochastic semismooth Newton method for a class of stochastic composite optimization problems involving the sum of smooth nonconvex and nonsmooth convex terms in the objective function. We assume that the gradient and Hessian information of the smooth part of the objective function can only be approximated and accessed via calling stochastic first- and second-order oracles. The approach combines stochastic semismooth Newton steps, stochastic proximal gradient steps, and a globalization strategy based on growth conditions. We present tail bounds and matrix concentration inequalities for the stochastic oracles that can be utilized to control the approximation errors via appropriately adjusting or increasing the sampling rates. Under standard local assumptions, we prove that the proposed algorithm locally turns into a pure stochastic semismooth Newton method and converges r-linearly or r-superlinearly with high probability.
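To make the problem class concrete: the abstract concerns minimizing f(x) + φ(x), where f is smooth (possibly nonconvex) and only reachable through stochastic oracles, and φ is nonsmooth convex. The sketch below is not the authors' algorithm; it only illustrates one of its ingredients, a stochastic proximal gradient step, under the illustrative assumptions that φ is an ℓ1 penalty and that the first-order oracle is a minibatch average of component gradients. All names (`prox_l1`, `stochastic_prox_grad_step`) are hypothetical.

```python
import numpy as np

# Illustrative setting (an assumption, not the paper's setup):
#   minimize f(x) + phi(x),  phi(x) = lam * ||x||_1,
# with grad f approximated by a stochastic first-order oracle.

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad_step(x, component_grads, step, lam, batch_size, rng):
    """One stochastic proximal gradient step.

    The gradient of f is approximated by averaging a random minibatch of
    component gradients; raising batch_size tightens the approximation,
    mirroring the sampling-rate control the abstract refers to.
    """
    idx = rng.choice(len(component_grads), size=batch_size, replace=False)
    g = np.mean([component_grads[i](x) for i in idx], axis=0)
    # Forward (stochastic gradient) step, then backward (proximal) step.
    return prox_l1(x - step * g, step * lam)
```

For example, with f(x) = (1/n) Σ_i ½‖x − a_i‖², each component gradient is `lambda x: x - a_i`, and iterating the step drives x toward a soft-thresholded mean of the anchors a_i. The paper's method additionally switches to stochastic semismooth Newton steps near a solution, which is what yields the r-linear/r-superlinear rates.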
Source: Science China Mathematics (中国科学:数学(英文版)), indexed in SCIE and CSCD, 2022, Issue 10, pp. 2151-2170 (20 pages).
Funding: supported by the Fundamental Research Fund - Shenzhen Research Institute for Big Data Startup Fund (Grant No. JCYJ-AM20190601); the Shenzhen Institute of Artificial Intelligence and Robotics for Society; the National Natural Science Foundation of China (Grant Nos. 11831002 and 11871135); the Key-Area Research and Development Program of Guangdong Province (Grant No. 2019B121204008); and the Beijing Academy of Artificial Intelligence.