Journal Articles
2 articles found
1. A TRUST-REGION METHOD FOR NONSMOOTH NONCONVEX OPTIMIZATION
Authors: Ziang Chen, Andre Milzarek, Zaiwen Wen. Journal of Computational Mathematics (SCIE, CSCD), 2023, No. 4, pp. 683-716 (34 pages)
We propose a trust-region type method for a class of nonsmooth nonconvex optimization problems where the objective function is the sum of a (possibly nonconvex) smooth function and a (possibly nonsmooth) convex function. The model function of our trust-region subproblem is always quadratic, and the linear term of the model is generated using abstract descent directions. Therefore, the trust-region subproblems can be easily constructed as well as efficiently solved by cheap and standard methods. When the accuracy of the model function at the solution of the subproblem is not sufficient, we add a safeguard on the stepsizes to improve the accuracy. For a class of functions that can be "truncated", an additional truncation step is defined and a stepsize modification strategy is designed. The overall scheme converges globally, and we establish fast local convergence under suitable assumptions. In particular, using a connection with a smooth Riemannian trust-region method, we prove local quadratic convergence for partly smooth functions under a strict complementarity condition. Preliminary numerical results on a family of ℓ1-optimization problems are reported and demonstrate the efficiency of our approach.
Keywords: Trust-region method; Nonsmooth composite programs; Quadratic model function; Global and local convergence
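The structure described in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm: as a simplifying assumption, the abstract descent direction is taken to be the proximal-gradient direction, the quadratic model uses an identity Hessian, and the problem is an ℓ1-regularized least-squares instance; the safeguard and truncation steps of the paper are omitted.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def trust_region_prox(A, b, lam, x0, delta0=1.0, max_iter=100, tol=1e-10):
    """Trust-region-style iteration for phi(x) = 0.5||Ax-b||^2 + lam*||x||_1.

    Hypothetical simplification of the abstract's scheme: the proximal-gradient
    direction serves as the abstract descent direction, and the model Hessian
    is the identity, so each subproblem is trivial to solve.
    """
    def phi(z):
        return 0.5 * np.linalg.norm(A @ z - b) ** 2 + lam * np.linalg.norm(z, 1)

    x, delta = x0.astype(float).copy(), delta0
    t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size from a Lipschitz bound
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)
        d = soft_threshold(x - t * grad, t * lam) - x   # descent direction
        nd = np.linalg.norm(d)
        if nd < tol:
            break
        s = d * min(1.0, delta / nd)             # clip the step to the trust region
        # predicted decrease of the quadratic model (identity Hessian);
        # the nonsmooth l1 term is kept exactly in the model
        pred = -(grad @ s) - 0.5 * (s @ s) \
               + lam * (np.linalg.norm(x, 1) - np.linalg.norm(x + s, 1))
        ared = phi(x) - phi(x + s)               # actual decrease
        rho = ared / pred if pred > 0 else -1.0
        if rho > 0.1:                            # step accepted
            x = x + s
        if rho > 0.75 and nd >= delta:           # good model: enlarge the radius
            delta *= 2.0
        elif rho <= 0.1:                         # poor model: shrink the radius
            delta *= 0.5
    return x
```

For `A` equal to the identity, the minimizer is the soft-thresholded data `soft_threshold(b, lam)`, which the iteration recovers in a few steps; this serves as a quick sanity check of the ratio test and radius update.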
2. On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
Authors: Andre Milzarek, Xiantao Xiao, Zaiwen Wen, Michael Ulbrich. Science China Mathematics (SCIE, CSCD), 2022, No. 10, pp. 2151-2170 (20 pages)
In this work, we present probabilistic local convergence results for a stochastic semismooth Newton method for a class of stochastic composite optimization problems involving the sum of smooth nonconvex and nonsmooth convex terms in the objective function. We assume that the gradient and Hessian information of the smooth part of the objective function can only be approximated and accessed via calling stochastic first- and second-order oracles. The approach combines stochastic semismooth Newton steps, stochastic proximal gradient steps, and a globalization strategy based on growth conditions. We present tail bounds and matrix concentration inequalities for the stochastic oracles that can be utilized to control the approximation errors by appropriately adjusting or increasing the sampling rates. Under standard local assumptions, we prove that the proposed algorithm locally turns into a pure stochastic semismooth Newton method and converges r-linearly or r-superlinearly with high probability.
Keywords: nonsmooth stochastic optimization; stochastic approximation; semismooth Newton method; stochastic second-order information; local convergence
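The semismooth Newton step at the core of the abstract can be sketched on the same composite model class. This is a deterministic illustration only: it uses exact gradients and Hessians where the paper works with stochastic first- and second-order oracles, drops the globalization strategy, and assumes an ℓ1-regularized least-squares objective, so the residual, its generalized Jacobian, and the step size below are choices for the example, not the paper's construction.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def semismooth_newton(A, b, lam, x0, t=None, max_iter=50, tol=1e-12):
    """Local semismooth Newton iteration for the nonsmooth residual

        F(x) = x - prox_{t*lam*||.||_1}(x - t*grad f(x)),  f(x) = 0.5||Ax-b||^2,

    whose zeros are exactly the minimizers of f + lam*||.||_1.
    Deterministic sketch: exact derivatives, no stochastic oracles,
    no globalization.
    """
    n = len(x0)
    H = A.T @ A                                   # exact Hessian of the smooth part
    if t is None:
        t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        u = x - t * (A.T @ (A @ x - b))
        F = x - soft_threshold(u, t * lam)        # nonsmooth residual
        if np.linalg.norm(F) < tol:
            break
        # element of the generalized Jacobian of F: the soft-threshold is
        # differentiable with slope 1 where |u_i| > t*lam, slope 0 elsewhere
        D = (np.abs(u) > t * lam).astype(float)
        M = np.eye(n) - D[:, None] * (np.eye(n) - t * H)
        x = x + np.linalg.solve(M, -F)            # semismooth Newton step
    return x
```

Near a solution the active set encoded in `D` stabilizes and the iteration reduces to a smooth Newton method on the active coordinates, which is the mechanism behind the fast local rates the abstract refers to.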