Funding: partly supported by the Fundamental Research Fund - Shenzhen Research Institute for Big Data (SRIBD) Startup Fund JCYJ-AM20190601, the NSFC grant 11831002, and the Beijing Academy of Artificial Intelligence.
Abstract: We propose a trust-region type method for a class of nonsmooth nonconvex optimization problems in which the objective function is the sum of a (possibly nonconvex) smooth function and a (possibly nonsmooth) convex function. The model function of our trust-region subproblem is always quadratic, and the linear term of the model is generated using abstract descent directions. Therefore, the trust-region subproblems can be easily constructed and efficiently solved by cheap and standard methods. When the accuracy of the model function at the solution of the subproblem is not sufficient, we add a safeguard on the stepsizes to improve the accuracy. For a class of functions that can be "truncated", an additional truncation step is defined and a stepsize modification strategy is designed. The overall scheme converges globally, and we establish fast local convergence under suitable assumptions. In particular, using a connection with a smooth Riemannian trust-region method, we prove local quadratic convergence for partly smooth functions under a strict complementarity condition. Preliminary numerical results on a family of ℓ1-optimization problems are reported and demonstrate the efficiency of our approach.
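To make the abstract's setting concrete, the following is a minimal illustrative sketch (not the authors' exact algorithm) of a trust-region iteration for a composite objective f(x) = g(x) + λ‖x‖₁, with g smooth and the ℓ1 term convex. Here the "abstract descent direction" is taken to be a proximal-gradient step, so the quadratic model restricted to that direction has a closed-form minimizer inside the trust region; the function names, constants, and acceptance thresholds (0.25/0.75) are assumptions chosen for illustration, following conventional trust-region practice.

```python
import numpy as np

def prox_l1(x, t):
    # soft-thresholding: proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def trust_region_l1(g, grad_g, x0, lam=0.1, delta=1.0, sigma=1.0,
                    max_iter=200, tol=1e-8):
    """Sketch of a trust-region scheme for f(x) = g(x) + lam*||x||_1."""
    f = lambda x: g(x) + lam * np.sum(np.abs(x))
    x = x0.copy()
    for _ in range(max_iter):
        # abstract descent direction: a proximal-gradient step
        d = prox_l1(x - grad_g(x) / sigma, lam / sigma) - x
        if np.linalg.norm(d) < tol:
            break
        # quadratic model along d with curvature sigma; its minimizer
        # within the trust region ||s|| <= delta is a clipped step t*d
        t = min(1.0, delta / np.linalg.norm(d))
        s = t * d
        # predicted reduction of the model m(s) = -sigma*<d,s> + sigma/2*||s||^2
        pred = sigma * np.dot(d, s) - 0.5 * sigma * np.dot(s, s)
        ared = f(x) - f(x + s)            # actual reduction
        rho = ared / pred if pred > 0 else -np.inf
        if rho > 0.75:                    # accurate model: accept, enlarge radius
            x = x + s
            delta = min(2.0 * delta, 1e3)
        elif rho > 0.25:                  # acceptable: accept, keep radius
            x = x + s
        else:                             # poor model: reject, shrink radius
            delta *= 0.5
    return x
```

For example, with g(x) = ½‖Ax − b‖² this reduces to a LASSO-type ℓ1-optimization problem; choosing sigma at least as large as the Lipschitz constant of grad_g guarantees that accepted steps decrease f.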