Funding: Partly supported by the National Natural Science Foundation of China (Grant No. 10171055).
Abstract: In this paper, two-stage stochastic quadratic programming problems with equality constraints are considered. By Monte Carlo simulation-based approximations of the objective function and its first (and second) derivatives, an inexact Lagrange-Newton type method is proposed. It is shown that this method is globally convergent with probability one. In particular, the convergence is locally superlinear under an integral approximation error bound condition. Moreover, this method can be easily extended to solve stochastic quadratic programming problems with inequality constraints.
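A minimal sketch of the core idea, assuming hypothetical samplers sample_H and sample_c that draw one Monte Carlo realization of the random Hessian H(xi) and linear term c(xi); it illustrates a sample-average Lagrange-Newton step on the KKT system, not the authors' exact algorithm:

```python
import numpy as np

def lagrange_newton_saa(sample_H, sample_c, A, b, x0,
                        n_samples=1000, tol=1e-8, max_iter=50, seed=0):
    # Sketch for min_x E[0.5 x'H(xi)x + c(xi)'x]  s.t.  A x = b,
    # with H(xi), c(xi) drawn via the user-supplied samplers.
    rng = np.random.default_rng(seed)
    n, m = x0.size, b.size
    x, lam = x0.astype(float).copy(), np.zeros(m)
    for _ in range(max_iter):
        # Monte Carlo (sample-average) estimates of Hessian and gradient.
        H_bar, g_bar = np.zeros((n, n)), np.zeros(n)
        for _ in range(n_samples):
            H_i = sample_H(rng)
            H_bar += H_i
            g_bar += H_i @ x + sample_c(rng)
        H_bar /= n_samples
        g_bar /= n_samples
        # KKT residual of the Lagrangian L(x, lam) = f(x) + lam'(Ax - b).
        r = np.concatenate([g_bar + A.T @ lam, A @ x - b])
        if np.linalg.norm(r) < tol:
            break
        # One Lagrange-Newton step: solve the KKT system for (dx, dlam).
        K = np.block([[H_bar, A.T], [A, np.zeros((m, m))]])
        d = np.linalg.solve(K, -r)
        x, lam = x + d[:n], lam + d[n:]
    return x, lam
```

Each iteration re-estimates the gradient and Hessian from fresh samples and takes one Newton step on the Lagrangian's KKT conditions; the paper's inexactness control and globalization safeguards are omitted for brevity.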
Abstract: We propose a parallel stochastic Newton method (PSN) for minimizing unconstrained smooth convex functions. We analyze the method in the strongly convex case and give conditions under which acceleration can be expected when compared to its serial counterpart. We show how PSN can be applied to large-scale quadratic function minimization in general and to empirical risk minimization problems. We demonstrate the practical efficiency of the method through numerical experiments on models of simple matrix classes.
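One plausible reading of the parallel idea, sketched below under stated assumptions: each of n_workers workers samples a random coordinate block, solves the Newton system restricted to that block, and the averaged block steps update the iterate. The names psn_sketch, block_size, and step are illustrative, not the paper's, and the inner loop is a serial stand-in for genuinely parallel workers:

```python
import numpy as np

def psn_sketch(grad, hess, x0, n_workers=4, block_size=2,
               step=1.0, n_iters=200, seed=0):
    # Each "worker" samples a coordinate block S, solves the Newton
    # system restricted to S, and the block steps are averaged.
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    n = x.size
    for _ in range(n_iters):
        g, H = grad(x), hess(x)
        d = np.zeros(n)
        for _ in range(n_workers):  # serial stand-in for parallel workers
            S = rng.choice(n, size=block_size, replace=False)
            d[S] += np.linalg.solve(H[np.ix_(S, S)], -g[S])
        x += step * d / n_workers
    return x

# Usage on a strongly convex quadratic f(x) = 0.5 x'Qx - p'x.
rng = np.random.default_rng(1)
M = rng.standard_normal((10, 10))
Q = M @ M.T + np.eye(10)   # symmetric positive definite Hessian
p = rng.standard_normal(10)
x_star = psn_sketch(lambda x: Q @ x - p, lambda x: Q, np.zeros(10))
```

For convex f, each block Newton step from x does not increase f, so the averaged update is a descent direction; how much the parallel averaging accelerates over the serial single-block update is exactly the question the paper's analysis addresses.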
Funding: Supported by the Fundamental Research Fund - Shenzhen Research Institute for Big Data Startup Fund (Grant No. JCYJ-AM20190601), the Shenzhen Institute of Artificial Intelligence and Robotics for Society, the National Natural Science Foundation of China (Grant Nos. 11831002 and 11871135), the Key-Area Research and Development Program of Guangdong Province (Grant No. 2019B121204008), and the Beijing Academy of Artificial Intelligence.
Abstract: In this work, we present probabilistic local convergence results for a stochastic semismooth Newton method for a class of stochastic composite optimization problems involving the sum of smooth nonconvex and nonsmooth convex terms in the objective function. We assume that the gradient and Hessian information of the smooth part of the objective function can only be approximated and accessed via calling stochastic first- and second-order oracles. The approach combines stochastic semismooth Newton steps, stochastic proximal gradient steps, and a globalization strategy based on growth conditions. We present tail bounds and matrix concentration inequalities for the stochastic oracles that can be utilized to control the approximation errors by appropriately adjusting or increasing the sampling rates. Under standard local assumptions, we prove that the proposed algorithm locally turns into a pure stochastic semismooth Newton method and converges r-linearly or r-superlinearly with high probability.
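A hedged sketch of the overall mechanism for the concrete instance phi(x) = lam_reg*||x||_1 (so the prox is soft-thresholding), assuming hypothetical stochastic oracles grad_est and hess_est; the acceptance test below is a simple residual-decrease stand-in for the paper's growth conditions, and the sampling-rate adjustments are omitted:

```python
import numpy as np

def soft_threshold(u, t):
    # Prox of t*||.||_1.
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def stochastic_ssn_sketch(grad_est, hess_est, x0, lam_reg=0.1, step=0.5,
                          n_iters=100, rho=0.9, tol=1e-8, seed=0):
    # Sketch for min f(x) + lam_reg*||x||_1 with f accessed only through
    # stochastic oracles grad_est(x, rng) and hess_est(x, rng).
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    n = x.size
    for _ in range(n_iters):
        g = grad_est(x, rng)
        u = x - step * g
        # Natural residual F(x) = x - prox_{step*phi}(x - step*grad f(x)).
        F = x - soft_threshold(u, step * lam_reg)
        if np.linalg.norm(F) < tol:
            break
        # Element of the generalized Jacobian of F for the l1 prox:
        # D is 0/1, with 1 where the prox is differentiable at u.
        H = hess_est(x, rng)
        D = (np.abs(u) > step * lam_reg).astype(float)
        M = np.eye(n) - D[:, None] * (np.eye(n) - step * H)
        try:
            d = np.linalg.solve(M, -F)
        except np.linalg.LinAlgError:
            d = None
        if d is not None:
            # Semismooth Newton trial step, accepted only if the residual
            # shrinks enough (stand-in for the paper's growth conditions).
            x_trial = x + d
            g_t = grad_est(x_trial, rng)
            F_t = x_trial - soft_threshold(x_trial - step * g_t,
                                           step * lam_reg)
            if np.linalg.norm(F_t) <= rho * np.linalg.norm(F):
                x = x_trial
                continue
        # Fallback: stochastic proximal gradient step.
        x = soft_threshold(u, step * lam_reg)
    return x
```

Near a solution the Newton trial is accepted at every iteration, so the loop degenerates into a pure stochastic semismooth Newton method, which mirrors the local regime analyzed in the paper.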
Funding: Research partially supported by the National Natural Science Foundation of China (Grant No. 60603098) and the Key Projects of Baoji University of Arts and Sciences (No. ZK0937).