Funding: supported by the National Natural Science Foundation of China (Nos. 61303264, 61202482, and 61202488), the Guangxi Cooperative Innovation Center of Cloud Computing and Big Data (No. YD16505), and the Distinguished Young Scientist Promotion of National University of Defense Technology.
Abstract: We consider a wide range of non-convex regularized minimization problems, where the non-convex regularization term is composed with a linear function, a structure that frequently arises in sparse learning. Recent theoretical investigations have demonstrated the superiority of such regularizers over their convex counterparts. The computational challenge is that the proximal mapping associated with the non-convex regularizer is not easily computed because of the linear composition. Fortunately, the problem structure allows one to introduce an auxiliary variable and reformulate the problem as an optimization with linear constraints, which can be solved using the Linearized Alternating Direction Method of Multipliers (LADMM). Despite the success of LADMM in practice, it has remained unknown whether LADMM converges when solving such non-convex compositely regularized problems. In this research, we first present a detailed convergence analysis of the LADMM algorithm for solving non-convex compositely regularized optimization problems with a large class of non-convex penalties. Furthermore, we propose an Adaptive LADMM (AdaLADMM) algorithm with a line-search criterion. Experimental results on datasets of different genres validate the efficacy of the proposed algorithm.
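To make the reformulation concrete, the sketch below shows one iteration scheme of the kind the abstract describes: the problem min_x f(x) + P(Ax) is rewritten with an auxiliary variable z = Ax, the x-step is linearized so it reduces to a gradient step, and the z-step uses the closed-form proximal map of a non-convex penalty. This is a minimal sketch assuming a least-squares loss and the MCP penalty; the function names (mcp_prox, ladmm), the step-size rule, and the fixed iteration count are illustrative assumptions, not the paper's exact algorithm.

```python
# LADMM sketch for  min_x 0.5*||B x - b||^2 + P(A x),
# with P the MCP penalty applied componentwise. Reformulated as
#   min_{x,z} 0.5*||B x - b||^2 + P(z)  s.t.  A x - z = 0,
# the x-step linearizes both the smooth loss and the augmented
# quadratic, so each iteration needs only matrix-vector products.
import numpy as np

def mcp_prox(t, lam, gamma, beta):
    """Closed-form proximal map of the MCP penalty with step 1/beta.

    Solves argmin_u P_MCP(u; lam, gamma) + (beta/2)*(u - t)^2,
    assuming gamma*beta > 1 so the subproblem is well posed.
    """
    shrunk = np.where(np.abs(t) <= lam / beta, 0.0,
                      np.sign(t) * (np.abs(t) - lam / beta)
                      / (1.0 - 1.0 / (gamma * beta)))
    return np.where(np.abs(t) > gamma * lam, t, shrunk)

def ladmm(B, b, A, lam, gamma=3.0, beta=1.0, iters=500):
    x = np.zeros(B.shape[1])
    z = np.zeros(A.shape[0])
    u = np.zeros(A.shape[0])   # scaled dual variable y/beta
    # Proximal parameter: chosen to dominate the loss curvature plus
    # beta*||A||^2 so the linearized x-step is a majorization step.
    mu = np.linalg.norm(B, 2) ** 2 + beta * np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        grad = B.T @ (B @ x - b) + beta * A.T @ (A @ x - z + u)
        x = x - grad / mu                            # linearized x-update
        z = mcp_prox(A @ x + u, lam, gamma, beta)    # nonconvex prox, closed form
        u = u + A @ x - z                            # dual ascent on Ax = z
    return x
```

The practical point of the linearization is visible in the x-update: without it, the x-step would require solving a linear system involving A^T A every iteration, whereas here a single gradient step suffices.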
Funding: Project supported by the Major State Research Development Program, China (No. 2016YFB0201305).
Abstract: Support vector machines (SVMs) have been recognized as a powerful tool for linear classification. When combined with a sparsity-inducing nonconvex penalty, SVMs can perform classification and variable selection simultaneously. However, nonconvex penalized SVMs generally cannot be solved globally and efficiently due to their nondifferentiability, nonconvexity, and nonsmoothness. Existing solutions to nonconvex penalized SVMs typically solve the problem in a serial fashion and thus cannot fully use the parallel computing power of modern multi-core machines. Moreover, the fact that much real-world data is stored in a distributed manner urgently calls for a parallel and distributed solution to nonconvex penalized SVMs. To address this challenge, we propose an efficient alternating direction method of multipliers (ADMM) based algorithm that solves nonconvex penalized SVMs in a parallel and distributed way. We design several techniques to reduce the computation and synchronization costs of the proposed parallel algorithm. A time complexity analysis demonstrates its low time complexity, and the convergence of the parallel algorithm is guaranteed. Experimental evaluations on four LIBSVM benchmark datasets demonstrate the efficiency of the proposed parallel algorithm.
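The parallel structure the abstract describes can be illustrated with a consensus-ADMM split: the data is partitioned row-wise into blocks, each block keeps a local weight copy coupled to a global variable, and the block subproblems are independent, so in a distributed run each would execute on its own worker. The sketch below is a minimal illustration under stated assumptions: it uses a squared (smoothed) hinge loss with an inexact gradient-based block solver and an MCP penalty in the consensus step; the helper names (local_update, consensus_admm_svm, mcp_prox) are hypothetical, and the paper's actual subproblem solvers and synchronization-reduction techniques are not reproduced here.

```python
# Consensus-ADMM sketch for a nonconvex penalized linear SVM:
#   min_w (1/n) * sum_i max(0, 1 - y_i * x_i^T w)^2 + P_lam(w),
# with data split row-wise into blocks. Each block holds a local copy
# w_b coupled to a global z through the constraint w_b = z.
import numpy as np

def mcp_prox(t, lam, gamma, beta):
    """Componentwise proximal map of the MCP penalty (gamma*beta > 1)."""
    shrunk = np.where(np.abs(t) <= lam / beta, 0.0,
                      np.sign(t) * (np.abs(t) - lam / beta)
                      / (1.0 - 1.0 / (gamma * beta)))
    return np.where(np.abs(t) > gamma * lam, t, shrunk)

def local_update(Xb, yb, z, ub, rho, steps=20, lr=0.1):
    """Approximate w_b-step: a few gradient steps on the squared hinge
    loss plus the proximal coupling term (rho/2)*||w - z + u_b||^2."""
    w = z - ub
    for _ in range(steps):
        margin = 1.0 - yb * (Xb @ w)
        active = margin > 0
        grad = -2.0 * Xb[active].T @ (yb[active] * margin[active]) / len(yb)
        grad += rho * (w - z + ub)
        w = w - lr * grad
    return w

def consensus_admm_svm(X, y, lam, n_blocks=4, gamma=3.0, rho=1.0, iters=100):
    blocksX = np.array_split(X, n_blocks)
    blocksy = np.array_split(y, n_blocks)
    z = np.zeros(X.shape[1])
    W = np.zeros((n_blocks, X.shape[1]))   # local weight copies
    U = np.zeros_like(W)                   # scaled dual variables
    for _ in range(iters):
        # Embarrassingly parallel: each block's update is independent
        # and would run on its own worker in a distributed deployment.
        for b in range(n_blocks):
            W[b] = local_update(blocksX[b], blocksy[b], z, U[b], rho)
        # Consensus z-step: nonconvex prox on the averaged signal;
        # the effective prox parameter is n_blocks * rho.
        z = mcp_prox((W + U).mean(axis=0), lam, gamma, n_blocks * rho)
        U += W - z                         # one dual update per block
    return z
```

The only synchronization point per iteration is the averaging inside the z-step, which is where the computation- and synchronization-reduction techniques mentioned in the abstract would apply.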