Abstract: Support vector machines (SVMs) are an important class of machine learning methods that grew out of the interplay between statistical learning theory and optimization, and they have been applied extensively to text categorization, disease diagnosis, face detection, and related tasks. The loss function is a core object of study in SVMs, and its variational properties play an important role in analyzing optimality conditions, designing optimization algorithms, representing support vectors, and studying dual problems. This paper surveys and analyzes the 0-1 loss function and eighteen popular surrogate loss functions used in SVMs, and gives three variational properties for each of these loss functions: the subdifferential, the proximal operator, and the Fenchel conjugate; nine of the proximal operators and fifteen of the Fenchel conjugates are newly derived in this paper.
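As a concrete illustration of these variational properties, consider the hinge loss L(u) = max(0, 1 − u), one of the standard surrogate losses. Its proximal operator and Fenchel conjugate have simple closed forms; the sketch below (Python with NumPy, written for this summary rather than taken from the paper) implements both.

```python
import numpy as np

def prox_hinge(v, lam):
    """Proximal operator of the hinge loss L(u) = max(0, 1 - u):
    prox_{lam*L}(v) = v + lam if v <= 1 - lam,
                      1       if 1 - lam < v < 1,
                      v       if v >= 1."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 1 - lam, v + lam, np.where(v < 1, 1.0, v))

def hinge_conjugate(s):
    """Fenchel conjugate L*(s) = sup_u (s*u - L(u)):
    equals s on [-1, 0] and +inf elsewhere."""
    s = np.asarray(s, dtype=float)
    return np.where((s >= -1) & (s <= 0), s, np.inf)
```

The remaining property is equally explicit here: the subdifferential of the hinge loss is {−1} for u < 1, the interval [−1, 0] at u = 1, and {0} for u > 1. Each of the other surrogate losses admits an analogous case analysis.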
Funding: This work was partially supported by the National Basic Research Program of China (No. 2010CB732501), the National Natural Science Foundation of China (No. 11171018), and the Fundamental Research Funds for the Central Universities (No. 2013JBM095). We thank the two anonymous referees for their very useful comments.
Abstract: This paper gives new bounds on the restricted isometry constant (RIC) in compressed sensing. Let Φ be an m×n real matrix and k a positive integer with k ≤ n. The main results show that if the restricted isometry constants of Φ satisfy δ_{8αk} < 1 and δ_{(1+α)k} < 3/2 − (1 + √((4α + 3)² − 8))/(8α) for α > 3/8, then every k-sparse solution can be recovered exactly via ℓ_1 minimization in the noiseless case. In particular, taking α = 1, 1.5, 2, and 3 yields the sufficient conditions δ_{2k} < 0.5746 and δ_{8k} < 1; δ_{2.5k} < 0.7046 and δ_{12k} < 1; δ_{3k} < 0.7731 and δ_{16k} < 1; and δ_{4k} < 0.8445 and δ_{24k} < 1, respectively.
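The recovery guarantee concerns basis pursuit, min ‖x‖_1 subject to Φx = y, which can be solved as a linear program by splitting x into its positive and negative parts. Below is a minimal sketch using NumPy and SciPy; the problem sizes, sparsity level, and random Gaussian Φ are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                       # illustrative sizes
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
y = Phi @ x_true                           # noiseless measurements

# Basis pursuit as an LP: write x = u - v with u, v >= 0,
# minimize sum(u) + sum(v) subject to Phi @ (u - v) = y.
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_hat - x_true))      # near zero on successful recovery
```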
Funding: This research was supported in part by the National Natural Science Foundation of China (Nos. 70471002 and 10571106) and by NCET-04-0098.
Abstract: In this paper, we give convergence results for the gradient projection method with an exact stepsize rule applied to minimization problems with convex constraints. In particular, we show that if the objective function is convex and its gradient is Lipschitz continuous, then the whole sequence of iterates produced by this method with bounded exact stepsizes converges to a solution of the problem.
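To make the method concrete, here is a sketch of one common variant of gradient projection: project a gradient step onto the feasible set, then take an exact line search along the resulting feasible direction, which has a closed form for a convex quadratic. The box constraint and quadratic objective are illustrative assumptions, not the paper's general setting.

```python
import numpy as np

def grad_proj_exact(Q, b, lo, hi, x0, s=1.0, iters=500):
    """Gradient projection for f(x) = 0.5*x'Qx - b'x over the box [lo, hi]:
    project a gradient step onto the box, then take the exact stepsize
    along the feasible direction (closed form for a quadratic)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        g = Q @ x - b                            # gradient of f at x
        d = np.clip(x - s * g, lo, hi) - x       # feasible direction
        dQd = d @ Q @ d
        if dQd <= 1e-15:                         # stationary: d ~ 0
            break
        t = min(1.0, max(0.0, -(g @ d) / dQd))   # exact stepsize on [0, 1]
        x = x + t * d
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])           # small illustrative problem
b = np.array([1.0, -2.0])
x_star = grad_proj_exact(Q, b, lo=0.0, hi=1.0, x0=np.zeros(2))
```

Since x and x + d are both feasible and the feasible set is convex, clipping the stepsize to [0, 1] keeps every iterate feasible.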
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 11431002, 11771038 and 11728101), the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University (Grant No. RCS2017ZJ001), and the China Scholarship Council (Grant No. 201707090019).
Abstract: Sparse linear programming (SLP) is a linear programming problem equipped with a sparsity constraint; it is nonconvex, discontinuous, and generally NP-hard owing to the combinatorial structure involved. In this paper, by rewriting the sparsity constraint in a disjunctive form, we derive an explicit formula for the Lagrangian dual of the SLP as an unconstrained piecewise-linear convex program, which admits strong duality under a bi-dual sparsity consistency condition. Furthermore, we establish a saddle point theorem based on this strong duality and analyze two classes of stationary points of the saddle point problem. Finally, we extend these results to the SLP in which the lower bound zero is replaced by a certain negative constant.
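To see why the Lagrangian dual is piecewise linear, consider dualizing the equality constraints of a sparsity-constrained LP of the form min cᵀx s.t. Ax = b, 0 ≤ x ≤ u, ‖x‖_0 ≤ s. The upper bound u is an assumption added here to keep the inner minimization finite; the paper's exact formulation may differ. For fixed multipliers y the inner minimization decouples coordinatewise, giving the sketch below.

```python
import numpy as np

def slp_dual_value(y, A, b, c, u, s):
    """Lagrangian dual function of
        min c'x  s.t.  A x = b,  0 <= x <= u,  ||x||_0 <= s,
    obtained by dualizing A x = b.  For fixed y the inner minimization
    decouples: coordinate j contributes at best min(0, (c - A'y)_j * u_j),
    and at most s coordinates may be active."""
    r = c - A.T @ y                        # reduced costs
    per_coord = np.minimum(0.0, r * u)     # best per-coordinate contribution
    best_s = np.sort(per_coord)[:s]        # s most negative contributions
    return b @ y + best_s.sum()            # piecewise linear, concave in y
```

Maximizing this function over y (equivalently, minimizing its negative) is the unconstrained piecewise-linear convex program referred to in the abstract.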
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 11401124 and 71271021), the Scientific Research Projects for the Introduced Talents of Guizhou University (Grant No. 201343), and the Key Program of the National Natural Science Foundation of China (Grant No. 11431002).
Abstract: Regularized minimization problems with nonconvex, nonsmooth, even non-Lipschitz penalty functions have attracted much attention in recent years owing to their wide applications in statistics, control, system identification, and machine learning. In this paper, the non-Lipschitz ℓ_p (0 < p < 1) regularized matrix minimization problem is studied. A global necessary optimality condition for this non-Lipschitz optimization problem is first obtained: the global optimal solutions of the problem are fixed points of the so-called p-thresholding operator, which is matrix-valued and set-valued. A fixed point iterative scheme for the non-Lipschitz model is then proposed, and its convergence analysis is addressed in detail. Moreover, some acceleration techniques are adopted to improve the performance of the algorithm. The effectiveness of the proposed p-thresholding fixed point continuation (p-FPC) algorithm is demonstrated by numerical experiments on randomly generated and real matrix completion problems.
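For intuition, a matrix p-thresholding operator applies a scalar p-thresholding map to the singular values of its argument. The sketch below computes one element of the (possibly set-valued) scalar map numerically via root-finding, rather than through the paper's characterization, and lifts it to matrices through the SVD; the function names and the numerical approach are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import brentq

def prox_lp_scalar(t, lam, p):
    """One element of the (set-valued) scalar p-thresholding map:
    argmin_x lam*|x|**p + 0.5*(x - t)**2 for 0 < p < 1."""
    a = abs(t)
    if a == 0.0:
        return 0.0
    # Nonzero stationary points solve g(x) = x + lam*p*x**(p-1) - a = 0
    # on (0, a); g is minimized at x_min, so a root exists iff g(x_min) < 0.
    x_min = (lam * p * (1 - p)) ** (1.0 / (2.0 - p))
    g = lambda x: x + lam * p * x ** (p - 1) - a
    if x_min >= a or g(x_min) >= 0:
        return 0.0
    x_star = brentq(g, x_min, a)           # larger root = local minimizer
    f = lambda x: lam * x ** p + 0.5 * (x - a) ** 2
    return np.sign(t) * x_star if f(x_star) <= 0.5 * a ** 2 else 0.0

def p_threshold_matrix(Y, lam, p):
    """Matrix p-thresholding: apply the scalar map to the singular values."""
    U, sig, Vt = np.linalg.svd(Y, full_matrices=False)
    sig_new = np.array([prox_lp_scalar(v, lam, p) for v in sig])
    return (U * sig_new) @ Vt

# Fixed point iteration sketch for matrix completion, with Omega the
# 0/1 mask of observed entries, M the observed data and mu a stepsize:
#   X <- p_threshold_matrix(X - mu * Omega * (X - M), mu * lam, p)
```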