p-norm Regularized SVM Classification via a Non-convex Conjugate Gradient Algorithm
Abstract: In the classical p-norm support vector machine (SVM), the regularization order p is usually fixed at 0, 1, or 2. Extensive experiments show, however, that these choices do not always give the best classification results; selecting the regularization order to suit the data at hand can improve predictive accuracy. LIU Jian-wei et al. treated the p-norm regularized SVM with an iteratively reweighted scheme, but since each iteration solves only an approximation of the original problem, the solution obtained is approximate. This paper instead solves the p-norm regularized SVM with 0 < p < 1 directly, from an optimization point of view, using a non-convex conjugate gradient algorithm. Three different SVM formulations are solved, and experiments on three typical cancer gene datasets demonstrate good classification performance.
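The abstract only outlines the approach. As a rough illustration of the kind of problem involved, the Python sketch below minimizes a smoothed surrogate of a p-norm regularized SVM objective with 0 < p < 1 using a Fletcher-Reeves non-linear conjugate gradient method with a backtracking line search. The squared hinge loss, the eps-smoothed |w_j|^p term, the restart safeguard, and all parameter values are illustrative assumptions; they are not taken from the paper, which solves three specific SVM formulations on cancer gene datasets.

```python
import numpy as np


def objective_and_grad(w, X, y, lam, p, eps=1e-8):
    """Smoothed surrogate of a p-norm regularized SVM objective (0 < p < 1):

        f(w) = sum_i max(0, 1 - y_i * x_i^T w)^2  +  lam * sum_j (w_j^2 + eps)^(p/2)

    The squared hinge loss and the eps-smoothed |w_j|^p term are assumptions
    made here so that the objective is differentiable; they do not reproduce
    the paper's three SVM formulations.
    """
    margins = 1.0 - y * (X @ w)
    active = np.maximum(margins, 0.0)          # only violated margins contribute
    loss = np.sum(active ** 2)
    reg = lam * np.sum((w ** 2 + eps) ** (p / 2.0))
    grad_loss = -2.0 * X.T @ (active * y)
    grad_reg = lam * p * w * (w ** 2 + eps) ** (p / 2.0 - 1.0)
    return loss + reg, grad_loss + grad_reg


def fletcher_reeves_cg(X, y, lam=0.5, p=0.5, max_iter=200, tol=1e-6):
    """Non-linear conjugate gradient (Fletcher-Reeves) with backtracking line search."""
    w = np.zeros(X.shape[1])
    f, g = objective_and_grad(w, X, y, lam, p)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search along the current direction d.
        step, c1 = 1.0, 1e-4
        while True:
            f_new, g_new = objective_and_grad(w + step * d, X, y, lam, p)
            if f_new <= f + c1 * step * (g @ d) or step < 1e-12:
                break
            step *= 0.5
        w = w + step * d
        # Fletcher-Reeves update; restart with steepest descent if the new
        # direction is not a descent direction (a standard safeguard).
        beta = (g_new @ g_new) / (g @ g)
        d = -g_new + beta * d
        if g_new @ d >= 0:
            d = -g_new
        f, g = f_new, g_new
    return w


if __name__ == "__main__":
    # Synthetic sparse classification problem, a stand-in for the gene datasets.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.5, 1.0]
    y = np.sign(X @ w_true + 0.1 * rng.normal(size=100))
    w_hat = fletcher_reeves_cg(X, y, lam=0.5, p=0.5)
    print("training accuracy:", np.mean(np.sign(X @ w_hat) == y))
```

The eps-smoothing is what makes the non-convex regularizer differentiable so that a conjugate gradient method can be applied at all; as eps approaches 0 the surrogate approaches sum_j |w_j|^p.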
Source: Journal of Guangxi Normal University (Natural Science Edition), 2013, No. 3, pp. 51-58 (8 pages). Indexed by CAS; listed in the Peking University Chinese Core Journals catalogue.
Funding: National Natural Science Foundation of China (21006127); Basic Subject Research Fund of China University of Petroleum, Beijing (JCXK-2011-07).
Keywords: p-norm; support vector machine; conjugate gradient algorithm
References (10 of 15 listed)

  • 1. BOSER B E, GUYON I M, VAPNIK V N. A training algorithm for optimal margin classifiers[C]//Proceedings of the Fifth Annual Workshop on Computational Learning Theory. New York: ACM Press, 1992: 144-152.
  • 2. CORTES C, VAPNIK V. Support-vector networks[J]. Machine Learning, 1995, 20: 273-297.
  • 3. WESTON J, ELISSEEFF A, SCHOLKOPF B, et al. Use of the zero-norm with linear models and kernel methods[J]. Journal of Machine Learning Research, 2003, 3: 1439-1461.
  • 4. ZHU Ji, HASTIE T, ROSSET S, et al. 1-norm support vector machines[C]//Neural Information Processing Systems. Cambridge, USA: MIT Press, 2004: 16.
  • 5. LIU Zhen-qiu, JIANG Feng, TIAN Guo-liang, et al. Sparse logistic regression with Lp penalty for biomarker identification[J]. Statistical Applications in Genetics and Molecular Biology, 2007, 6(1): 1-22.
  • 6. NG A Y. Feature selection, L1 vs. L2 regularization, and rotational invariance[C]//Proceedings of the 21st International Conference on Machine Learning. New York: ACM Press, 2004: 78.
  • 7. LIU Yu-feng, WU Yi-chao. Variable selection via a combination of the L0 and L1 penalties[J]. Journal of Computational and Graphical Statistics, 2007, 16(4): 782-798.
  • 8. LIU Yu-feng, ZHANG He-len, CHEOLWOO P, et al. Support vector machines with adaptive Lq penalties[J]. Computational Statistics and Data Analysis, 2007, 51(12): 6380-6394.
  • 9. FLETCHER R, REEVES C M. Function minimization by conjugate gradients[J]. The Computer Journal, 1964, 7(2): 149-154.
  • 10. WOLFE P. Convergence conditions for ascent methods[J]. SIAM Review, 1969, 11(2): 226-235.
