
Logistic Model Classification Algorithm via L1+L2 Regularization (cited by: 4)
Abstract: This paper proposes an L1+L2 norm regularized logistic model classification algorithm. L2 norm regularization is introduced to resolve the singularity that arises in the iterative process of the L1-regularized logistic algorithm. The non-smooth L1 norm problem is transformed into a smooth one by augmenting the sample vectors and introducing a new weight vector, and the resulting optimization problem is solved with the conjugate gradient method. Experiments on a variety of real datasets show that the algorithm outperforms the L2 norm, L1 norm, and Lp norm regularized logistic models, with good feature selection and classification performance.
Source: Computer Engineering (《计算机工程》), CAS, CSCD, 2012, Issue 13, pp. 148-151 (4 pages)
Funding: Basic Science Research Fund of China University of Petroleum (Beijing)
Keywords: L1 norm; L2 norm; conjugate gradient; feature selection; regularization; logistic model
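The objective described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the non-smooth L1 term is made smooth by the standard non-negative split w = p - q with p, q >= 0 (analogous in spirit to the paper's sample-augmentation transformation), and the resulting bound-constrained smooth problem is solved here with L-BFGS-B rather than the paper's conjugate gradient method. All function and parameter names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit


def fit_elastic_net_logistic(X, y, lam1=0.1, lam2=0.1):
    """Minimize sum_i log(1 + exp(-y_i * x_i.w)) + lam1*||w||_1 + lam2/2*||w||_2^2.

    X: (n, d) feature matrix; y: (n,) labels in {-1, +1}.
    The L1 term is smoothed via the split w = p - q, p, q >= 0,
    so ||w||_1 becomes sum(p + q) at the optimum.
    """
    n, d = X.shape

    def objective(z):
        p, q = z[:d], z[d:]
        w = p - q
        margins = y * (X @ w)
        # log(1 + exp(-m)) computed stably via logaddexp
        loss = np.sum(np.logaddexp(0.0, -margins))
        reg = lam1 * np.sum(p + q) + 0.5 * lam2 * np.dot(w, w)
        return loss + reg

    def grad(z):
        p, q = z[:d], z[d:]
        w = p - q
        margins = y * (X @ w)
        sig = expit(-margins)  # = -d(loss_i)/d(margin_i)
        g_w = -(X.T @ (y * sig)) + lam2 * w  # gradient w.r.t. w
        # chain rule through w = p - q, plus the linear L1 term
        return np.concatenate([g_w + lam1, -g_w + lam1])

    z0 = np.zeros(2 * d)
    res = minimize(objective, z0, jac=grad, method="L-BFGS-B",
                   bounds=[(0.0, None)] * (2 * d))
    p, q = res.x[:d], res.x[d:]
    return p - q
```

Because the transformed objective is smooth with simple non-negativity bounds, any bound-aware first-order solver applies; the split also makes the sparsity induced by the L1 term explicit, since inactive coordinates are driven to p = q = 0.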

References (8)

  • 1. Zhu J, Hastie T. Classification of Gene Microarrays by Penalized Logistic Regression[J]. Biostatistics, 2004, 5(3): 427-443.
  • 2. Kong Kang, Wang Qunshan, Liang Wanlu. Analysis of Solving L1-regularized Machine Learning Problems[J]. Computer Engineering, 2011, 37(17): 175-177.
  • 3. Ng A. On Feature Selection: Learning with Exponentially Many Irrelevant Features as Training Examples[C]//Proc. of International Conference on Machine Learning. San Francisco, USA: Morgan Kaufmann Publishers Inc., 1998.
  • 4. Ng A. Feature Selection, L1 vs. L2 Regularization, and Rotational Invariance[C]//Proc. of International Conference on Machine Learning. New York, USA: ACM Press, 2004.
  • 5. Liu Zhenqiu, Jiang Feng, Tian Guoliang, et al. Sparse Logistic Regression with Lp Penalty for Biomarker Identification[J]. Statistical Applications in Genetics and Molecular Biology, 2007, 6(1): 2-12.
  • 6. Tibshirani R. Regression Shrinkage and Selection via the Lasso[J]. Journal of the Royal Statistical Society, 1996, 58(1): 267-288.
  • 7. Yuan Yaxiang, Sun Wenyu. Optimization Theory and Methods[M]. Beijing: Science Press, 1999.
  • 8. Koh K, Kim S, Boyd S. An Interior-point Method for Large-scale l1-regularized Logistic Regression[J]. Journal of Machine Learning Research, 2007, 8: 1519-1555.


