
Non-sparse multiple kernel learning method based on Boosting framework (cited by: 2)
Abstract: At each iteration, traditional classifier-ensemble methods integrate only the single best individual classifier into the strong classifier, while other individual classifiers that might contribute useful auxiliary information are simply discarded. To address this problem, this paper proposes MKL-Boost, a non-sparse multiple kernel learning method based on the Boosting framework. Following the idea of ensemble learning, each iteration first selects a training subset from the training set, and then trains an optimal individual classifier with a regularized non-sparse multiple kernel learning method. The resulting classifier is built on the optimal non-sparse linear convex combination of M basic kernels: imposing an Lp-norm constraint on the kernel combination coefficients retains the good kernels, and with them more of the useful feature information, while discarding the bad ones, which yields a selective kernel fusion. The optimal individual classifier based on this kernel combination is then integrated into the strong classifier. The proposed algorithm enjoys both the advantages of Boosting-style ensemble learning and those of regularized non-sparse multiple kernel learning. Experiments show that, compared with other Boosting algorithms, MKL-Boost achieves higher classification accuracy within fewer iterations.
Source: Application Research of Computers (《计算机应用研究》; CSCD, Peking University core journal), 2016, No. 11: 3219-3222, 3227 (5 pages in total).
Funding: National Natural Science Foundation of China (11301106); Natural Science Foundation of Guangxi (2014GXNSFAA1183105, 2016GXNSFAA380226); Scientific Research Project of Guangxi Universities (ZD2014147, YB2014431).
Keywords: ensemble learning; non-sparse multiple kernel learning; weak classifier; basic kernel
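
To make the iteration scheme in the abstract concrete, the following is a minimal NumPy sketch of a Boosting loop over an Lp-norm-weighted combination of basic kernels. It is an illustration under simplifying assumptions rather than the paper's implementation: the names rbf_kernel, mkl_boost_sketch, predict and the parameters gammas, T, p, lam are all hypothetical, the alignment-based kernel weighting stands in for the regularized non-sparse Lp-norm MKL solver the paper trains at each iteration, and kernel ridge regression plays the role of the optimal individual classifier.

import numpy as np

def rbf_kernel(X, Z, gamma):
    # Gaussian RBF kernel matrix between the rows of X and the rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def mkl_boost_sketch(X, y, gammas=(0.1, 1.0, 10.0), T=10, p=2.0, lam=1e-2, seed=0):
    # X: (n, d) features; y: (n,) labels in {-1, +1}. Returns the ensemble as
    # a list of (classifier weight, kernel weights, dual coefficients, subset).
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    w = np.full(n, 1.0 / n)                        # Boosting distribution over examples
    ensemble = []
    for _ in range(T):
        idx = rng.choice(n, size=n, p=w)           # weighted training subset
        Xs, ys = X[idx], y[idx]
        # Score each of the M basic kernels by its alignment with the labels,
        # then normalize the scores to a non-sparse combination with ||beta||_p = 1.
        Ks = [rbf_kernel(Xs, Xs, g) for g in gammas]
        yy = np.outer(ys, ys)
        scores = np.array([max((K * yy).sum(), 1e-12) for K in Ks])
        beta = scores / np.linalg.norm(scores, ord=p)
        K = sum(b * Km for b, Km in zip(beta, Ks))
        # Kernel ridge regression on the labels is the stand-in weak learner.
        coef = np.linalg.solve(K + lam * np.eye(len(ys)), ys)
        Kx = sum(b * rbf_kernel(X, Xs, g) for b, g in zip(beta, gammas))
        h = np.sign(Kx @ coef)                     # weak hypothesis on the full set
        err = np.clip(w[h != y].sum(), 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)      # AdaBoost-style classifier weight
        ensemble.append((alpha, beta, coef, Xs))
        w *= np.exp(-alpha * y * h)                # up-weight misclassified examples
        w /= w.sum()
    return ensemble

def predict(ensemble, X, gammas=(0.1, 1.0, 10.0)):
    # Sign of the alpha-weighted sum of the individual kernel classifiers;
    # gammas must match the ones used during training.
    F = np.zeros(X.shape[0])
    for alpha, beta, coef, Xs in ensemble:
        Kx = sum(b * rbf_kernel(X, Xs, g) for b, g in zip(beta, gammas))
        F += alpha * (Kx @ coef)
    return np.sign(F)

With p > 1 every combination weight stays strictly positive, which mirrors the non-sparse, selective kernel fusion the abstract describes; letting p approach 1 would push the combination toward sparsity and discard more kernels.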

References (16)

  • 1 Bennett K P, Momma M, Embrechts M J. MARK: a Boosting algorithm for heterogeneous kernel models [C]//Proc of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM Press, 2002: 24-31.
  • 2 Cao Ying, Miao Qiguang, Liu Jiachen, Gao Lin. Advances and prospects of the AdaBoost algorithm [J]. Acta Automatica Sinica, 2013, 39(6): 745-758. (in Chinese; cited: 267)
  • 3 Schapire R E. The strength of weak learnability [J]. Machine Learning, 1990, 5(2): 197-227.
  • 4 Freund Y, Schapire R E. A decision-theoretic generalization of on-line learning and an application to Boosting [J]. Journal of Computer and System Sciences, 1997, 55(1): 119-139.
  • 5 Freund Y, Schapire R E. Experiments with a new Boosting algorithm [C]//Proc of the 13th International Conference on Machine Learning. San Francisco: Morgan Kaufmann, 1996: 148-156.
  • 6 Margineantu D D, Dietterich T G. Pruning adaptive Boosting [C]//Proc of the 14th International Conference on Machine Learning. San Francisco: Morgan Kaufmann, 1997: 211-218.
  • 7 Gao Xinbo, Zhong Juanjuan, Li Jie, et al. Face sketch synthesis algorithm based on E-HMM and selective ensemble [J]. IEEE Trans on Circuits and Systems for Video Technology, 2008, 18(4): 487-496.
  • 8 Schapire R E, Singer Y. BoosTexter: a Boosting-based system for text categorization [J]. Machine Learning, 2000, 39(2-3): 135-168.
  • 9 Tao Dacheng, Tang Xiaoou, Li Xuelong, et al. Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval [J]. IEEE Trans on Pattern Analysis and Machine Intelligence, 2006, 28(7): 1088-1099.
  • 10 Sharkey A J C, Sharkey N E, Cross S S. Adapting an ensemble approach for the diagnosis of breast cancer [C]//Proc of Conference on Artificial Neural Networks. 1998: 281-286.

Secondary references (38)

  • 1 Chapelle O, Schölkopf B, Zien A. Semi-Supervised Learning. London: MIT Press, 2006.
  • 2 Joachims T. Transductive inference for text classification using support vector machines. In: Bratko I, Dzeroski S, eds. Proc. of the 16th Int'l Conf. on Machine Learning (ICML'99). Morgan Kaufmann Publishers, 1999. 200-209. http://www.informatik.uni-trier.de/~ley/db/conf/icml/icml1999.html
  • 3 Chapelle O, Sindhwani V, Keerthi S S. Optimization techniques for semi-supervised support vector machines. Journal of Machine Learning Research, 2008, 9: 203-233.
  • 4 Chapelle O, Zien A. Semi-Supervised classification by low density separation. In: Cowell R, Ghahramani Z, eds. Proc. of the 10th Int'l Workshop on Artificial Intelligence and Statistics. Society for Artificial Intelligence and Statistics, 2005. 57-64. http://eprints.pascal-network.org/archive/00001144/01/aistats2005.pdf
  • 5 Collobert R, Sinz F, Weston J, Bottou L. Large scale transductive SVMs. Journal of Machine Learning Research, 2006, 7(8): 1687-1712.
  • 6 Sindhwani V, Keerthi S, Chapelle O. Deterministic annealing for semi-supervised kernel machines. In: Cohen W W, Moore A, eds. Proc. of the 23rd Int'l Conf. on Machine Learning. ACM, 2006. 841-848. http://dblp.uni-trier.de/rec/bibtex/conf/icml/SindhwaniKC06 [doi: 10.1145/1143844.1143950]
  • 7 Chapelle O, Chi M, Zien A. A continuation method for semi-supervised SVMs. In: Cohen W W, Moore A, eds. Proc. of the 23rd Int'l Conf. on Machine Learning. ACM, 2006. 185-192. http://dblp.uni-trier.de/rec/bibtex/conf/icml/ChapelleCZ06 [doi: 10.1145/1143844.1143868]
  • 8 De Bie T, Cristianini N. Semi-Supervised learning using semi-definite programming. In: Chapelle O, Schölkopf B, Zien A, eds. Semi-Supervised Learning. MIT Press, 2006.
  • 9 Chapelle O, Sindhwani V, Keerthi S. Branch and bound for semi-supervised support vector machine. In: Schölkopf B, Platt J, Hoffman T, eds. Proc. of the 20th Annual Conf. on Neural Information Processing Systems. Cambridge: MIT Press, 2006. 217-224. http://nips.cc/Conferences/2006/Committees/
  • 10 Wang H Q, Sun F C, Cai Y N, Chen N, Ding L G. On multiple kernel learning methods. Acta Automatica Sinica, 2010, 36(8): 1037-1050 (in Chinese with English abstract). [doi: 10.3724/SP.J.1004.2010.01037]

Citation statistics: co-citing documents: 273; co-cited documents: 46; citing documents: 2; secondary citing documents: 10.
