

Sparse Approximation to a Key Algorithm of Learning Theory
Abstract: The key algorithm (KA) of learning theory recently presented by Poggio and Smale is capable of both nonlinear classification and regression and avoids hard quadratic programming, but it suffers from the fact that nearly all the training samples become "support vectors". To impose sparsity on KA, a sparse KA algorithm (SKA) is put forward: by designing a specific optimization objective, SKA effectively prunes "support vectors" while retaining good generalization capacity. SKA is applied to two real pattern-recognition problems (UCI datasets) and compared with the support vector machine (SVM), demonstrating its effectiveness.
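The KA referred to above is, in Poggio and Smale's formulation, regularized least squares in a reproducing kernel Hilbert space: given samples (x_i, y_i), i = 1..n, solve the linear system (nγI + K)c = y for the coefficient vector c, where K is the kernel Gram matrix, and predict with f(x) = Σ_i c_i K(x_i, x). A minimal NumPy sketch (Gaussian kernel; the function names and parameter values are illustrative, not from the paper):

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Gram matrix of the Gaussian kernel between row-sample sets X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def ka_fit(X, y, gamma=1e-3, sigma=1.0):
    """KA training step: solve (n*gamma*I + K) c = y for the coefficients c."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(n * gamma * np.eye(n) + K, y)

def ka_predict(X_train, c, X_new, sigma=1.0):
    """Evaluate f(x) = sum_i c_i K(x_i, x) at the rows of X_new."""
    return gaussian_kernel(X_new, X_train, sigma) @ c

# Toy regression example with hypothetical data.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 0.0, 1.0])
c = ka_fit(X, y, gamma=1e-4)
pred = ka_predict(X, c, X)
```

Because the system is dense, the solved c generically has no zero entries, so every training sample contributes to f — exactly the lack of sparsity that SKA is designed to remedy.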
Source: Journal of East China University of Science and Technology (Natural Science Edition) (EI, CAS, CSCD, Peking University core), 2004, No. 6, pp. 688-693.
Funding: National Key Basic Research and Development Program of China (2002CB312200); National Natural Science Foundation of China (69974014).
Keywords: key algorithm; sparse approximation; support vector machines; regularization; squared loss function

References

  1. Poggio T, Smale S. The mathematics of learning: Dealing with data [J]. Notices of the American Mathematical Society, 2003, 50(5): 537-544.
  2. Suykens J A K, Van Gestel T, De Brabanter J, et al. Least Squares Support Vector Machines [M]. Singapore: World Scientific, 2002.
  3. Fung G, Mangasarian O L. Proximal support vector machine classifiers [A]. In: KDD 2001: Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining [C]. New York: ACM Press, 2001: 77-86.
  4. Cucker F, Smale S. On the mathematical foundations of learning [J]. Bulletin of the American Mathematical Society, 2001, 39: 1-49.
  5. Cucker F, Smale S. Best choices for regularization parameters in learning theory: On the bias-variance problem [J]. Foundations of Computational Mathematics, 2002, 2(4): 413-428.
  6. Evgeniou T, Pontil M, Poggio T. Regularization networks and support vector machines [J]. Advances in Computational Mathematics, 2000, 13: 1-50.
  7. Wahba G. An introduction to reproducing kernel Hilbert spaces and why they are so useful [EB/OL]. Technical Report, SYSID 2003, Rotterdam. http://www.stat.wisc.edu/~wahba/talks1/rotter.03/sysid1.pdf
  8. Suykens J A K, De Brabanter J, Lukas L, et al. Weighted least squares support vector machines: Robustness and sparse approximation [J]. Neurocomputing, 2002, 48: 85-105.
  9. Cawley G C, Talbot N L C. Improved sparse least squares support vector machines [J]. Neurocomputing, 2002, 48: 1025-1031.
  10. Mangasarian O L, Wolberg W H. Cancer diagnosis via linear programming [J]. SIAM News, 1990, 23(5): 1-18.
