
Conjugate Gradients Support Vector Machine (共轭梯度型支撑向量机)

Cited by: 1
Abstract: Support vector machines can be posed as quadratic programming problems in various ways. For the linear case, starting from one such reformulation and applying the Lagrangian dual, the high-dimensional quadratic program in feature space is converted into a low-dimensional, unconstrained, differentiable, convex dual program in input space. The resulting problem minimizes a differentiable, convex, piecewise quadratic function. Exploiting this piecewise quadratic structure together with a fast, exact one-dimensional line search, a Conjugate Gradients Support Vector Machine (CGSVM) is proposed to solve the unconstrained problem quickly. Once the kernel matrix is factorized by a Cholesky or incomplete Cholesky factorization, nonlinear classification with kernel functions can also be handled by CGSVM with little increase in algorithmic complexity. On an ordinary PC, CGSVM can solve linear classification problems with millions of points and nonlinear classification problems with three thousand points or more. Extensive numerical experiments and complexity analysis show that the proposed algorithm is efficient compared with similar algorithms such as ASVM and LSVM.
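The paper's exact dual formulation is not reproduced on this page, so the following sketch is illustrative only. It minimizes a representative convex piecewise quadratic SVM objective, f(z) = (1/2)||z||^2 + (nu/2)||(e - DAz)_+||^2 with D = diag(y), by Polak-Ribiere conjugate gradients, and it uses a one-dimensional Newton iteration on the piecewise-linear derivative as a stand-in for the fast exact line search the abstract describes. The names f_and_grad, exact_line_search, and cg_svm are invented here, not taken from the paper:

    import numpy as np

    def f_and_grad(z, A, y, nu):
        # Representative piecewise-quadratic objective (an assumption standing
        # in for the paper's dual): 0.5*||z||^2 + 0.5*nu*||max(0, 1 - y*(A@z))||^2.
        r = 1.0 - y * (A @ z)            # margin residuals
        rp = np.maximum(r, 0.0)          # plus function (.)_+
        val = 0.5 * z @ z + 0.5 * nu * rp @ rp
        grad = z - nu * (A.T @ (y * rp))
        return val, grad

    def exact_line_search(z, p, A, y, nu, iters=50):
        # Along direction p, phi(t) = f(z + t*p) is piecewise quadratic with a
        # nondecreasing piecewise-linear derivative, so Newton steps on phi'
        # terminate finitely; this mimics a fast exact one-dimensional search.
        s = y * (A @ p)
        r0 = 1.0 - y * (A @ z)
        t = 0.0
        for _ in range(iters):
            act = (r0 - t * s) > 0.0                             # active pieces
            g = (z + t * p) @ p - nu * np.dot(r0[act] - t * s[act], s[act])
            h = p @ p + nu * np.dot(s[act], s[act])              # phi'' on piece
            t_new = t - g / h
            if abs(t_new - t) <= 1e-14 * (1.0 + abs(t)):
                break
            t = t_new
        return t

    def cg_svm(A, y, nu=1.0, tol=1e-6, max_iter=500):
        # Polak-Ribiere(+) conjugate gradients with the exact line search above.
        # A is the m x n data matrix (append a constant column to absorb the
        # bias), y holds labels in {-1, +1}; classify new points by sign(X @ z).
        z = np.zeros(A.shape[1])
        _, g = f_and_grad(z, A, y, nu)
        p = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            t = exact_line_search(z, p, A, y, nu)
            z = z + t * p
            g_new = f_and_grad(z, A, y, nu)[1]
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))       # PR+ rule
            p = -g_new + beta * p
            g = g_new
        return z

For linear problems each iteration costs two matrix-vector products with A, which is the kind of per-iteration cost that makes million-point training plausible on commodity hardware.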
Source: Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》; EI, CSCD, Peking University Core Journal), 2006, No. 2: 129-136 (8 pages)
Fund: Tenth Five-Year Plan national ministerial science and technology (electronics) pre-research project (No. 413160501)
Keywords: Support Vector Machine, Conjugate Gradient Algorithms, Lagrangian Dual, Kernel Function
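For the nonlinear case, the abstract factorizes the kernel matrix and reuses the linear solver. The paper cites Lin and Saigal's incomplete Cholesky factorization (reference 8 below); the sketch here is one standard greedy pivoted variant, assumed for illustration rather than taken from the paper, and pivoted_ichol is an invented name:

    import numpy as np

    def pivoted_ichol(K, tol=1e-6, max_rank=None):
        # Greedy pivoted incomplete Cholesky: builds a tall thin L with
        # K ~= L @ L.T, stopping once the largest residual diagonal entry
        # drops below tol or the rank cap is reached.
        m = K.shape[0]
        max_rank = max_rank or m
        d = np.diag(K).astype(float).copy()   # residual diagonal of K - L L^T
        L = np.zeros((m, max_rank))
        for k in range(max_rank):
            i = int(np.argmax(d))             # pivot on largest residual
            if d[i] <= tol:
                return L[:, :k]
            L[:, k] = (K[:, i] - L[:, :k] @ L[i, :k]) / np.sqrt(d[i])
            d = d - L[:, k] ** 2
        return L

With F = pivoted_ichol(K), the rows of F serve as explicit finite-dimensional features, so the linear routine above applies unchanged: z = cg_svm(F, y). The factorization costs on the order of m times the squared rank, which is one way to read the abstract's claim that kernelization adds little to the algorithm's complexity.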

References (24)

  • 1 Vapnik V N. The Nature of Statistical Learning Theory. Berlin, Germany: Springer-Verlag, 1995
  • 2 Vapnik V N. An Overview of Statistical Learning Theory. IEEE Trans on Neural Networks, 1999, 10(5): 988-999
  • 3 Mangasarian O L. Generalized Support Vector Machines. In: Smola A, Bartlett P, Scholkopf B, Schuurmans D, eds. Advances in Large Margin Classifiers. Cambridge, USA: MIT Press, 2000, 135-146
  • 4 Lee Y J, Mangasarian O L. SSVM: A Smooth Support Vector Machine. Computational Optimization and Applications, 2001, 20(1): 5-22
  • 5 Joachims T. Making Large-Scale SVM Learning Practical. In: Scholkopf B, et al, eds. Advances in Kernel Methods: Support Vector Learning. Cambridge, USA: MIT Press, 1999, 169-184
  • 6 Mangasarian O L, Musicant D R. Active Set Support Vector Machine Classification. In: Leen T K, Dietterich T G, Tresp V, eds. Neural Information Processing Systems. Cambridge, USA: MIT Press, 2001, 577-583
  • 7 Mangasarian O L, Musicant D R. Lagrangian Support Vector Machines. Journal of Machine Learning Research, 2001, 1(3): 161-177
  • 8 Lin C J, Saigal R. An Incomplete Cholesky Factorization for Dense Matrices. BIT, 2000, 40(3): 536-558
  • 9 Zhang Y. Solving Large-Scale Linear Programs by Interior-Point Methods under the MATLAB Environment. Optimization Methods and Software, 1998, 10(1): 1-31
  • 10 Joachims T. SVM^light. 1998. http://svmlight.joachims.org/
