Coordinate Descent Algorithms for Large-Scale SVDD

Cited by: 1

Abstract: Support vector data description (SVDD) is an unsupervised learning method with important applications in areas such as image recognition and information security. Coordinate descent is an effective approach to large-scale classification problems, with a simple procedure and a fast convergence rate. This paper presents an efficient dual coordinate descent algorithm for large-scale SVDD. The sub-problem solved at each iteration admits a closed-form solution, and the computational cost is further reduced by an acceleration strategy and cheap incremental computations. Three strategies for selecting the sub-problem are given, and their advantages and disadvantages are analyzed and compared. Experiments on synthetic and real large-scale datasets validate the proposed algorithm. Compared with LibSVDD, the proposed method shows a clear advantage, solving the ijcnn database with 10^5 training examples in less than 1.4 seconds.
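
The record contains no code, but the algorithmic idea summarized in the abstract (dual coordinate descent with closed-form sub-problems and cheap incremental gradient updates) can be illustrated concretely. The Python sketch below is only a minimal illustration of SMO-style two-coordinate descent on the standard SVDD dual; it is not the paper's exact algorithm, its three selection strategies, or the LibSVDD implementation. The function name svdd_dual_cd, the Gaussian kernel, the maximal-violating-pair selection rule, and the precomputed kernel matrix are all simplifying assumptions made here.

    import numpy as np

    def svdd_dual_cd(X, C=0.1, gamma=1.0, tol=1e-4, max_iter=100000):
        """Illustrative two-coordinate (SMO-style) descent on the SVDD dual:
            min_a  a^T K a - diag(K)^T a
            s.t.   0 <= a_i <= C,  sum_i a_i = 1,
        where each pair sub-problem is solved in closed form."""
        n = X.shape[0]
        if C < 1.0 / n:
            raise ValueError("C must be at least 1/n for the dual to be feasible.")

        # Gaussian kernel matrix, precomputed for clarity only; a large-scale
        # solver would compute kernel rows on demand and cache them instead.
        sq = np.sum(X ** 2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

        a = np.full(n, 1.0 / n)                 # feasible starting point
        g = 2.0 * K @ a - np.diag(K)            # gradient of the dual objective

        for _ in range(max_iter):
            # Maximal-violating-pair selection: move weight from the coordinate
            # with the largest gradient (and a_j > 0) to the coordinate with
            # the smallest gradient (and a_i < C).
            up = a < C - 1e-12
            dn = a > 1e-12
            if not up.any() or not dn.any():
                break
            i = np.where(up)[0][np.argmin(g[up])]
            j = np.where(dn)[0][np.argmax(g[dn])]
            if g[j] - g[i] < tol:               # KKT conditions nearly satisfied
                break

            # Closed-form solution of the pair sub-problem along the direction
            # a_i <- a_i + t, a_j <- a_j - t, which preserves sum(a) = 1.
            denom = K[i, i] - 2.0 * K[i, j] + K[j, j]
            t = (g[j] - g[i]) / (2.0 * max(denom, 1e-12))
            t = min(t, C - a[i], a[j])          # clip to the box constraints
            a[i] += t
            a[j] -= t
            g += 2.0 * t * (K[:, i] - K[:, j])  # cheap incremental gradient update

        # Squared radius, averaged over support vectors strictly inside the box.
        sv = np.where((a > 1e-8) & (a < C - 1e-8))[0]
        if sv.size == 0:
            sv = np.where(a > 1e-8)[0]
        r2 = float(np.mean(np.diag(K)[sv] - 2.0 * K[sv] @ a + a @ K @ a))
        return a, r2

As a quick sanity check on synthetic data, a call such as a, r2 = svdd_dual_cd(np.random.randn(500, 2), C=0.05, gamma=0.5) returns the dual weights and the squared radius of the enclosing ball in feature space.
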
Source: Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》), 2012, No. 6, pp. 950-957 (8 pages). Indexed by EI, CSCD, and the Peking University Core Journal list.
Funding: Supported by the National Natural Science Foundation of China (No. 61273296, No. 60975040).
Keywords: Support Vector Data Description (SVDD), Convergence Rate, Coordinate Descent, Closed-Form Solution