
Support Vector Machine Based on Neighbor Edge Detection (Cited by: 1)
Abstract: The standard support vector machine (SVM) must store, compute, and process the full kernel matrix, so its training efficiency is low and it cannot effectively handle large-scale data mining. To address this, an SVM method based on neighbor edge detection (ED_SVM) is proposed. The method introduces neighbor edge detection into the SVM training process: the data are first partitioned and the mixed-class subsets are selected; edge detection then extracts from them the samples that lie near the approximate optimal classification boundary and carry most of the support-vector information, forming a new small-scale training set that compresses the training data while preserving the distribution of the original support vectors. A standard SVM is then trained on this new training set, which improves learning efficiency while retaining good generalization performance. Experimental results show that the proposed ED_SVM method achieves high test accuracy and high learning efficiency simultaneously.
Source: Computer and Modernization (《计算机与现代化》), 2015, No. 3, pp. 15-19, 25 (6 pages)
Keywords: support vector machine; edge detection; support vector; generalization performance; learning efficiency; ED_SVM algorithm
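
The training pipeline outlined in the abstract can be illustrated with a minimal sketch using scikit-learn's KMeans, NearestNeighbors, and SVC. The page does not spell out the paper's exact partitioning and edge-detection rules, so k-means clustering and a k-nearest-neighbor "mixed neighborhood" test are assumed here as illustrative stand-ins; ed_svm_fit and its parameters are hypothetical names, not the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def ed_svm_fit(X, y, n_clusters=10, k=10):
    # Step 1: partition the data and keep only the mixed-class clusters,
    # i.e. clusters containing samples from more than one class.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    mixed = [c for c in range(n_clusters) if len(np.unique(y[labels == c])) > 1]
    mask = np.isin(labels, mixed)
    X_mix, y_mix = (X[mask], y[mask]) if mask.any() else (X, y)

    # Step 2: neighbor edge detection -- keep samples whose k nearest
    # neighbors include at least one point of a different class; these lie
    # near the approximate class boundary and carry most of the
    # support-vector information.
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X_mix).kneighbors(X_mix)
    edge = np.array([np.any(y_mix[idx[i, 1:]] != y_mix[i]) for i in range(len(X_mix))])
    X_edge, y_edge = X_mix[edge], y_mix[edge]
    if len(np.unique(y_edge)) < 2:      # fall back if the reduced set degenerates
        X_edge, y_edge = X_mix, y_mix

    # Step 3: train a standard SVM on the compressed training set.
    return SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_edge, y_edge)

# Example usage on synthetic data:
# X, y = sklearn.datasets.make_classification(n_samples=5000, n_features=20, random_state=0)
# model = ed_svm_fit(X, y)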