Speeding K-NN Classification Method Based on Testing Sample Label
Abstract: To address the low prediction efficiency of the traditional K-NN classification method, this paper proposes a speeding K-NN classification method based on testing sample labels (KNN_TSL). First, a certain number of testing samples are classified with the traditional K-NN method. Then, for each subsequently arriving testing sample, its distance to the already-labeled testing samples is computed; if this distance is smaller than a given threshold, the new sample is assigned the same class label, otherwise it is classified from scratch. In this way, most of the later, easily classified testing samples can be decided simply by computing their distances to a small set of labeled testing samples that are more representative than the original labeled samples, and only a small number of samples need to be reclassified. Because the labeled testing samples already carry class information, the method greatly improves prediction efficiency while preserving the generalization performance of the model. Experimental results show that the proposed KNN_TSL method achieves high prediction speed together with good prediction accuracy.
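The prediction procedure described in the abstract (classify the first arrivals with plain K-NN, then reuse the label of a nearby already-labeled testing sample when the distance falls below a threshold, falling back to K-NN otherwise) can be summarized in a short sketch. The class name, the n_init warm-up count, and the fixed Euclidean threshold below are illustrative assumptions, not the authors' published implementation.

    import numpy as np
    from collections import Counter

    class KNNTSL:
        """Minimal sketch of the KNN_TSL idea; parameter names are assumptions."""

        def __init__(self, k=5, threshold=0.5, n_init=50):
            self.k = k                  # neighbors for the fallback K-NN decision
            self.threshold = threshold  # distance bound for reusing a test label (assumed fixed)
            self.n_init = n_init        # first n_init test samples use plain K-NN (assumed)
            self.labeled_test_X, self.labeled_test_y = [], []

        def fit(self, X, y):
            self.train_X = np.asarray(X, dtype=float)
            self.train_y = np.asarray(y)
            return self

        def _knn_label(self, x):
            # Traditional K-NN majority vote over the original training set.
            d = np.linalg.norm(self.train_X - x, axis=1)
            nearest = np.argsort(d)[: self.k]
            return Counter(self.train_y[nearest]).most_common(1)[0][0]

        def predict_one(self, x):
            x = np.asarray(x, dtype=float)
            if len(self.labeled_test_X) >= self.n_init:
                # Compare only with the already-labeled testing samples.
                ref = np.asarray(self.labeled_test_X)
                d = np.linalg.norm(ref - x, axis=1)
                i = int(np.argmin(d))
                if d[i] < self.threshold:
                    # Close enough to a labeled testing sample: reuse its label.
                    label = self.labeled_test_y[i]
                    self.labeled_test_X.append(x)
                    self.labeled_test_y.append(label)
                    return label
            # Otherwise classify from scratch with traditional K-NN.
            label = self._knn_label(x)
            self.labeled_test_X.append(x)
            self.labeled_test_y.append(label)
            return label

Under this reading, the per-sample cost for most later arrivals drops to distances against the small labeled-test pool instead of the full training set, which is the efficiency gain the abstract claims.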
Authors: 王晓, 赵丽
Source: Computer and Modernization (《计算机与现代化》), 2017, No. 9, pp. 102-105 (4 pages)
Keywords: K-NN classification; testing sample label; KNN_TSL algorithm