
An Ensemble Classification Method for Imbalanced Data Streams Based on Accumulated Positive Samples

Classifier Ensemble for Imbalanced Data Stream Classification Based on Accumulated Minorities
Abstract: Existing methods for handling imbalanced (skewed) data streams tend either to over-fit or to leave the data already seen underused. To address this, an ensemble classification method for imbalanced data streams, EAMIDS, is proposed based on accumulated positive samples. EAMIDS collects the positive samples from all data chunks received so far into a set AP, and then balances the class distribution of each chunk using the K-nearest-neighbors (KNN) algorithm together with an over-sampling technique. When the number of available base classifiers exceeds the fixed ensemble size, the ensemble is updated according to the F-Measure of its members. Experiments on the synthetic SEA and SPH datasets show that EAMIDS achieves higher prediction accuracy than the IDSL and SMOTE approaches.
Authors: 郭文锋 (Guo Wenfeng), 王勇 (Wang Yong)
Source: Computer and Modernization (《计算机与现代化》), 2015, No. 3, pp. 41-47 (7 pages)
Funding: Northwestern Polytechnical University Basic Research Fund (JC201273)
Keywords: imbalanced data streams; accumulated positive samples; ensemble classifiers; concept drift
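The abstract describes the EAMIDS procedure only at a high level. The Python sketch below illustrates one possible reading of it: positives are accumulated into AP, each chunk is balanced with KNN-based over-sampling, and the ensemble is pruned by F-Measure once it exceeds its maximum size. The class and function names, the choice of decision trees as base learners, and the SMOTE-like interpolation step are illustrative assumptions, not the authors' actual implementation.

# Illustrative sketch only: names, base learner, and the interpolation-style
# over-sampling are assumptions reconstructed from the abstract, not the
# authors' actual EAMIDS implementation.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

class EAMIDSSketch:
    def __init__(self, max_classifiers=10, k_neighbors=5, seed=0):
        self.max_classifiers = max_classifiers   # fixed ensemble size
        self.k = k_neighbors                     # K for the KNN step
        self.members = []                        # list of (model, f_measure) pairs
        self.ap = None                           # AP: accumulated positive samples
        self.rng = np.random.default_rng(seed)

    def _balance_chunk(self, X, y):
        """Accumulate this chunk's positives into AP, then over-sample the
        positive class with synthetic points interpolated between AP samples
        and their K nearest neighbors (SMOTE-like assumption)."""
        pos = X[y == 1]
        self.ap = pos if self.ap is None else np.vstack([self.ap, pos])
        n_needed = int((y == 0).sum() - (y == 1).sum())
        if n_needed <= 0 or len(self.ap) < 2:
            return X, y
        k = min(self.k, len(self.ap) - 1)
        nn = NearestNeighbors(n_neighbors=k + 1).fit(self.ap)
        _, idx = nn.kneighbors(self.ap)          # column 0 is the point itself
        base = self.rng.integers(0, len(self.ap), n_needed)
        neigh = self.ap[idx[base, self.rng.integers(1, k + 1, n_needed)]]
        gap = self.rng.random((n_needed, 1))
        synthetic = self.ap[base] + gap * (neigh - self.ap[base])
        X_bal = np.vstack([X, synthetic])
        y_bal = np.concatenate([y, np.ones(n_needed, dtype=int)])
        return X_bal, y_bal

    def process_chunk(self, X, y):
        """Train one base classifier on the balanced chunk; if the ensemble
        exceeds its maximum size, drop the member with the lowest F-Measure."""
        X_bal, y_bal = self._balance_chunk(X, y)
        model = DecisionTreeClassifier().fit(X_bal, y_bal)
        f = f1_score(y, model.predict(X), zero_division=0)
        self.members.append((model, f))
        if len(self.members) > self.max_classifiers:
            worst = min(range(len(self.members)), key=lambda i: self.members[i][1])
            self.members.pop(worst)

    def predict(self, X):
        """Unweighted majority vote over the retained base classifiers."""
        votes = np.mean([m.predict(X) for m, _ in self.members], axis=0)
        return (votes >= 0.5).astype(int)

A stream would then be consumed chunk by chunk, e.g. clf = EAMIDSSketch(); clf.process_chunk(X0, y0); clf.process_chunk(X1, y1); preds = clf.predict(X_test). Whether the paper weights members by F-Measure at prediction time or, as here, uses them only for pruning is not stated in the abstract.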