K-Fold Cross-Validation Based Selected Ensemble Classification Algorithm
Cited by: 54
Abstract: Traditional selective ensemble methods determine the weight parameters of individual classifiers inaccurately and with high computational complexity. To address this, a selective ensemble classification algorithm based on K-fold cross-validation is proposed. The algorithm first trains a fixed number of classifiers in the spirit of ensemble learning, then assigns each classifier an initial weight parameter and uses cross-validation to select, as the classifier's final weight, the parameter value corresponding to the maximum average classification accuracy. Finally, classifiers whose weights fall below a preset threshold are removed, completing the selective ensemble. Because cross-validation can determine the weight parameters quickly and accurately, the proposed algorithm effectively improves the classification performance of selective ensemble methods. Simulation experiments on UCI standard data sets demonstrate the algorithm's effectiveness.
Source: Bulletin of Science and Technology (《科技通报》, Peking University Core Journal), 2013, Issue 12, pp. 115-117 (3 pages)
Funding: National Natural Science Foundation of China (61100167); Natural Science Foundation of Jiangsu Province (BK2011204)
Keywords: selective ensemble; cross-validation; classifier; weight parameters
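
The record contains no code, but the procedure described in the abstract can be sketched as follows. This is a minimal Python illustration of the idea, not the authors' implementation: the choice of decision trees trained on bootstrap samples as base classifiers, the use of mean K-fold cross-validation accuracy as each classifier's weight, the weighted-vote combination, and the threshold value 0.90 are all assumptions made for demonstration.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

# Hypothetical settings -- the paper does not state its exact values.
N_CLASSIFIERS = 20   # size of the initial classifier pool
K_FOLDS = 5          # K for K-fold cross-validation
THRESHOLD = 0.90     # classifiers weighted below this are pruned

# A UCI data set (Breast Cancer Wisconsin), bundled with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
n_classes = len(np.unique(y))

# Step 1: train a pool of base classifiers on bootstrap samples.
pool = []
for i in range(N_CLASSIFIERS):
    Xb, yb = resample(X_train, y_train, random_state=i)
    pool.append(DecisionTreeClassifier(random_state=i).fit(Xb, yb))

# Step 2: weight each classifier by its mean K-fold cross-validation
# accuracy (one plausible reading of "the parameter value corresponding
# to the maximum average classification accuracy").
weights = np.array([cross_val_score(clf, X_train, y_train, cv=K_FOLDS).mean()
                    for clf in pool])

# Step 3: selective ensemble -- discard classifiers below the threshold.
selected = [(clf, w) for clf, w in zip(pool, weights) if w >= THRESHOLD]

# Step 4: weighted majority vote over the retained classifiers.
votes = np.zeros((len(X_test), n_classes))
for clf, w in selected:
    votes[np.arange(len(X_test)), clf.predict(X_test)] += w
y_pred = votes.argmax(axis=1)
print("kept %d of %d classifiers, test accuracy %.3f"
      % (len(selected), N_CLASSIFIERS, (y_pred == y_test).mean()))

In this reading, the weight of a classifier is simply its estimated cross-validated accuracy, and the pruning step discards any classifier whose estimate falls below the threshold, so the final weighted vote is taken only over the retained classifiers.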
