
Data mining model of decision forest based on generalized information theory
Abstract: For multiple classifier integration in pattern recognition, a decision forest, rather than a single decision tree, is built to realize model integration: the relevance among the predictive attributes of each test sample is mined and, combined with a conditional independence analysis of the training set, a distinct classification rule is assigned to each sample. The structure and the number of the decision trees are determined adaptively during the learning process, which lets the ensemble model exploit its full scalability and extensibility. Experiments on UCI machine learning data sets verify the effectiveness of the proposed method.
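The abstract describes the method only at a high level, so the following Python sketch illustrates the general idea rather than the paper's actual algorithm: attributes are scored by mutual information with the class, attributes that appear conditionally independent of the class given the strongest attribute are dropped via a one-step conditional-mutual-information test, and each surviving attribute contributes one stump-like rule per test sample, so the number of voting rules adapts to the sample. The function names, the one-step CI test, and the 0.01-nat threshold are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of a per-sample decision-forest classifier.
# Not the paper's algorithm; all thresholds and names are illustrative.
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """I(X;Y) in nats for two discrete 1-D arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

def conditional_mutual_information(x, y, z):
    """I(X;Y|Z): relevance of X to the class Y once Z is known."""
    return sum(np.mean(z == zv) * mutual_information(x[z == zv], y[z == zv])
               for zv in np.unique(z))

def classify_one(X, y, x_test, ci_threshold=0.01):
    """Classify one test sample with a per-sample 'forest' of one-attribute rules.

    The anchor attribute maximizes mutual information with the class; any
    other attribute whose conditional mutual information given the anchor
    falls below ci_threshold is treated as conditionally independent of the
    class and dropped, so the number of voting rules adapts per sample.
    """
    relevance = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
    anchor = int(np.argmax(relevance))
    selected = [anchor] + [
        j for j in range(X.shape[1])
        if j != anchor
        and conditional_mutual_information(X[:, j], y, X[:, anchor]) > ci_threshold
    ]
    votes = []
    for j in selected:  # each retained attribute contributes one stump-like rule
        mask = X[:, j] == x_test[j]
        if mask.any():  # majority class among training samples sharing this value
            votes.append(Counter(y[mask]).most_common(1)[0][0])
    fallback = Counter(y).most_common(1)[0][0]
    return Counter(votes).most_common(1)[0][0] if votes else fallback

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy discrete data: attribute 0 determines the class, attribute 1 merely
    # echoes attribute 0 (redundant), attribute 2 is pure noise.
    a0 = rng.integers(0, 2, 200)
    X = np.column_stack([a0, a0, rng.integers(0, 2, 200)])
    y = a0.copy()
    # Attributes 1 and 2 fail the CI test given attribute 0, so one rule votes.
    print(classify_one(X, y, np.array([1, 1, 0])))  # expected output: 1
```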
Source: Journal of Jilin University (Engineering and Technology Edition), 2010, No. 1: 155-158 (4 pages). Indexed in EI, CAS, CSCD; Peking University core journal.
Funding: National Natural Science Foundation of China (60275026, 60803055)
Keywords: artificial intelligence; pattern recognition; decision forest; conditional independence assumption; data mining model

References (8)

  • 1 Freund Y, Schapire R E. A decision-theoretic generalization of on-line learning and an application to boosting[J]. Journal of Computer and System Sciences, 1997, 55(1): 119-139.
  • 2 Ratsch G, Warmuth M K. Marginal boosting[R]. NeuroCOLT2 Technical Report 97. London: Royal Holloway College, 2001: 287-310.
  • 3 Ratsch G, Onoda T, Muller K R. Soft margins for AdaBoost[J]. Machine Learning, 2001, 42(3): 287-320.
  • 4 Breiman L. Bagging predictors[J]. Machine Learning, 1996, 24(2): 123-140.
  • 5 Wolpert D. Stacked generalization[J]. Neural Networks, 1992, 5(2): 241-259.
  • 6 Ting K M, Witten I H. Issues in stacked generalization[J]. Journal of Artificial Intelligence Research, 1999, 10: 271-289.
  • 7 Wang L M, Yuan S M. Induction of hybrid decision tree based on post-discretization strategy[J]. Progress in Natural Science, 2004, 14(6): 541-545.
  • 8 Wang L M, Li X L, Cao C H, et al. Combining decision tree and naive Bayes for classification[J]. Knowledge-Based Systems, 2006, 19(7): 511-515.

Secondary references (11)

  • 1 Quinlan J R. Induction of decision trees[J]. Machine Learning, 1986, 1(1): 81-106.
  • 2 Quinlan J R. C4.5: Programs for Machine Learning[M]. San Mateo, CA: Morgan Kaufmann, 1993.
  • 3 Quinlan J R. Improved use of continuous attributes in C4.5[J]. Journal of Artificial Intelligence Research, 1996, 4: 77-90.
  • 4 Breiman L, et al. Classification and Regression Trees[M]. Statistics/Probability Series. Belmont, CA: Wadsworth, 1984.
  • 5 McCallum A K, et al. A comparison of event models for naive Bayes text classification[C]//Proc of the AAAI-98 Workshop on Learning for Text Categorization. Madison, WI, 1998: 41-48.
  • 6 Kohavi R. Scaling up the accuracy of naive-Bayes classifiers: a decision-tree hybrid[C]//Proc of the 2nd International Conference on Knowledge Discovery and Data Mining. Menlo Park, CA, 1996: 202-207.
  • 7 Zhou Z H, et al. Extracting symbolic rules from trained neural network ensembles[J]. AI Communications, 2003, 16(1): 3-15.
  • 8 Dougherty J, et al. Supervised and unsupervised discretization of continuous features[C]//Proc of the 12th International Conference on Machine Learning. San Francisco: Morgan Kaufmann, 1995: 194-202.
  • 9 Silverman B W. Density Estimation for Statistics and Data Analysis[M]. Monographs on Statistics and Applied Probability. London: Chapman and Hall, 1986.
  • 10 Smyth P, et al. Retrofitting decision tree classifiers using kernel density estimation[C]//Proc of the 12th International Conference on Machine Learning. San Francisco: Morgan Kaufmann, 1995: 506-514.
