
Ensemble of unrestricted K-dependence Bayesian classifiers based on Markov blanket

Abstract: To improve the ability of the K-dependence Bayesian classifier (KDB) to express conditional dependence, this paper relaxes the conditional independence between predictive features, following the Markov blanket idea for feature extraction, and performs structure learning of the Bayesian classification model with a greedy search strategy. A macro model is built from the training set and a micro model from each test sample; the final decision is made by an ensemble of the two. Cross-validation on data sets from the UCI machine learning repository demonstrates the rationality and effectiveness of the proposed algorithm in terms of 0-1 loss, bias, and variance.
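
The abstract describes the standard KDB recipe (rank attributes by mutual information with the class, then greedily assign each attribute up to K higher-ranked attributes as extra parents, chosen by conditional mutual information) plus a macro/micro ensemble. The Python sketch below illustrates only that general recipe; all function names, the plain-averaging combination rule, and the omission of the per-test-sample micro model construction are assumptions for illustration, not details taken from the paper.

import numpy as np

def mutual_information(x, y):
    # I(X;Y) for two discrete 1-D numpy arrays, estimated from counts.
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def conditional_mutual_information(x, y, z):
    # I(X;Y|Z): per-slice mutual information weighted by P(Z=v).
    mi = 0.0
    for v in np.unique(z):
        mask = z == v
        mi += np.mean(mask) * mutual_information(x[mask], y[mask])
    return mi

def kdb_structure(X, y, K=2):
    # Greedy KDB structure learning: rank attributes by I(Xi;C); each
    # attribute takes the class plus up to K already-ranked attributes
    # with the largest I(Xi;Xj|C) as parents.
    d = X.shape[1]
    order = sorted(range(d), key=lambda i: -mutual_information(X[:, i], y))
    parents = {}
    for pos, i in enumerate(order):
        ranked = sorted(order[:pos],
                        key=lambda j: -conditional_mutual_information(X[:, i], X[:, j], y))
        parents[i] = ranked[:K]
    return parents

def ensemble_predict(p_macro, p_micro):
    # Combine class posteriors from the macro and micro models.
    # Plain averaging is an assumption; the paper may weight the models.
    return int(np.argmax((p_macro + p_micro) / 2.0))

# Usage on synthetic discrete data (hypothetical example):
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 5))   # 200 samples, 5 discrete attributes
y = (X[:, 0] + X[:, 1]) % 2             # class depends on two attributes
print(kdb_structure(X, y, K=2))
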
Authors: WANG Li-min, LIU Yang, SUN Ming-hui, LI Mei-hui (College of Computer Science and Technology, Jilin University, Changchun 130012, China)
Source: Journal of Jilin University (Engineering and Technology Edition), 2018, No. 6, pp. 1851-1858 (8 pages). Indexed in EI, CAS, CSCD, and the Peking University core journal list.
Funding: National Natural Science Foundation of China (61272209); Natural Science Foundation of Jilin Province (20150101014JC)
Keywords: computer application; Bayesian network; Markov blanket; conditional independence; macro model; micro model