Journal Article

General Bayesian Network Classifier and Its Learning Algorithm (Cited by: 9)

Algorithm for exact recovery of Bayesian network for classification
Abstract: When a Bayesian network (BN) is applied to classification, the local sub-model that contributes directly to predicting the target variable is called the general Bayesian network classifier (GBNC). The conventional route to a GBNC is to learn the complete BN first, but existing BN structure-learning algorithms limit the feasible problem scale. To avoid learning the global BN, this paper proposes IPC-GBNC, a structure-learning algorithm that performs only local search: it conducts a breadth-first search centered on the target (class) node, with the search depth capped at two levels. IPC-GBNC is proved theoretically sound, and experiments on synthetic and UCI real-world datasets further confirm its advantages in quality and efficiency over the classical PC algorithm, which searches globally: a) it outputs structures of the same or even higher quality than PC; b) it consumes far less computation than global search; c) it simultaneously achieves dimensionality reduction (similar to decision-tree learning). Compared with most classical classifiers, GBNC performs comparably on prediction while inheriting the merits of a graphical model: an intuitive, compact representation and powerful inference ability, including support for incomplete observations.
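The abstract describes IPC-GBNC only at a high level (a breadth-first, depth-two local search around the class node, driven by conditional-independence tests). The sketch below is not the authors' algorithm but a generic PC-style stand-in under stated assumptions: a partial-correlation independence test on toy linear-Gaussian data, a depth-one neighbour search, and a depth-two expansion. All names (`partial_corr`, `neighbours`, `local_structure`) and the synthetic chain dataset are illustrative assumptions.

```python
import itertools
import numpy as np

def partial_corr(x, y, Z):
    """Correlation of x and y after linearly regressing out the columns of Z."""
    if Z.shape[1] > 0:
        A = np.column_stack([Z, np.ones(len(x))])
        x = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
        y = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return np.corrcoef(x, y)[0, 1]

def independent(data, i, j, cond, thresh=0.1):
    """Declare X_i and X_j conditionally independent given X_cond if the
    sample partial correlation falls below a fixed threshold."""
    return abs(partial_corr(data[:, i], data[:, j], data[:, list(cond)])) < thresh

def neighbours(data, target, max_cond=2, thresh=0.1):
    """PC-style adjacency search: keep j as a neighbour of target unless some
    small conditioning set (size <= max_cond) separates them."""
    others = [k for k in range(data.shape[1]) if k != target]
    adj = []
    for j in others:
        rest = [k for k in others if k != j]
        separated = any(independent(data, target, j, S, thresh)
                        for size in range(max_cond + 1)
                        for S in itertools.combinations(rest, size))
        if not separated:
            adj.append(j)
    return adj

def local_structure(data, target, **kw):
    """Breadth-first local search around the target node, depth capped at two:
    first the target's neighbours, then the neighbours of those neighbours."""
    depth1 = neighbours(data, target, **kw)
    depth2 = set()
    for v in depth1:
        depth2.update(neighbours(data, v, **kw))
    depth2 -= set(depth1) | {target}
    return depth1, sorted(depth2)

# Toy linear-Gaussian chain X0 -> X1 -> T -> X3 -> X4, plus an isolated X5.
rng = np.random.default_rng(0)
n = 3000
x0 = rng.normal(size=n)
x1 = 0.9 * x0 + 0.3 * rng.normal(size=n)
t  = 0.9 * x1 + 0.3 * rng.normal(size=n)
x3 = 0.9 * t  + 0.3 * rng.normal(size=n)
x4 = 0.9 * x3 + 0.3 * rng.normal(size=n)
x5 = rng.normal(size=n)
data = np.column_stack([x0, x1, t, x3, x4, x5])

d1, d2 = local_structure(data, target=2)
```

On this toy chain the depth-one neighbours of the class node are X1 and X3, and the depth-two layer adds X0 and X4 while excluding the irrelevant X5, which is the kind of built-in dimensionality reduction the abstract attributes to the local search.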
Source: Application Research of Computers (《计算机应用研究》, CSCD, Peking University Core Journal), 2016, Issue 5, pp. 1327-1334 (8 pages)
Funding: National Natural Science Foundation of China (61305058, 61300139, 61102163); Xiamen Science and Technology Program (3505Z20133027); Huaqiao University Research Fund (11Y0274, 12HJY18); Fundamental Research Funds for the Central Universities (11J0263)
Keywords: Bayesian network; Markov blanket; Bayes classifier; structure learning; feature selection; local search
