
Discretization Method of BN Parameter Learning Variable Based on Reasoning Information
Abstract: The concept of reasoning information is proposed and used as the evaluation criterion for discretizing continuous variables in a Bayesian Network (BN). During discretization, a genetic algorithm searches for the optimal solution: an individual encoding scheme, a crossover operator, and a mutation operator are designed, and reasoning information serves as the measure of individual fitness. A case study shows that a BN learned from data discretized by this method yields more reasoning information during inference.
Source: Computer Engineering (《计算机工程》, CAS, CSCD, Peking University Core), 2009, No. 5: 185-187, 199 (4 pages).
Keywords: parameter learning; reasoning information; discretization method; genetic algorithm
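The abstract only outlines the search procedure, so the following is a minimal illustrative sketch (not the paper's code) of a genetic algorithm that looks for cut points of a single continuous variable. The fixed-length cut-point encoding, truncation selection, and the reasoning_information fitness shown here (a stand-in using mutual information with a class label) are assumptions for illustration; the paper's actual encoding, operators, and reasoning-information formula are not given in this record.

# Illustrative sketch only. Assumptions: an individual is a fixed-length list of
# cut points for one continuous variable; the fitness below is a hypothetical
# stand-in for the paper's reasoning information (mutual information with a label).
import math
import random
from collections import Counter

def discretize(values, cut_points):
    """Map each continuous value to the index of the interval it falls into."""
    bins = sorted(cut_points)
    return [sum(v > c for c in bins) for v in values]

def reasoning_information(values, labels, cut_points):
    """Stand-in fitness: mutual information I(discretized X; label), in nats."""
    x = discretize(values, cut_points)
    n = len(x)
    px, py, pxy = Counter(x), Counter(labels), Counter(zip(x, labels))
    return sum((c / n) * math.log((c / n) / ((px[xi] / n) * (py[yi] / n)))
               for (xi, yi), c in pxy.items())

def ga_discretize(values, labels, n_cuts=3, pop_size=30, generations=50,
                  p_cross=0.8, p_mut=0.1):
    """Genetic search for cut points maximizing the fitness above (n_cuts >= 2)."""
    lo, hi = min(values), max(values)
    fit = lambda ind: reasoning_information(values, labels, ind)
    pop = [[random.uniform(lo, hi) for _ in range(n_cuts)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fit, reverse=True)[:pop_size // 2]  # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            if random.random() < p_cross:                 # one-point crossover
                k = random.randrange(1, n_cuts)
                child = a[:k] + b[k:]
            else:
                child = a[:]
            if random.random() < p_mut:                   # mutation: redraw one cut point
                child[random.randrange(n_cuts)] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return sorted(max(pop, key=fit))

# Usage on synthetic data: two Gaussian clusters with distinct labels; the
# returned cut points should fall near the boundary between the two modes.
random.seed(1)
xs = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(4, 1) for _ in range(100)]
ys = [0] * 100 + [1] * 100
print(ga_discretize(xs, ys))

The fixed-length encoding and truncation selection are deliberately simple choices here; the paper designs its own encoding, crossover, and mutation operators, and evaluates candidates with its reasoning-information measure rather than the mutual-information surrogate used above.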

References (5)

  • 1 Pearl J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference[M]. San Mateo, CA: Morgan Kaufmann, 1988.
  • 2 Wang Fei, Liu Dayou, Xue Wanxin. Research on Discretization of Continuous Variables in Bayesian Networks Based on Genetic Algorithms[J]. 计算机学报 (Chinese Journal of Computers), 2002, 25(8): 794-800.
  • 3 Friedman N, Goldszmidt M. Discretization of Continuous Attributes While Learning Bayesian Networks[C]//Proc. of the 13th International Conference on Machine Learning. Bari, Italy: [s. n.], 1996: 157-165.
  • 4 Chickering D, Heckerman D. Efficient Approximations for the Marginal Likelihood of Bayesian Networks with Hidden Variables[R]. Redmond, WA, USA: Microsoft Research, Tech. Rep. MSR-TR-96-08, 1997.
  • 5 Wang Shuangcheng, Li Xiaolin, Hou Caihong. Hybrid Bayesian Network Structure Learning for Causal Analysis[J]. 智能系统学报 (CAAI Transactions on Intelligent Systems), 2007, 2(6): 82-89.
