Hybrid Bayesian estimation tree learning with discrete and fuzzy labels (cited by 2)
Authors: Zengchang Qin, Tao Wan. Frontiers of Computer Science (SCIE, EI, CSCD), 2013, Issue 6, pp. 852-863 (12 pages)
The decision tree (DT) is one of the classic machine learning models, valued for its simplicity and effectiveness in applications. However, compared to the DT model, probability estimation trees (PETs) give a better estimation of class probability. Obtaining good probability estimates usually requires large trees, which is undesirable with respect to model transparency. The linguistic decision tree (LDT) is a PET model based on label semantics: fuzzy labels are used for building the tree, and each branch is associated with a probability distribution over classes. If there is no overlap between neighboring fuzzy labels, these fuzzy labels become discrete labels, and an LDT with discrete labels becomes a special case of the PET model. In this paper, two hybrid models combining the naive Bayes classifier and PETs are proposed in order to build a model with good performance without losing too much transparency. The first model uses naive Bayes estimation given a PET, and the second uses a set of small-sized PETs as estimators by assuming independence between these trees. Empirical studies on discrete and fuzzy labels show that the first model outperforms the PET model at shallow depths, and the second model is equivalent in performance to naive Bayes and the PET.
Keywords: fuzzy labels, label semantics, random set, probability estimation tree, mass assignment, linguistic decision tree, naive Bayes
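The abstract's second hybrid model combines the class-probability estimates of several small PETs under an independence assumption, in the style of naive Bayes. A minimal sketch of that fusion step is below; it is not the authors' code, and the function name and interface are illustrative. Under independence given the class, the posterior is proportional to the prior raised to the power (1 - T) times the product of the T per-tree class distributions.

```python
def combine_tree_estimates(tree_probs, priors):
    """Naive-Bayes-style fusion of per-tree class distributions.

    tree_probs: list of dicts, one per tree, each mapping
                class -> P(class | the branch x falls into in that tree)
    priors:     dict mapping class -> P(class)

    Assuming the trees are conditionally independent given the class,
    P(c | tree_1, ..., tree_T)  is proportional to
    P(c)^(1 - T) * prod_t P(c | tree_t).
    """
    t = len(tree_probs)
    scores = {}
    for c in priors:
        # prior^(1-T) cancels the T prior factors hidden in the posteriors
        score = priors[c] ** (1 - t)
        for probs in tree_probs:
            score *= probs[c]
        scores[c] = score
    # normalize so the fused estimate is a proper distribution
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}
```

With uniform priors, a tree that outputs the uniform distribution contributes nothing, so the fused estimate reduces to the informative tree's distribution, which matches the intuition that uninformative estimators should not dilute the ensemble.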