Journal Articles
2 articles found
1. A Novel Minkowski-distance-based Consensus Clustering Algorithm
Authors: De-Gang Xu, Pan-Lei Zhao, Chun-Hua Yang, Wei-Hua Gui, Jian-Jun He. International Journal of Automation and Computing, EI CSCD, 2017, No. 1, pp. 33-44 (12 pages)
Abstract: Consensus clustering is the problem of coordinating clustering information about the same data set coming from different runs of the same algorithm. It is becoming a state-of-the-art approach in an increasing number of applications. However, determining the optimal cluster number is still an open problem. In this paper, we propose a novel consensus clustering algorithm based on the Minkowski distance. By fusing it with the Newman greedy algorithm from complex networks, the proposed clustering algorithm can automatically set the number of clusters. It is less sensitive to noise and can integrate solutions from multiple samples of data or attributes for processing data in the process industry. A numerical simulation is given to demonstrate the effectiveness of the proposed algorithm. Finally, the consensus clustering algorithm is applied to a froth flotation process.
Keywords: Minkowski distance; consensus clustering; similarity matrix; process data; froth flotation
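The abstract above pairs a Minkowski distance measure with a consensus similarity matrix built from multiple clustering runs. A minimal sketch of those two ingredients, assuming a plain-Python setting (the function names and the co-association formulation are illustrative, not the paper's implementation, and the Newman greedy merging step is omitted):

```python
def minkowski(x, y, p):
    """Minkowski distance of order p between two equal-length vectors."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)


def coassociation_matrix(labelings, n):
    """Consensus similarity matrix over n points: entry (i, j) is the
    fraction of clustering runs in which points i and j share a cluster."""
    m = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    m[i][j] += 1.0 / len(labelings)
    return m
```

The paper then, per the abstract, treats such a similarity matrix as a network and uses Newman's greedy modularity algorithm to set the cluster number automatically; that step is not reproduced here.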
2. Adapt Bagging to Nearest Neighbor Classifiers (cited: 7)
Authors: Zhi-Hua Zhou, Yang Yu. Journal of Computer Science & Technology, SCIE EI CSCD, 2005, No. 1, pp. 48-54 (7 pages)
Abstract: It is well known that to build a strong ensemble, the component learners should have high diversity as well as high accuracy. If perturbing the training set can cause significant changes in the component learners constructed, then Bagging can effectively improve accuracy. However, for stable learners such as nearest neighbor classifiers, perturbing the training set can hardly produce diverse component learners, so Bagging does not work well. This paper adapts Bagging to nearest neighbor classifiers by injecting randomness into the distance metrics. In constructing the component learners, both the training set and the distance metric employed for identifying the neighbors are perturbed. A large-scale empirical study reported in this paper shows that the proposed BagInRand algorithm can effectively improve the accuracy of nearest neighbor classifiers.
Keywords: bagging; data mining; ensemble learning; machine learning; Minkowski distance; nearest neighbor; value difference metric
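The abstract describes perturbing both the bootstrap sample and the distance metric when building each component nearest neighbor learner. A hedged sketch of that idea, here realized by randomizing the Minkowski order p per learner (the function names, the candidate p values, and the voting details are assumptions for illustration, not the paper's exact BagInRand procedure):

```python
import random


def knn_predict(train, query, k, p):
    """k-NN majority vote over (point, label) pairs, using the
    Minkowski distance of order p to rank neighbors."""
    def dist(x, y):
        return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

    neighbors = sorted(train, key=lambda xy: dist(xy[0], query))[:k]
    votes = {}
    for _, label in neighbors:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)


def bag_in_rand(train, query, k=3, n_estimators=11, seed=0):
    """Ensemble of k-NN learners: each one sees a bootstrap sample of the
    training set AND a randomly drawn Minkowski order p (the injected
    randomness the abstract describes). Final answer is a majority vote."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_estimators):
        boot = [rng.choice(train) for _ in train]  # perturbed training set
        p = rng.choice([1, 2, 3])                  # perturbed distance metric
        preds.append(knn_predict(boot, query, k, p))
    votes = {}
    for label in preds:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Randomizing the metric is what restores diversity for a stable learner like k-NN: even on identical bootstrap samples, different p values can rank neighbors differently.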