Abstract: The K-means algorithm is widely known for its simplicity and speed in text clustering. However, the selection of the initial clustering centers in the traditional K-means algorithm is essentially random, so the clustering results fluctuate and are unstable, being strongly affected by the initial centers. This paper proposes an algorithm for selecting the initial clustering centers that eliminates the uncertainty of center-point selection. Experimental results show that the improved K-means clustering algorithm is superior to the traditional algorithm.
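The instability described above can be reproduced directly. The following minimal sketch (scikit-learn and synthetic data are assumptions, not the paper's setup) runs K-means from several random seeds to show how the objective fluctuates, and then shows that any deterministic seeding rule, here a simple farthest-from-the-mean stand-in rather than the paper's own selection algorithm, makes the result reproducible.

```python
# Illustrative only: shows the run-to-run fluctuation of K-means under random
# seeding that a deterministic center-selection rule is meant to remove.
# scikit-learn and the synthetic blobs are assumptions, not the paper's setup.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=5, cluster_std=1.5, random_state=0)

# Plain random initialization: the objective (inertia) varies across seeds.
inertias = [
    KMeans(n_clusters=5, init="random", n_init=1, random_state=s).fit(X).inertia_
    for s in range(20)
]
print("random init: mean=%.1f  std=%.1f" % (np.mean(inertias), np.std(inertias)))

# A deterministic seeding rule (here simply the 5 points farthest from the data
# mean, a stand-in for the paper's unspecified selection rule) removes the
# spread: the standard deviation over repeated runs drops to zero.
d = np.linalg.norm(X - X.mean(axis=0), axis=1)
seeds = X[np.argsort(d)[-5:]]
fixed = [
    KMeans(n_clusters=5, init=seeds, n_init=1).fit(X).inertia_ for _ in range(20)
]
print("fixed init:  mean=%.1f  std=%.1f" % (np.mean(fixed), np.std(fixed)))
```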
Funding: Supported by the National Natural Science Foundation of China (60503020, 60503033, 60703086), the Natural Science Foundation of Jiangsu Province (BK2006094), the Opening Foundation of the Jiangsu Key Laboratory of Computer Information Processing Technology at Soochow University (KJS0714), and the Research Foundation of Nanjing University of Posts and Telecommunications (NY207052, NY207082).
Abstract: Although K-means is very popular for general clustering, it generally converges to one of numerous local minima, so its performance depends heavily on the initial cluster centers. In this paper a novel initialization scheme for selecting initial cluster centers for K-means clustering is proposed. The algorithm is based on reverse nearest neighbor (RNN) search, which retrieves all points in a given data set whose nearest neighbor is a given query point. The initial cluster centers computed with this method are found to be very close to the centers desired by iterative clustering algorithms. The procedure is applicable to clustering algorithms for continuous data, and its application to the K-means clustering algorithm is demonstrated. Experiments on several popular datasets show the advantages of the proposed method.
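A rough sketch of RNN-based seeding as described above: each point's reverse nearest neighbors are counted, and points with the largest counts, which tend to lie in dense regions, are taken as initial centers. The "top-k RNN counts" rule and the scikit-learn tooling are illustrative assumptions; the paper's exact selection procedure may differ.

```python
# Sketch: reverse-nearest-neighbor (RNN) counts as a seeding signal.
# The RNN set of a point q contains every point whose nearest neighbor is q.
# Taking the k points with the most reverse nearest neighbors is one plausible
# reading of the abstract, not the paper's exact procedure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def rnn_seeds(X, k):
    # Nearest neighbor of every point (column 0 is the point itself).
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    _, idx = nn.kneighbors(X)
    nearest = idx[:, 1]                       # index of each point's nearest neighbor
    rnn_count = np.bincount(nearest, minlength=len(X))
    return X[np.argsort(rnn_count)[-k:]]      # k points with the most reverse NNs

X = np.random.default_rng(0).normal(size=(300, 2))
centers = rnn_seeds(X, k=3)
labels = KMeans(n_clusters=3, init=centers, n_init=1).fit_predict(X)
```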
Funding: Supported in part by the National Natural Science Foundation of China (No. 71471009).
Abstract: Consensus clustering aims to fuse several existing basic partitions into an integrated one; it has been widely recognized as a promising tool for multi-source and heterogeneous data clustering. Owing to its robust, high-quality performance compared with traditional clustering methods, consensus clustering has attracted much attention, and much effort has been devoted to developing the field. In the literature, K-means-based Consensus Clustering (KCC) transforms the consensus clustering problem into a classical K-means clustering problem with theoretical support and shows advantages over state-of-the-art methods. Although KCC inherits the merits of K-means, it suffers from initialization sensitivity. Moreover, the current consensus clustering framework separates basic-partition generation and fusion into two disconnected parts. To solve these two challenges, a novel clustering algorithm named Greedy optimization of K-means-based Consensus Clustering (GKCC) is proposed. Inspired by the well-known greedy K-means, which addresses the sensitivity of K-means initialization, GKCC seamlessly combines greedy K-means and KCC, inheriting the merits of both and overcoming the drawbacks of its precursors. Moreover, a 59-sampling strategy is adopted to provide high-quality basic partitions and accelerate the algorithm. Extensive experiments on 36 benchmark datasets demonstrate the significant advantages of GKCC over KCC and KCC++ in terms of objective function values, standard deviations, and external cluster validity.
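For context, the greedy (incremental) K-means component referenced above can be sketched as follows: centers are added one at a time, each new center being the candidate point that most reduces the K-means objective. This illustrates only the greedy-seeding idea on a sampled candidate set; the consensus-fusion part of GKCC and the 59-sampling strategy are not reproduced here.

```python
# Sketch of greedy/incremental K-means seeding: grow the center set one center
# at a time, picking the candidate point that gives the lowest objective.
# Candidate sampling and the parameters are illustrative choices.
import numpy as np
from sklearn.cluster import KMeans

def greedy_kmeans_centers(X, k, n_candidates=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = [X.mean(axis=0)]                    # the optimal single center is the mean
    for _ in range(1, k):
        cand = rng.choice(len(X), size=min(n_candidates, len(X)), replace=False)
        best_centers, best_inertia = None, np.inf
        for i in cand:                            # try each candidate as the next center
            init = np.vstack(centers + [X[i]])
            km = KMeans(n_clusters=len(init), init=init, n_init=1, max_iter=50).fit(X)
            if km.inertia_ < best_inertia:
                best_centers, best_inertia = km.cluster_centers_, km.inertia_
        centers = list(best_centers)
    return np.vstack(centers)

X = np.random.default_rng(1).normal(size=(400, 2))
model = KMeans(n_clusters=4, init=greedy_kmeans_centers(X, 4), n_init=1).fit(X)
```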
Abstract: Clustering approaches are a class of probabilistic load flow (PLF) methods for distribution networks that can be used to obtain the output random variables with much less computational burden and time than the Monte Carlo simulation (MCS) method. A challenge of clustering methods, however, is that the statistical characteristics of the output random variables are obtained with low accuracy. This paper presents a hybrid approach based on clustering and point estimate methods. In the proposed approach, the sample points are first clustered with the K-means method and the optimal agent of each cluster is determined. Then, for each member of the population of agents, deterministic load flow calculations are performed and the output variables are computed. Afterward, a point-estimate-based PLF is performed and the mean and standard deviation of the output variables are obtained. Finally, the statistical data of each output random variable are corrected using the point estimate method. The proposed method makes it possible to obtain the statistical properties of the output random variables, such as the mean, standard deviation, and probability functions, with high accuracy and without significantly increasing the computational burden. To confirm the consistency and efficiency of the proposed method, the 10-, 33-, 69-, 85-, and 118-bus standard distribution networks were simulated in the Python® programming language. In the simulation studies, the results of the proposed method are compared with those obtained from the clustering method as well as the MCS method, which serves as the reference.
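The clustering half of this hybrid pipeline can be outlined as below: input scenarios are grouped by K-means, one representative ("agent") per cluster is passed to a deterministic load-flow solver, and the outputs are combined with cluster weights. The run_load_flow function is a hypothetical placeholder, and the point-estimate correction of the moments described above is not reproduced.

```python
# Skeleton of the clustering stage of a clustering-based PLF: cluster the input
# scenarios, solve one deterministic load flow per cluster representative, and
# weight the outputs by cluster size. run_load_flow is a hypothetical stand-in
# for a real load-flow solver; the point-estimate correction step is omitted.
import numpy as np
from sklearn.cluster import KMeans

def run_load_flow(scenario):
    # Placeholder for a deterministic load-flow calculation; returns the
    # output variables (e.g. bus voltages) for one input scenario.
    return np.tanh(scenario)          # dummy nonlinear mapping, not a real solver

def clustered_plf(samples, n_clusters=10):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(samples)
    weights = np.bincount(km.labels_, minlength=n_clusters) / len(samples)
    outputs = np.array([run_load_flow(c) for c in km.cluster_centers_])
    mean = weights @ outputs                          # weighted mean of the outputs
    var = weights @ (outputs - mean) ** 2             # weighted variance
    return mean, np.sqrt(var)

scenarios = np.random.default_rng(0).normal(1.0, 0.1, size=(5000, 6))
mu, sigma = clustered_plf(scenarios)
```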
Abstract: To remedy the shortcomings of the traditional K-means algorithm in clustering, namely that the number of clusters K is difficult to preset accurately, the clustering results depend on the initial centers, and the algorithm is sensitive to noise points and unstable, and to address the problems of text clustering, where vectorized text data are high-dimensional, sparsely distributed, and contain latent semantic structure, an optimized Chinese text clustering algorithm (SVD-Kmeans) is proposed that uses the physical meaning of Singular Value Decomposition (SVD) to perform a rough classification and then combines it with the K-means algorithm. The new algorithm uses the mathematical meaning of the SVD to smooth the text data and its physical meaning to roughly classify the text data, and the classification result is used as the initial cluster centers of the K-means algorithm. Experimental results show that, compared with other K-means algorithms and their improved variants, SVD-Kmeans achieves a clear improvement in the F-Measure clustering quality.
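One possible reading of the SVD-Kmeans scheme, sketched below: truncated SVD (LSA) smooths the sparse TF-IDF vectors, each document is coarsely assigned to its dominant latent dimension, and the centroids of those coarse groups seed K-means. Interpreting the "rough classification" as an argmax over latent dimensions is an assumption; the paper's exact rule may differ.

```python
# Sketch: SVD smoothing + rough classification as K-means initialization.
# The argmax-over-latent-dimensions rule is an illustrative assumption.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["k-means text clustering", "singular value decomposition smoothing",
        "initial cluster centers", "latent semantic analysis of documents"]

X = TfidfVectorizer().fit_transform(docs)                           # sparse TF-IDF matrix
k = 2
Z = TruncatedSVD(n_components=k, random_state=0).fit_transform(X)   # smoothed vectors

rough = np.abs(Z).argmax(axis=1)                   # coarse class = dominant latent dim
init = np.vstack([
    Z[rough == c].mean(axis=0) if np.any(rough == c) else Z[np.abs(Z[:, c]).argmax()]
    for c in range(k)
])                                                 # class centroids as initial centers
labels = KMeans(n_clusters=k, init=init, n_init=1).fit_predict(Z)
```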
Abstract: To address the defects of existing K-Means algorithms, namely that the value of K must be assigned manually, the initial centers are selected at random, and the text representation is high-dimensional and lacks semantics, a concept-lattice-based K-Means algorithm, K-MeansBCC (K-means algorithm based on concept lattice), is proposed. The text collection is preprocessed and transformed into a formal context, from which a concept lattice is generated; texts are represented by the concepts in the lattice, and the value of K and the initial centers are determined from the weights of the concepts in the texts. Finally, a formula for the concept-based similarity between texts is designed, and the clustering result is produced by the K-Means algorithm. Experimental results show that the algorithm improves clustering efficiency and accuracy.
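A toy illustration of the concept-lattice seeding idea described above: documents and terms form a binary formal context, object concepts are derived by the standard closure, and the heaviest concepts supply K and the initial centers. Only object concepts are enumerated rather than the full lattice, the weight |extent|·|intent| and the centroid-of-extent seeding are illustrative assumptions, and the paper's concept-based similarity formula is not reproduced.

```python
# Toy sketch: formal context -> (object) concepts -> concept weights -> K and seeds.
# This enumerates only object concepts, not the full concept lattice, and the
# weighting/seeding rules are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

docs = ["cluster center k means", "concept lattice formal context",
        "k means text cluster", "formal concept analysis lattice"]
B = (CountVectorizer().fit_transform(docs).toarray() > 0)   # binary formal context

def object_concepts(B):
    concepts = {}
    for i in range(B.shape[0]):
        intent = B[i]                                   # terms of document i
        extent = B[:, intent].all(axis=1)               # docs containing all those terms
        closed = B[extent].all(axis=0)                  # common terms of that extent
        concepts[tuple(closed)] = extent                 # dedupe by closed intent
    return list(concepts.items())

concepts = object_concepts(B)
# Weight concepts by |extent| * |intent| and keep the heaviest ones as seeds.
weighted = [(ext.sum() * np.sum(intent), intent, ext) for intent, ext in concepts]
weighted.sort(key=lambda t: -t[0])
k = min(2, len(weighted))                                # K derived from the top concepts
init = np.vstack([B[ext].mean(axis=0) for _, _, ext in weighted[:k]])
labels = KMeans(n_clusters=k, init=init, n_init=1).fit_predict(B.astype(float))
```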