Journal articles: 411 results found
1. Blind source separation by weighted K-means clustering (Cited: 5)
Author: Yi Qingming. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2008, No. 5, pp. 882-887 (6 pages)
Blind separation of sparse sources (BSSS) is discussed. The BSSS method based on conventional K-means clustering is very fast and easy to implement, but its accuracy is generally unsatisfactory. Vectors x(t) with different modules are theoretically proved to contribute unequally, and a weighted K-means clustering method is proposed on these grounds. The proposed algorithm is as fast as conventional K-means clustering yet achieves considerably more accurate results, as demonstrated by numerical experiments.
Keywords: blind source separation; underdetermined mixing; sparse representation; weighted k-means clustering
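The abstract states that observations x(t) with larger modules should contribute more to the clustering but does not give the exact weighting rule. The sketch below assumes a common variant for sparse-source separation: each observation is normalised to a unit direction and fed to K-means with a sample weight equal to its norm, so high-energy samples pull the estimated mixing directions harder. Names and parameters are illustrative, not the paper's.

```python
# Hedged sketch of module-weighted K-means for sparse-source direction clustering.
# Assumption: weight of each observation x(t) equals its norm; the paper's exact rule may differ.
import numpy as np
from sklearn.cluster import KMeans

def weighted_direction_kmeans(X, n_sources, random_state=0):
    """X: (T, m) matrix of observed mixtures; returns estimated mixing directions."""
    norms = np.linalg.norm(X, axis=1)
    keep = norms > 1e-12                      # discard silent samples
    U = X[keep] / norms[keep, None]           # unit directions
    U[U[:, 0] < 0] *= -1                      # fold sign ambiguity onto one half-space
    km = KMeans(n_clusters=n_sources, n_init=10, random_state=random_state)
    km.fit(U, sample_weight=norms[keep])      # weight = module of x(t)
    centers = km.cluster_centers_
    return centers / np.linalg.norm(centers, axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(2, 3))                                        # 2 mixtures, 3 sparse sources
    S = rng.laplace(size=(3, 2000)) * (rng.random((3, 2000)) < 0.2)    # sparse sources
    X = (A @ S).T
    print(weighted_direction_kmeans(X, n_sources=3))
```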
2. Improved k-means clustering algorithm (Cited: 16)
Authors: 夏士雄, 李文超, 周勇, 张磊, 牛强. Journal of Southeast University (English Edition), EI CAS, 2007, No. 3, pp. 435-438 (4 pages)
To address two drawbacks of the k-means algorithm, namely the need to specify the number of clusters in advance and the sensitivity to the choice of initial clustering centers, an improved k-means clustering algorithm is proposed. First, the silhouette coefficient is introduced, and the optimal cluster number Kopt of a data set with unknown class information is determined by calculating the silhouette coefficients of the objects in the clusters obtained under different values of K. Then the distribution of the data set is obtained through hierarchical clustering and the initial clustering centers are determined. Finally, the clustering is completed by the traditional k-means algorithm. Theoretical analysis proves that the improved k-means clustering algorithm has acceptable computational complexity. Experimental results on the Iris test data set show that the algorithm distinguishes the clusters reasonably, recognizes outliers efficiently, and produces lower entropy.
Keywords: clustering; k-means algorithm; silhouette coefficient
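The two steps described in this abstract, choosing Kopt by the silhouette coefficient and seeding K-means from a hierarchical pre-clustering, can be sketched roughly as follows; the scan range for K and the use of scikit-learn's agglomerative clustering are assumptions, not the paper's exact settings.

```python
# Hedged sketch: pick K_opt by the mean silhouette coefficient, then seed k-means
# with the centroids of a hierarchical pre-clustering, as the abstract describes.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score

def choose_k_by_silhouette(X, k_range=range(2, 9), random_state=0):
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit_predict(X)
        scores[k] = silhouette_score(X, labels)   # mean silhouette over all samples
    k_opt = max(scores, key=scores.get)
    return k_opt, scores

if __name__ == "__main__":
    X = load_iris().data
    k_opt, scores = choose_k_by_silhouette(X)
    print("K_opt =", k_opt, {k: round(s, 3) for k, s in scores.items()})

    # Hierarchical pre-clustering to obtain initial centers, then a final k-means pass.
    hier = AgglomerativeClustering(n_clusters=k_opt).fit_predict(X)
    init_centers = np.vstack([X[hier == c].mean(axis=0) for c in range(k_opt)])
    final = KMeans(n_clusters=k_opt, init=init_centers, n_init=1).fit(X)
    print("inertia:", round(final.inertia_, 2))
```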
3. An efficient enhanced k-means clustering algorithm (Cited: 30)
Authors: FAHIM A.M, SALEM A.M, TORKEY F.A, RAMADAN M.A. Journal of Zhejiang University-Science A (Applied Physics & Engineering), SCIE EI CAS CSCD, 2006, No. 10, pp. 1626-1633 (8 pages)
In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k, and the problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. In this paper, we present a simple and efficient clustering algorithm based on the k-means algorithm, which we call the enhanced k-means algorithm. This algorithm is easy to implement, requiring only a simple data structure to keep some information in each iteration to be used in the next iteration. Our experimental results demonstrate that the scheme considerably improves the computational speed of the k-means algorithm, in terms of both the total number of distance calculations and the overall computation time.
Keywords: clustering algorithms; cluster analysis; k-means algorithm; data analysis
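The abstract only says that a simple data structure carries some information from one iteration to the next. A common reading of this idea is to cache each point's distance to its assigned centre and skip the full search over all centres whenever that cached distance has not grown; the sketch below implements that heuristic and is not the paper's exact pseudocode.

```python
# Hedged sketch of the "enhanced k-means" idea: cache each point's distance to its
# own centre; after the centres move, fall back to a full distance search only for
# points whose cached distance increased.
import numpy as np

def enhanced_kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    d_all = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d_all.argmin(axis=1)
    dist = d_all[np.arange(len(X)), labels]          # cached distance to own centre
    for _ in range(max_iter):
        new_centers = np.vstack([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        moved = np.linalg.norm(new_centers - centers, axis=1).max()
        centers = new_centers
        d_own = np.linalg.norm(X - centers[labels], axis=1)
        stay = d_own <= dist                         # centre came closer: keep assignment
        dist[stay] = d_own[stay]
        if not np.all(stay):                         # full search only for the rest
            redo = ~stay
            d_redo = np.linalg.norm(X[redo, None, :] - centers[None, :, :], axis=2)
            labels[redo] = d_redo.argmin(axis=1)
            dist[redo] = d_redo.min(axis=1)
        if moved < 1e-6:
            break
    return labels, centers
```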
4. Development of slope mass rating system using K-means and fuzzy c-means clustering algorithms (Cited: 1)
Author: Jalali Zakaria. International Journal of Mining Science and Technology, SCIE EI CSCD, 2016, No. 6, pp. 959-966 (8 pages)
Classification systems such as Slope Mass Rating (SMR) are currently being used to undertake slope stability analysis. In the SMR classification system, data is allocated to certain classes based on linguistic and experience-based criteria. In order to eliminate the linguistic criteria resulting from experience-based judgments and to account for uncertainties in determining the class boundaries of the SMR system, the classification results were corrected using two clustering algorithms, namely K-means and fuzzy c-means (FCM), for the ratings obtained via continuous and discrete functions. By applying the clustering algorithms to the SMR classification system, no in-advance experience-based judgment was made on the number of extracted classes, and only after all steps of the clustering algorithms were completed was a new classification scheme proposed for the SMR system under different failure modes, based on the ratings obtained via continuous and discrete functions. The results of this study show that engineers can obtain more reliable and objective evaluations of slope stability by using the SMR system based on the ratings calculated via continuous and discrete functions.
Keywords: SMR based on continuous functions; slope stability analysis; k-means and FCM clustering algorithms; validation of clustering algorithms; Sangan iron ore mines
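The SMR ratings used in the paper are not available here, so the following sketch only illustrates a generic fuzzy c-means loop of the kind the correction relies on, applied to stand-in one-dimensional rating values; the fuzzifier m and the number of classes are placeholders.

```python
# Hedged sketch of a plain fuzzy c-means (FCM) loop on stand-in rating values.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=200, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                         # random fuzzy partition
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]        # membership-weighted centres
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

if __name__ == "__main__":
    ratings = np.random.default_rng(1).uniform(0, 100, size=(300, 1))  # stand-in SMR ratings
    centers, U = fuzzy_c_means(ratings, c=5)
    print(np.sort(centers.ravel()).round(1))                  # class centres over the rating scale
```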
5. Hybrid Genetic Algorithm with K-Means for Clustering Problems (Cited: 1)
Authors: Ahamed Al Malki, Mohamed M. Rizk, M. A. El-Shorbagy, A. A. Mousa. Open Journal of Optimization, 2016, No. 2, pp. 71-83 (14 pages)
The K-means method is one of the most widely used clustering methods and has been implemented in many fields of science and technology. One of the major problems of the k-means algorithm is that it may produce empty clusters depending on the initial center vectors. Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary principles of natural selection and genetics. This paper presents a hybrid version of the k-means algorithm with GAs that efficiently eliminates this empty-cluster problem. Results of simulation experiments using several data sets support this claim.
Keywords: cluster analysis; genetic algorithm; k-means
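The paper's genetic operators are not described in the abstract, so the following is only a minimal sketch of the hybrid idea: each chromosome is a set of k data-point indices used as initial centres (which rules out empty clusters at initialisation), fitness is the negative SSE after a short k-means refinement, and selection, crossover, and mutation are kept deliberately simple.

```python
# Hedged sketch of a GA + k-means hybrid with data-point indices as chromosomes.
# Operators are illustrative, not the paper's.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

def ga_kmeans(X, k, pop_size=20, generations=30, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    pop = np.array([rng.choice(n, size=k, replace=False) for _ in range(pop_size)])

    def fitness(idx):
        km = KMeans(n_clusters=k, init=X[idx], n_init=1, max_iter=10).fit(X)
        return -km.inertia_

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        pop = pop[np.argsort(scores)[::-1]]                    # rank by fitness, best first
        children = []
        while len(children) < pop_size // 2:
            a, b = pop[rng.integers(pop_size // 2, size=2)]    # parents from the best half
            child = np.array([a[i] if rng.random() < 0.5 else b[i] for i in range(k)])
            if rng.random() < 0.2:                             # mutation: replace one centre index
                child[rng.integers(k)] = rng.integers(n)
            children.append(child if len(np.unique(child)) == k
                            else rng.choice(n, size=k, replace=False))
        pop = np.vstack([pop[: pop_size - len(children)], children])
    best = pop[0]                                              # elitism keeps the best on top
    return KMeans(n_clusters=k, init=X[best], n_init=1).fit(X)

if __name__ == "__main__":
    model = ga_kmeans(load_iris().data, k=3)
    print("final SSE:", round(model.inertia_, 2))
```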
6. Similarity matrix-based K-means algorithm for text clustering
Authors: 曹奇敏, 郭巧, 吴向华. Journal of Beijing Institute of Technology, EI CAS, 2015, No. 4, pp. 566-572 (7 pages)
K-means is one of the most widely used algorithms in clustering analysis. To deal with the problem caused by the random selection of initial center points in the traditional algorithm, this paper proposes an improved K-means algorithm based on a similarity matrix. The improved algorithm avoids the random selection of initial center points, providing effective initial points for the clustering process and reducing the fluctuation of clustering results caused by the selection of initial points, so that better clustering quality is obtained. The experimental results also show that the F-measure of the improved K-means algorithm is greatly improved and the clustering results are more stable.
Keywords: text clustering; k-means algorithm; similarity matrix; F-measure
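The abstract does not spell out how the similarity matrix is turned into initial centres. As one plausible reading, the sketch below builds a cosine-similarity matrix over TF-IDF vectors, takes the document with the highest total similarity as the first centre, and then greedily adds documents least similar to the centres already chosen; the toy corpus and the greedy rule are assumptions.

```python
# Hedged sketch of similarity-driven seeding for text K-means; the paper's exact
# selection rule may differ.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

docs = ["k-means is a clustering algorithm",
        "text clustering groups similar documents",
        "genetic algorithms optimise a fitness function",
        "support vector machines classify documents",
        "initial centres influence k-means results",
        "crossover and mutation drive genetic search"]

X = TfidfVectorizer().fit_transform(docs)
S = cosine_similarity(X)                          # similarity matrix

def initial_centers_from_similarity(S, k):
    chosen = [int(S.sum(axis=1).argmax())]        # densest document first
    while len(chosen) < k:
        sim_to_chosen = S[:, chosen].max(axis=1)
        sim_to_chosen[chosen] = np.inf            # never re-pick a chosen document
        chosen.append(int(sim_to_chosen.argmin()))
    return chosen

seeds = initial_centers_from_similarity(S, k=3)
km = KMeans(n_clusters=3, init=X[seeds].toarray(), n_init=1).fit(X)
print("seed documents:", seeds, "labels:", km.labels_.tolist())
```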
7. Plant Leaf Diseases Classification Using Improved K-Means Clustering and SVM Algorithm for Segmentation
Authors: Mona Jamjoom, Ahmed Elhadad, Hussein Abulkasim, Safia Abbas. Computers, Materials & Continua, SCIE EI, 2023, No. 7, pp. 367-382 (16 pages)
Several pests feed on leaves, stems, bases, and the entire plant, causing plant illnesses. As a result, it is vital to identify and eliminate the disease before it causes any damage to plants. Manually detecting and treating plant disease is challenging, since it requires much effort and a long processing period, so image processing is employed instead. The main goal of this study is to discover the disease that affects the plants by creating an image processing system that can recognize and classify four different forms of plant disease: Phytophthora infestans, Fusarium graminearum, Puccinia graminis, and tomato yellow leaf curl. This work uses a support vector machine (SVM) classifier to detect and classify the plant disease through image acquisition, pre-processing, segmentation, feature extraction, and classification. The gray level co-occurrence matrix (GLCM) and local binary pattern (LBP) features are used to identify the disease-affected portion of the plant leaf. According to the experimental data, the proposed technology can correctly detect and diagnose plant disease with 97.2% accuracy.
Keywords: SVM; machine learning; GLCM algorithm; k-means clustering; LBP
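The full pipeline (k-means segmentation of the leaf, then feature extraction, then SVM) needs real leaf images, so the sketch below covers only the GLCM + LBP feature step and the SVM classifier, run on synthetic grayscale patches; the parameter choices (distances, angles, LBP radius, kernel) are illustrative, not the paper's.

```python
# Hedged sketch of GLCM + LBP texture features feeding an SVM, on synthetic patches.
# Function names follow scikit-image >= 0.19 (graycomatrix/graycoprops).
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.svm import SVC

def texture_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([props, hist])

rng = np.random.default_rng(0)
# Two synthetic "classes": smooth patches vs. noisy patches standing in for leaf regions.
smooth = [rng.integers(100, 120, size=(64, 64), dtype=np.uint8) for _ in range(30)]
noisy = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(30)]
X = np.array([texture_features(p) for p in smooth + noisy])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="rbf", C=1.0).fit(X[::2], y[::2])          # train on every other patch
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```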
8. An Improved K-Means Algorithm Based on Initial Clustering Center Optimization
Authors: LI Taihao, NAREN Tuya, ZHOU Jianshe, REN Fuji, LIU Shupeng. ZTE Communications, 2017, No. B12, pp. 43-46 (4 pages)
The K-means algorithm is widely known for its simplicity and speed in text clustering. However, with the traditional K-means algorithm the selection of the initial clustering centers is somewhat random, so the fluctuation and instability of the clustering results are strongly affected by the initial clustering centers. This paper proposes an algorithm for selecting the initial clustering centers that eliminates the uncertainty of central point selection. The experimental results show that the improved K-means clustering algorithm is superior to the traditional algorithm.
Keywords: clustering; k-means algorithm; initial clustering center
9. A State of Art Analysis of Telecommunication Data by k-Means and k-Medoids Clustering Algorithms
Author: T. Velmurugan. Journal of Computer and Communications, 2018, No. 1, pp. 190-202 (13 pages)
Cluster analysis is one of the major data analysis methods widely used for many practical applications in emerging areas of data mining. A good clustering method produces high-quality clusters with high intra-cluster similarity and low inter-cluster similarity. Clustering techniques are applied in different domains to predict future trends from the available data for real-world use. This research work examines the performance of two of the most commonly used partition-based clustering algorithms, namely k-Means and k-Medoids. A state-of-the-art analysis of these two algorithms is implemented, and performance is analyzed based on the quality of their clustering results, by means of execution time and other components. Telecommunication data is the source data for this analysis: connection-oriented broadband data is given as input to assess the clustering quality of the algorithms, and the distance between server locations and their connections is used for clustering. The execution time of each algorithm is analyzed and the results are compared with one another. The results of the comparison study are satisfactory for the chosen application.
Keywords: k-means algorithm; k-medoids algorithm; data clustering; time complexity; telecommunication data
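A rough version of the timing comparison can be set up as follows; the telecommunication records are not available, so random 2-D points stand in for connection coordinates, and the k-medoids used here is a deliberately naive alternating scheme rather than an optimised PAM implementation.

```python
# Hedged sketch of a k-means vs. k-medoids timing comparison on stand-in 2-D data.
import time
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def naive_kmedoids(X, k, max_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(max_iter):
        labels = cdist(X, X[medoids]).argmin(axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if len(members) == 0:
                continue
            costs = cdist(X[members], X[members]).sum(axis=1)
            new_medoids[j] = members[costs.argmin()]      # point minimising intra-cluster cost
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

X = np.random.default_rng(1).random((3000, 2)) * 100      # stand-in connection coordinates

t0 = time.perf_counter()
KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
t_kmeans = time.perf_counter() - t0

t0 = time.perf_counter()
naive_kmedoids(X, k=5)
t_kmedoids = time.perf_counter() - t0

print(f"k-means: {t_kmeans:.3f}s   naive k-medoids: {t_kmedoids:.3f}s")
```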
10. Fault Diagnosis Model Based on Fuzzy Support Vector Machine Combined with Weighted Fuzzy Clustering (Cited: 3)
Authors: 张俊红, 马文朋, 马梁, 何振鹏. Transactions of Tianjin University, EI CAS, 2013, No. 3, pp. 174-181 (8 pages)
A fault diagnosis model is proposed based on a fuzzy support vector machine (FSVM) combined with fuzzy clustering (FC). Considering the relationship between a sample point and the non-self class, the FC algorithm is applied to generate fuzzy memberships. In the algorithm, sample weights based on a distribution density function of the data points and a genetic algorithm (GA) are introduced to enhance the performance of FC. A multi-class FSVM with a radial basis function kernel is then established according to the directed acyclic graph algorithm, with its penalty factor and kernel parameter optimized by the GA. Finally, the model is applied to multi-class fault diagnosis of rolling element bearings. The results show that the presented model performs well in identifying both fault types and fault degrees. Performance comparisons of the presented model with SVM and distance-based FSVM for the noisy case demonstrate its capacity for dealing with noise and its generalization ability.
Keywords: fuzzy support vector machine; fuzzy clustering; sample weight; genetic algorithm; parameter optimization; fault diagnosis
11. Genetic Algorithm Combined with the K-Means Algorithm: A Hybrid Technique for Unsupervised Feature Selection
Authors: Hachemi Bennaceur, Meznah Almutairy, Norah Alhussain. Intelligent Automation & Soft Computing, SCIE, 2023, No. 9, pp. 2687-2706 (20 pages)
The dimensionality of data is increasing very rapidly, which creates challenges for most of the current mining and learning algorithms, such as large memory requirements and high computational costs. The literature includes much research on feature selection for supervised learning; however, feature selection for unsupervised learning has only recently been studied. Finding the subset of features in unsupervised learning that enhances performance is challenging, since the clusters are indeterminate. This work proposes a hybrid technique for unsupervised feature selection called GAk-MEANS, which combines the genetic algorithm (GA) approach with the classical k-Means algorithm. In the proposed algorithm, a new fitness function is designed in addition to new smart crossover and mutation operators. The effectiveness of this algorithm is demonstrated on various datasets. Furthermore, the performance of GAk-MEANS has been compared with other genetic algorithms, such as the genetic algorithm using the Sammon Error Function and the genetic algorithm using the Sum of Squared Error Function, as well as with state-of-the-art statistical unsupervised feature selection techniques. Experimental results show that GAk-MEANS consistently selects subsets of features that result in better classification accuracy compared to the others. In particular, GAk-MEANS is able to significantly reduce the size of the subset of selected features by an average of 86.35% (72%-96.14%), which leads to an increase in accuracy by an average of 3.78% (1.05%-6.32%) compared to using all features. When compared with the genetic algorithm using the Sammon Error Function, GAk-MEANS reduces the size of the subset of selected features by 41.29% on average, improves the accuracy by 5.37%, and reduces the time by 70.71%. When compared with the genetic algorithm using the Sum of Squared Error Function, GAk-MEANS on average reduces the size of the subset of selected features by 15.91% and improves the accuracy by 9.81%, but the time is increased by a factor of 3. When compared with the machine-learning based methods, GAk-MEANS increases the accuracy by 13.67% on average with an 88.76% average increase in time.
Keywords: genetic algorithm; unsupervised feature selection; k-means clustering
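GAk-MEANS's fitness function and smart operators are specific to the paper and not reproduced here; the sketch below only illustrates the wrapper structure, with binary feature masks as chromosomes and the silhouette of a k-means clustering on the selected features as a stand-in fitness.

```python
# Hedged sketch of GA-style unsupervised feature selection wrapped around k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_wine
from sklearn.metrics import silhouette_score

def fitness(mask, X, k):
    if mask.sum() < 1:
        return -1.0
    Xs = X[:, mask.astype(bool)]
    labels = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(Xs)
    return silhouette_score(Xs, labels)

def ga_feature_selection(X, k, pop_size=20, generations=25, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, d))
    for _ in range(generations):
        scores = np.array([fitness(m, X, k) for m in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]      # keep the best half
        children = elite.copy()
        cut = rng.integers(1, d, size=len(children))
        for i, c in enumerate(cut):                                  # one-point crossover
            partner = elite[rng.integers(len(elite))]
            children[i, c:] = partner[c:]
        flip = rng.random(children.shape) < 0.05                     # bit-flip mutation
        children = np.where(flip, 1 - children, children)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(m, X, k) for m in pop])
    return pop[scores.argmax()].astype(bool)

if __name__ == "__main__":
    X = load_wine().data
    mask = ga_feature_selection(X, k=3)
    print("selected features:", np.flatnonzero(mask), "of", X.shape[1])
```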
12. An improved feature-weighted K-means clustering algorithm (Cited: 12)
Authors: 王慧, 申石磊. 《微电子学与计算机》, CSCD 北大核心, 2010, No. 7, pp. 161-163, 167 (4 pages)
An improved feature-weighted K-means clustering algorithm is proposed. The algorithm first selects initial cluster centers based on the distribution of the data samples, and then performs feature-weighted K-means clustering. Experimental results show that the algorithm produces clustering results of relatively high quality and can handle both numerical and categorical data.
Keywords: clustering; k-means algorithm; cluster centers; feature weighting
13. An improved K-means algorithm based on initial cluster center optimization and dimension weighting (Cited: 7)
Authors: 王越, 王泉, 吕奇峰, 曾晶. 《重庆理工大学学报(自然科学)》, CAS, 2013, No. 4, pp. 77-80 (4 pages)
To address the K-means algorithm's susceptibility to randomly selected initial cluster centers and its limited partitioning accuracy, an improved K-means algorithm is presented. The selection process for the initial cluster centers is improved first, and the dimensions along which the sample points differ most are then weighted. The clustering results of the original and the improved K-means algorithms are compared on the Iris data set. Experiments show that the improved algorithm is stable and reaches a clustering accuracy of 92%.
Keywords: clustering; K-means algorithm; initial cluster centers; dimension weighting; Iris data set
14. A k-means clustering algorithm based on attribute weight optimization (Cited: 10)
Authors: 熊平, 顾霄. 《微电子学与计算机》, CSCD 北大核心, 2014, No. 4, pp. 40-43 (4 pages)
Clustering is one of the most widely used data mining algorithms. To improve the quality of clustering results, a k-means clustering algorithm based on attribute-weight optimization is proposed using the method of Lagrange multipliers. When computing the distance between a sample and a centroid, the algorithm assigns each attribute a weight that reflects its importance, and in each iteration it automatically recomputes the optimal attribute weights from the change of the centroid vectors, so that the total distance between all samples and their corresponding centroids is minimized. Experimental results confirm the advantage of this method over the traditional k-means algorithm.
Keywords: clustering algorithm; attribute weights; data mining; objective function
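The closed-form weight update derived with Lagrange multipliers is not quoted in the abstract, so the sketch below uses the classical W-k-means rule as a stand-in: after each assignment step, the weight of every attribute is recomputed so that attributes with larger within-cluster dispersion receive smaller weights, with an exponent beta controlling how aggressive the weighting is.

```python
# Hedged sketch of attribute-weighted k-means with weights refreshed each iteration
# (W-k-means-style update used as a stand-in for the paper's derivation).
import numpy as np
from sklearn.datasets import load_iris

def weighted_kmeans(X, k, beta=2.0, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, size=k, replace=False)].copy()
    w = np.full(d, 1.0 / d)                                  # attribute weights, sum to 1
    labels = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        dist = (((X[:, None, :] - centers[None, :, :]) ** 2) * (w ** beta)).sum(axis=2)
        new_labels = dist.argmin(axis=1)
        centers = np.vstack([X[new_labels == j].mean(axis=0) if np.any(new_labels == j)
                             else centers[j] for j in range(k)])
        # per-attribute within-cluster dispersion D_j
        D = np.array([((X[:, j] - centers[new_labels, j]) ** 2).sum() for j in range(d)]) + 1e-12
        # w_j = 1 / sum_t (D_j / D_t)^(1/(beta-1)); smaller dispersion -> larger weight
        w = 1.0 / ((D[:, None] / D[None, :]) ** (1.0 / (beta - 1.0))).sum(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, centers, w

if __name__ == "__main__":
    labels, centers, w = weighted_kmeans(load_iris().data, k=3)
    print("attribute weights:", w.round(3))
```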
15. A robust optimization algorithm for K-means cluster centers (Cited: 7)
Author: 罗倩. 《计算机工程与设计》, 北大核心, 2015, No. 9, pp. 2395-2400 (6 pages)
To address the K-means algorithm's sensitivity to randomly selected initial cluster centers and the resulting instability and low accuracy of its results, a robust cluster-center optimization algorithm based on neighborhood-distance weighting is proposed. By imposing a data-density constraint, the cluster centers are driven toward dense regions of the data, which effectively overcomes the poor stability of K-means clustering results. Experiments on simulated data and standard data sets verify that the cluster centers obtained by the algorithm converge very close to the true centers of the standard data sets, with good clustering accuracy, robustness, and convergence speed.
Keywords: k-means clustering algorithm; initial cluster centers; neighborhood distance weighting; clustering optimization; robust algorithm
16. A weighted K-means clustering algorithm based on logistic regression functions (Cited: 8)
Authors: 林丽, 薛芳. 《集美大学学报(自然科学版)》, CAS, 2021, No. 2, pp. 139-145 (7 pages)
The traditional K-means clustering algorithm measures sample similarity with the Euclidean distance and treats all attribute features equally, ignoring their different contributions, which lowers the accuracy of the similarity computation. To remedy this, a feature-weighted K-means algorithm is proposed. First, the Softmax and Sigmoid logistic regression functions are used to compute feature weights, so that the weighted Euclidean distance represents sample similarity more accurately. Second, the strategy for selecting initial cluster centers is improved by choosing K samples that are far apart from each other, which effectively avoids mis-clustering and empty clusters. Experiments on UCI benchmark data sets show that the weighted K-means clustering algorithm reduces the number of iterations and improves clustering accuracy, precision, and recall.
Keywords: Euclidean distance; feature-weighted k-means algorithm; logistic regression function; initial cluster centers
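The abstract names the two ingredients, softmax/sigmoid feature weights and initial centres chosen as K mutually distant samples, but not the exact input to the logistic functions. In the sketch below the softmax is fed with each feature's standard deviation purely as a placeholder score, and the centres are picked by a farthest-point rule under the weighted distance.

```python
# Hedged sketch: softmax feature weights (placeholder relevance score) plus
# farthest-point initial centres, then weighted Lloyd iterations.
import numpy as np
from sklearn.datasets import load_iris

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def farthest_point_centers(X, k, w, seed=0):
    rng = np.random.default_rng(seed)
    centers = [rng.integers(len(X))]
    while len(centers) < k:
        d = np.sqrt((((X[:, None, :] - X[centers][None, :, :]) ** 2) * w).sum(axis=2))
        centers.append(int(d.min(axis=1).argmax()))      # sample farthest from chosen centres
    return X[centers].copy()

def weighted_kmeans(X, k, max_iter=100):
    w = softmax(X.std(axis=0))                           # placeholder per-feature relevance score
    centers = farthest_point_centers(X, k, w)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        d = (((X[:, None, :] - centers[None, :, :]) ** 2) * w).sum(axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.vstack([X[labels == j].mean(axis=0) if np.any(labels == j)
                                 else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers, w

labels, centers, w = weighted_kmeans(load_iris().data, k=3)
print("feature weights:", w.round(3))
```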
17. Research on a text clustering algorithm based on improved particle swarm optimization and K-Means (Cited: 8)
Authors: 钮永莉, 武斌. 《兰州文理学院学报(自然科学版)》, 2019, No. 4, pp. 44-47 (4 pages)
For large, high-dimensional, unstructured text data, plain K-Means clustering performs poorly and easily falls into local optima. This paper improves the particle swarm optimization algorithm by introducing a nonlinear, dynamically adjusted inertia-weight mechanism, and combines the improved PSO with the K-Means algorithm, which has strong local search ability, to form a text clustering algorithm based on improved PSO and K-Means (MPK-Clusters). Comparative experiments with three algorithms show that the new algorithm outperforms the other two in accuracy, recall, and F-measure, achieving better text clustering results.
Keywords: k-means algorithm; MPK-Clusters algorithm; PSO-KMeans algorithm; inertia weight
18. Applied research on an improved weight-based K-means algorithm (Cited: 1)
Authors: 宗春梅, 郝耀军, 焦莉娟. 《高师理科学刊》, 2017, No. 11, pp. 24-29 (6 pages)
Data clustering partitions data objects into classes or clusters and is an important technique in data mining. The education sector holds vast amounts of student information data, so introducing clustering techniques from data mining has real practical value. This paper describes how an improved, weight-based clustering technique is used to select, preprocess, and mine student score data, presents three Matlab experiments in which the score data are clustered with the K-means algorithm, and displays and analyzes the meaning of each of the three sets of results, also pointing out their limitations. The reasons behind the groupings observed in the experiments are examined, and much hidden, hard-to-detect but valuable information is uncovered in the analysis of student scores. Based on the clustering results, corresponding teaching measures and suggestions are proposed so that teaching quality can be improved in a targeted way.
Keywords: k-means algorithm; clustering analysis; weights
19. A weighted K-means algorithm based on firefly optimization (Cited: 43)
Authors: 陈小雪, 尉永清, 任敏, 孟媛媛. 《计算机应用研究》, CSCD 北大核心, 2018, No. 2, pp. 466-470 (5 pages)
The traditional K-means algorithm is easily affected by the initial cluster centers and by outliers. Exploiting the strong global search ability and fast convergence of the firefly optimization algorithm, the initial cluster centers of K-means are optimized, and a weighted Euclidean distance is introduced to reduce the adverse influence of outliers and other sources of uncertainty, yielding a weighted K-means algorithm based on firefly optimization. The algorithm improves clustering performance while effectively accelerating convergence. In the experimental stage, clustering experiments and validity tests on several UCI data sets fully demonstrate the effectiveness and superiority of the algorithm.
Keywords: weighted k-means; clustering; firefly algorithm
20. A K-means algorithm with initial cluster centers optimized by weighted local variance (Cited: 11)
Authors: 蔡宇浩, 梁永全, 樊建聪, 李璇, 刘文华. 《计算机科学与探索》, CSCD 北大核心, 2016, No. 5, pp. 732-741 (10 pages)
In the traditional K-means algorithm, the random selection of initial cluster centers makes the clustering result vary with the chosen centers. Many center-selection methods have been proposed, but the results of most existing algorithms depend heavily on parameter tuning. To address this, a new initial-center selection algorithm called WLV-K-means (weighted local variance K-means) is proposed. It measures sample density with a weighted local variance so as to better identify high-density samples, and heuristically selects the initial cluster centers with an improved max-min method. Experiments on UCI data sets show that WLV-K-means not only achieves good clustering results but is also less affected by parameter changes and behaves more stably.
Keywords: k-means algorithm; variance; weighting; max-min method; initial cluster centers
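The weighted-local-variance formula of WLV-K-means is not given in the abstract; as an assumption, the sketch below scores each sample's density by the inverse of a distance-weighted variance over its nearest neighbours and then applies a max-min rule that picks, as the next centre, a dense sample far from the centres chosen so far.

```python
# Hedged sketch of WLV-style seeding: density from a weighted local variance, then
# a max-min rule for the initial centres. The paper's exact formulas may differ.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

def wlv_density(X, n_neighbors=10):
    D = cdist(X, X)
    idx = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]       # indices of nearest neighbours
    dens = np.empty(len(X))
    for i, nb in enumerate(idx):
        w = 1.0 / (D[i, nb] + 1e-12)                        # closer neighbours weigh more
        w /= w.sum()
        mu = (w[:, None] * X[nb]).sum(axis=0)
        dens[i] = 1.0 / ((w[:, None] * (X[nb] - mu) ** 2).sum() + 1e-12)
    return dens

def wlv_maxmin_centers(X, k, n_neighbors=10):
    dens = wlv_density(X, n_neighbors)
    centers = [int(dens.argmax())]                          # densest sample first
    D = cdist(X, X)
    while len(centers) < k:
        score = D[:, centers].min(axis=1) * dens            # far from chosen centres AND dense
        score[centers] = -np.inf
        centers.append(int(score.argmax()))
    return X[centers].copy()

X = load_iris().data
init = wlv_maxmin_centers(X, k=3)
km = KMeans(n_clusters=3, init=init, n_init=1).fit(X)
print("SSE with WLV-style seeding:", round(km.inertia_, 2))
```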