Abstract: DBSCAN is a density-based clustering algorithm that can discover clusters of arbitrary shape in datasets containing noise points, and it is widely used. However, it suffers from slow computation caused by large-scale disk I/O, and from clustering bias caused by clusters of uneven density and by manually set thresholds. To address these defects, an improved Spark in-memory iterative parallel algorithm, SDKB-DBSCAN (Spark Density Division Kernel Density Estimation Boundary Strategy - Density-Based Spatial Clustering of Applications with Noise), is proposed. It combines the Spark caching mechanism with irregular dynamic partitioning, boundary merging, and parallelized kernel density estimation. Experiments show that the improved algorithm is generally applicable to clusters of different shapes and to larger-scale data, with measurable improvement in both accuracy and computation speed.
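For readers unfamiliar with the baseline the paper improves on, below is a minimal single-machine sketch of classic DBSCAN in Python. It is not the paper's SDKB-DBSCAN: the `eps` and `min_pts` thresholds are fixed by hand here, whereas the paper sets them via parallelized kernel density estimation, and there is no Spark partitioning or boundary merging.

```python
# Minimal single-machine DBSCAN sketch (NOT the paper's Spark-parallel SDKB-DBSCAN).
# eps / min_pts are the density thresholds that the paper derives automatically
# via kernel density estimation; here they are chosen manually.
import math

def region_query(points, i, eps):
    """Indices of all points within eps of points[i] (Euclidean distance)."""
    px, py = points[i]
    return [j for j, (qx, qy) in enumerate(points)
            if math.hypot(px - qx, py - qy) <= eps]

def dbscan(points, eps, min_pts):
    """Return one label per point: a cluster id >= 0, or -1 for noise."""
    labels = [None] * len(points)            # None = not yet visited
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:         # not a core point
            labels[i] = -1                   # provisionally noise
            continue
        cluster += 1                         # start a new cluster from this core point
        labels[i] = cluster
        seeds = list(neighbors)
        while seeds:                         # expand the cluster density-reachably
            j = seeds.pop()
            if labels[j] == -1:              # noise reachable from a core: border point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = region_query(points, j, eps)
            if len(j_neighbors) >= min_pts:  # j is itself a core point: keep expanding
                seeds.extend(j_neighbors)
    return labels

# Two dense blobs plus one far-away noise point.
pts = [(0, 0), (0, 1), (1, 0), (1, 1),
       (10, 10), (10, 11), (11, 10), (11, 11),
       (50, 50)]
labels = dbscan(pts, eps=1.5, min_pts=3)     # -> [0,0,0,0, 1,1,1,1, -1]
```

The quadratic `region_query` scan is exactly the cost that motivates the paper's partitioning: a Spark version partitions the point set, runs local DBSCAN per partition, then merges clusters that share boundary points.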
Funding: Project (KC18071) supported by the Application Foundation Research Program of Xuzhou, China; Projects (2017YFC0804401, 2017YFC0804409) supported by the National Key R&D Program of China.
Abstract: The sharp increase in the amount of Chinese text data on the Internet has significantly prolonged the time needed to classify such data. To solve this problem, this paper proposes and implements a parallel naive Bayes algorithm (PNBA) for Chinese text classification based on Spark, a parallel in-memory computing platform for big data. The algorithm parallelizes the entire training and prediction process of the naive Bayes classifier, mainly through the programming model of resilient distributed datasets (RDDs). For comparison, a Hadoop-based PNBA is also implemented. Test results show that, in the same computing environment and on the same text sets, the Spark PNBA is clearly superior to the Hadoop PNBA on key indicators such as speedup and scalability. Therefore, Spark-based parallel algorithms can better meet the requirements of large-scale Chinese text data mining.
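To make the classifier being parallelized concrete, below is a minimal single-machine sketch of a multinomial naive Bayes text classifier with Laplace smoothing. It is an illustration of the underlying model, not the paper's Spark/RDD implementation; tokenization is a plain whitespace split, whereas Chinese text would need a real word segmenter, and the toy corpus and labels are invented for the example.

```python
# Minimal single-machine multinomial naive Bayes for text classification
# (an illustration of the model the paper parallelizes via Spark RDDs,
# not the paper's implementation). Toy corpus and labels are made up.
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (label, text). Returns (log_prior, log_cond)."""
    class_docs = Counter()                     # documents per class
    word_counts = defaultdict(Counter)         # word frequencies per class
    vocab = set()
    for label, text in docs:
        class_docs[label] += 1
        for w in text.split():                 # whitespace tokenizer (toy assumption)
            word_counts[label][w] += 1
            vocab.add(w)
    total = sum(class_docs.values())
    log_prior = {c: math.log(n / total) for c, n in class_docs.items()}
    log_cond = {}
    for c in class_docs:
        denom = sum(word_counts[c].values()) + len(vocab)   # Laplace smoothing
        log_cond[c] = {w: math.log((word_counts[c][w] + 1) / denom) for w in vocab}
        log_cond[c]["__unk__"] = math.log(1 / denom)        # unseen-word fallback
    return log_prior, log_cond

def predict(model, text):
    """Pick the class maximizing log P(c) + sum of log P(w|c)."""
    log_prior, log_cond = model
    scores = {}
    for c in log_prior:
        s = log_prior[c]
        for w in text.split():
            s += log_cond[c].get(w, log_cond[c]["__unk__"])
        scores[c] = s
    return max(scores, key=scores.get)

docs = [("sports", "ball game team win"),
        ("sports", "team match score"),
        ("tech", "spark rdd cluster compute"),
        ("tech", "parallel compute big data")]
model = train(docs)
label = predict(model, "team game score")      # -> "sports"
```

In the Spark version described by the abstract, the per-class word counting in `train` becomes a map-reduce over an RDD of documents, and `predict` becomes a map over an RDD of unlabeled texts, which is what parallelizes both training and prediction end to end.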