Abstract
To rapidly mine the clustering characteristics of large-scale spatial data, this paper proposes PClusterdp, a parallel density-based clustering method built on Resilient Distributed Datasets (RDDs) and the cluster-dp density clustering algorithm. First, an RDD partitioning scheme is designed to balance the workload: the grid is divided automatically according to the spatial distribution of the data, and objects are assigned so that each grid cell holds a roughly equal amount of data, balancing the load across compute nodes. Second, a definition of local density suited to parallel computing is proposed, and the computation of cluster centers is revised so that center objects are determined automatically, removing the original algorithm's reliance on a hand-drawn decision graph. Finally, optimization strategies such as intra-grid and inter-grid cluster merging combine the partially clustered data into the final result, enabling fast clustering of large-scale spatial data. The algorithm was implemented on the Spark data processing platform. Experimental results show that the proposed method clusters large-scale spatial data effectively, and achieves higher accuracy and better system performance than traditional density-based clustering methods.
Source
《湖南大学学报(自然科学版)》
Indexed in: EI, CAS, CSCD, Peking University Core Journals (北大核心)
2015, No. 8, pp. 116-124 (9 pages)
Journal of Hunan University: Natural Sciences
Funding
National Natural Science Foundation of China (Grant No. 61304199)
Development Fund of the Hunan Provincial Key Laboratory of Special Road Engineering, Changsha University of Science and Technology