Data quality strongly influences the usefulness of grain big data applications, so data cleaning is necessary and important work. In the MapReduce framework, parallel techniques are often used to execute data cleaning with high scalability, but without careful design the cleaning process contains a large amount of redundant computation, which lowers performance. In this research, we found that during data cleaning some tasks are carried out multiple times on the same input files, or require the same intermediate results. To address this problem, we propose a new optimization technique based on task merging. By merging simple or redundant computations over the same input files, the number of MapReduce iterations can be greatly reduced. Experiments show that this approach significantly reduces overall system runtime, demonstrating that the data cleaning process is optimized. In this paper we optimize several data cleaning modules, including entity identification, inconsistent data repair, and missing value filling. Experimental results show that the proposed method improves the efficiency of grain big data cleaning.
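A minimal sketch of the task-merge idea, not the authors' implementation: instead of launching one scan of the input per cleaning step, a single merged mapper tags each emitted pair with the task it serves, so entity identification and missing-value detection share one pass over the file. The record layout (id, name, weight), field names, and helper functions below are hypothetical.

```python
# Sketch of task merging for MapReduce-style data cleaning (hypothetical
# record layout: id, name, weight). One merged mapper feeds two cleaning
# tasks from a single scan instead of scanning the same input twice.

from collections import defaultdict

def merged_map(line):
    """One scan of the record serves two cleaning tasks."""
    rec_id, name, weight = line.rstrip("\n").split(",")
    # Task A: entity identification -- group records by a normalized name key.
    yield ("entity", name.strip().lower()), rec_id
    # Task B: missing-value detection -- flag records whose weight is empty.
    if weight.strip() == "":
        yield ("missing", "weight"), rec_id

def merged_reduce(tagged_key, values):
    """Dispatch on the task tag; each branch is one cleaning module's reducer."""
    task, key = tagged_key
    if task == "entity":
        return ("candidate_duplicates", key, sorted(values)) if len(values) > 1 else None
    if task == "missing":
        return ("records_missing_weight", key, sorted(values))

if __name__ == "__main__":
    lines = ["1,ACME Grain,40.2", "2,acme grain,", "3,Delta Mills,55.0"]
    groups = defaultdict(list)
    for line in lines:                      # the single shared scan
        for k, v in merged_map(line):
            groups[k].append(v)
    for k, vs in groups.items():            # simulated shuffle + reduce
        out = merged_reduce(k, vs)
        if out:
            print(out)
```

In a real Hadoop job the task tag would simply be part of the map output key, so the shuffle routes each tagged group to the reducer branch for that cleaning module; the point here is only that one input scan replaces several.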
With the arrival of the big data era and the rapid growth in data volume, traditional clustering algorithms face great challenges. To improve clustering efficiency, this paper designs and implements a parallelized Partitioning Around Medoids clustering algorithm on the Hadoop platform and, from the perspective of optimizing clustering units and cluster centers, proposes a Coarse-Grained Clustering Unit Strategy that draws on the core idea of visual clustering. Comparative experiments on multiple groups show that, under this strategy, the algorithm improves running efficiency and computing capability by more than 6%, and the implemented parallel algorithm achieves good speedup, scale-up, and size-up. These results lay a foundation for subsequent cluster analysis of large datasets.
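A minimal sketch of the coarse-grained clustering unit idea, under the assumption that it amounts to grid-based pre-aggregation: points are binned into cells, each non-empty cell becomes one weighted unit (its centroid plus its point count), and a PAM-style medoid swap search runs on the much smaller set of units. The Hadoop parallelization is omitted; the cell size, toy data, and k below are illustrative assumptions, not the paper's settings.

```python
# Sketch: PAM-style k-medoids over coarse-grained clustering units.
# Raw points are binned into grid cells; each cell is summarized by a
# weighted representative, and the medoid swap search runs on those
# representatives instead of the full point set.

import math, random
from collections import defaultdict

def coarse_units(points, cell=1.0):
    """Bin 2-D points into grid cells; return (centroid, weight) per cell."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(math.floor(x / cell), math.floor(y / cell))].append((x, y))
    units = []
    for pts in cells.values():
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        units.append(((cx, cy), len(pts)))
    return units

def cost(units, medoids):
    """Weighted total distance from each unit to its nearest medoid."""
    return sum(w * min(math.dist(c, m) for m in medoids) for c, w in units)

def pam(units, k, seed=0):
    """Plain PAM swap search over the coarse units."""
    random.seed(seed)
    medoids = [c for c, _ in random.sample(units, k)]
    best = cost(units, medoids)
    improved = True
    while improved:
        improved = False
        for i in range(k):
            for c, _ in units:
                trial = medoids[:i] + [c] + medoids[i + 1:]
                if (t := cost(units, trial)) < best:
                    medoids, best, improved = trial, t, True
    return medoids, best

if __name__ == "__main__":
    random.seed(1)
    pts = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(300)] + \
          [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(300)]
    units = coarse_units(pts, cell=1.0)
    medoids, total = pam(units, k=2)
    print(len(units), "units, medoids:", medoids)
```

Because the swap search cost grows with the number of objects being clustered, collapsing hundreds of points into a few dozen weighted units is where the efficiency gain comes from; in the parallel setting each mapper could build units for its own split before the global medoid search.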