
Scheduling Strategy of Reduce Task Based on Data Localization in Hadoop
Cited by: 1
Abstract: During MapReduce job processing, when Reduce tasks start and remotely pull the output data of the Map stage, a large amount of network bandwidth is consumed, and network bottlenecks may even arise. This paper proposes a task allocation strategy based on data localization and load balancing. In this strategy, the user first sets a sampling size M, and the first M data blocks are sampled during the Map stage. Next, Reduce tasks are allocated according to the sampling results while taking data locality into account. The Reduce tasks are then reallocated on the basis of load balancing, and through this allocation the system generates a task allocation table. Finally, the Reduce tasks are started and data pulling begins; data that was not sampled is allocated according to the task allocation table. Extensive experiments verify that the proposed strategy not only reduces the amount of data transferred in the Shuffle stage and lowers network bandwidth consumption, but also avoids the situation where some nodes sit idle while others are overloaded or even unable to finish their tasks, thereby improving the overall data processing capacity of the cluster.
Author: Wang Hao (王浩)
Source: Computer and Modernization (《计算机与现代化》), 2016, No. 1, pp. 114-120 (7 pages)
Funding: Chongqing Science and Technology Plan Project (cstc2013jcsf10034)
Keywords: sampling; MapReduce; localization; task allocation; load balancing
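
The abstract above describes a three-step allocation procedure: sample the first M blocks of map output, place each Reduce partition on the node holding most of its sampled intermediate data (data locality), then rebalance partitions from overloaded to underloaded nodes and record the result in a task allocation table. The following is a minimal Java sketch of that general idea only, not the paper's implementation; all class, method, and variable names (ReduceAllocationSketch, allocate, sampled, and so on) are hypothetical, and the rebalancing criterion is an assumption for illustration.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch (not the paper's implementation) of sampling-based Reduce
 * allocation: place each partition by locality first, then rebalance by load,
 * producing an allocation table that unsampled data would also follow.
 */
public class ReduceAllocationSketch {

    /**
     * @param sampled       node -> per-partition bytes of intermediate data
     *                      observed on that node in the sampled M blocks
     * @param numPartitions number of Reduce partitions
     * @return allocation table: partition -> assigned node
     */
    public static Map<Integer, String> allocate(Map<String, long[]> sampled, int numPartitions) {
        Map<Integer, String> table = new HashMap<>(); // partition -> node
        Map<String, Long> load = new HashMap<>();     // node -> assigned bytes
        sampled.keySet().forEach(n -> load.put(n, 0L));

        // Step 1: locality-first assignment -- each partition goes to the node
        // that holds the largest share of its sampled intermediate data.
        long[] partitionSize = new long[numPartitions];
        for (int p = 0; p < numPartitions; p++) {
            String best = null;
            long bestBytes = -1;
            for (Map.Entry<String, long[]> e : sampled.entrySet()) {
                long bytes = e.getValue()[p];
                partitionSize[p] += bytes;
                if (bytes > bestBytes) { bestBytes = bytes; best = e.getKey(); }
            }
            table.put(p, best);
            load.merge(best, partitionSize[p], Long::sum);
        }

        // Step 2: load-balancing pass -- move partitions off nodes whose
        // assigned volume exceeds the cluster average (assumed criterion).
        long avg = load.values().stream().mapToLong(Long::longValue).sum() / load.size();
        for (int p = 0; p < numPartitions; p++) {
            String node = table.get(p);
            if (load.get(node) > avg) {
                String lightest = Collections.min(load.entrySet(), Map.Entry.comparingByValue()).getKey();
                if (!lightest.equals(node)) {
                    load.merge(node, -partitionSize[p], Long::sum);
                    load.merge(lightest, partitionSize[p], Long::sum);
                    table.put(p, lightest); // update allocation table entry
                }
            }
        }
        return table; // consulted when Reduce tasks start pulling data
    }
}
```

In this sketch the returned table plays the role of the abstract's "task allocation table": when the Reduce tasks start and data pulling begins, map output that was not sampled would be routed according to the same partition-to-node mapping. The locality-first, balance-second ordering mirrors the two allocation passes described in the abstract, though the paper's actual sampling and rebalancing rules may differ.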

