
Spark任务间消息传递方法研究 (Cited by: 2)

Exploring Message Passing Method Between Spark Tasks
Abstract: Many engineering problems and scientific research tasks now face the dual challenge of big-data processing and high-performance computing. Spark, a distributed processing framework built on in-memory computing, has been widely adopted in academia and industry, but its MapReduce-like programming model provides no communication between tasks, so the numerical algorithms used in scientific computing cannot be implemented efficiently. To address this problem, this paper studies a solution that combines Spark's in-memory computing with the MPI message-passing model, exploiting both the speed of in-memory data access and MPI's high-performance communication mechanisms. The combination compensates for the limited expressiveness of the Spark programming model and, in turn, gives MPI a data-oriented DAG style of computation. By modifying Spark's internal runtime environment and scheduling system, MPI is integrated seamlessly into Spark, yielding a unified in-memory computing system for high-performance computing and big-data tasks. Test results show a performance improvement of at least 50% over Spark on numerical computation and iterative algorithms.
Authors: XIA Libin (夏立斌), LIU Xiaoyu (刘晓宇), SUN Wei (孙玮), JIANG Xiaowei (姜晓巍), SUN Gongxing (孙功星) (Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China; University of Chinese Academy of Sciences, Beijing 100049, China)
Source: Computer Engineering and Applications (《计算机工程与应用》), CSCD, Peking University core journal, 2022, No. 21, pp. 91-97 (7 pages).
Funding: National Natural Science Foundation of China (12275295, 11775249).
Keywords: Spark; MPI; scientific computing; in-memory computing; iterative algorithm
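
This index entry does not show how the authors embed MPI inside Spark's executors, so the sketch below is purely illustrative and is not the system described in the paper. It contrasts, in Python, the two communication patterns the abstract compares: an iterative reduction that plain Spark routes through driver-side aggregation on every step, and the same reduction expressed as an MPI collective via mpi4py. The function names, the toy gradient update, the learning rate 0.01, and the partition count are invented for illustration; the MPI half assumes it is launched with mpiexec so that MPI.COMM_WORLD spans several ranks.

# Illustrative sketch only (not the paper's modified Spark runtime).
# Part 1 uses plain PySpark; Part 2 uses mpi4py and must be started with
#   mpiexec -n <ranks> python this_script.py
import numpy as np

def spark_style_iteration(sc, data, w, steps=10):
    # Plain Spark: every step re-ships the closure, reduces the partial sums
    # back to the driver, and only then updates w; tasks never talk to each other.
    rdd = sc.parallelize(data, numSlices=4).cache()
    for _ in range(steps):
        grad = rdd.map(lambda x: x * w).treeAggregate(
            0.0, lambda acc, x: acc + x, lambda a, b: a + b)
        w -= 0.01 * grad  # toy gradient step, invented for illustration
    return w

def mpi_style_iteration(local_data, w, steps=10):
    # Message passing: each rank keeps its slice of the data resident and
    # exchanges only a reduced scalar per step, with no driver round-trip.
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    for _ in range(steps):
        local_grad = float(w * np.sum(local_data))
        grad = comm.allreduce(local_grad, op=MPI.SUM)
        w -= 0.01 * grad
    return w

Run on the same data, the two functions produce the same sequence of w values; what differs, and what the paper's measurements target, is the cost of the per-iteration communication.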


