
Implementation of MPI for the Embedded VxWorks Operating System
Abstract: Basic research on the Message Passing Interface (MPI) is of great significance to parallel computing on embedded real-time systems, yet few concrete MPI implementations have been produced domestically. To address this, this paper proposes a parallel programming solution for the embedded VxWorks real-time operating system. Taking the open-source MPICH2 implementation for Linux as its reference, it presents WMPI, an MPI implementation for embedded systems that brings the MPI parallel programming platform to VxWorks. The paper describes methods and techniques for implementing the MPI standard on top of a low-level communication component, and builds an embedded MPI parallel application development platform. Experimental results show that WMPI correctly implements the MPI standard in the VxWorks environment and delivers good performance.
Authors: 芮国俊, 王婷
Published in: Information Technology (《信息技术》), 2016, No. 4, pp. 196-200 (5 pages)
Keywords: MPI; embedded system; VxWorks; parallel computing



