
Signal Sparse Decomposition Based on MPI Parallel Computing (Cited by: 2)
Abstract: Building on the theory of signal sparse decomposition and its most widely used algorithm, Matching Pursuit (MP), this paper addresses the heavy computational cost of MP by implementing the decomposition on a parallel computing system. A Beowulf cluster of 8 PCs is built, interconnected by 100 Mb/s Fast Ethernet and programmed with the Message Passing Interface (MPI) message-passing mechanism; a parallel program running on this system implements the MP algorithm. Tests show high parallel efficiency: decomposition time falls from about 75 minutes on a single PC to about 11 minutes on the 8-PC cluster, greatly accelerating signal sparse decomposition.
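This record does not include the paper's program. As a minimal sketch of the Matching Pursuit loop the abstract describes, the following uses NumPy; the function name, dictionary layout (unit-norm atoms as columns), and sizes are illustrative assumptions, not the paper's code:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy MP: at each step pick the atom with the largest inner
    product with the residual and subtract its projection.
    `dictionary` holds unit-norm atoms as columns (assumed layout)."""
    residual = signal.astype(float)
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        # the inner-product scan over every atom is the costly step
        # that motivates distributing the search across a cluster
        products = dictionary.T @ residual
        k = int(np.argmax(np.abs(products)))
        coeffs[k] += products[k]
        residual = residual - products[k] * dictionary[:, k]
    return coeffs, residual

# usage: decompose a toy 2-sparse signal over a random dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)           # normalize atoms
x = 3.0 * D[:, 5] + 0.5 * D[:, 100]
c, r = matching_pursuit(x, D, n_iter=20)
print(np.linalg.norm(r) < np.linalg.norm(x))  # → True (residual norm strictly decreases)
```

Each iteration's full inner-product scan over the dictionary dominates the cost and grows with dictionary size, which is the computational burden the paper's 8-PC parallelization targets.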
Source: Computer Engineering (《计算机工程》), 2008, No. 12, pp. 19-21 (3 pages). Indexed: CAS, CSCD, PKU Core.
Funding: National Natural Science Foundation of China (60602043); Sichuan Applied Basic Research Foundation (2006J13-114, 04JY029-05).
Keywords: sparse decomposition; Matching Pursuit (MP); parallel computing; MPI message passing
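The record gives no implementation detail beyond "8 PCs with MPI", but a common way to parallelize MP is to split the dictionary columns across ranks, let each rank find its local best atom, and combine the candidates with an MPI_MAXLOC-style reduction. The sketch below simulates that partitioning serially; the partition scheme, function name, and sizes are assumptions, not the paper's:

```python
import numpy as np

def best_atom_partitioned(dictionary, residual, n_ranks=8):
    """Simulate the rank-local search plus reduction an MPI MP code
    could use: each 'rank' scans one slice of the atoms (assumed
    partitioning; the record does not detail the paper's scheme)."""
    n_atoms = dictionary.shape[1]
    bounds = np.linspace(0, n_atoms, n_ranks + 1, dtype=int)
    local_best = []
    for rank in range(n_ranks):          # one loop body = one MPI rank
        lo, hi = bounds[rank], bounds[rank + 1]
        p = dictionary[:, lo:hi].T @ residual
        j = int(np.argmax(np.abs(p)))
        local_best.append((abs(p[j]), lo + j))   # (value, global index)
    # the step a real MPI code would do with MPI_Allreduce + MAXLOC
    _, k = max(local_best)
    return int(k)

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 42]
print(best_atom_partitioned(D, x))  # → 42
```

Under this scheme each PC scans only 1/8 of the atoms per iteration, which is consistent with the reported timings: speedup 75/11 ≈ 6.8 on 8 PCs, a parallel efficiency of about 85%.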
  • Related Literature

References (4)

  • 1. Mallat S, Zhang Z. Matching Pursuits with Time-Frequency Dictionaries[J]. IEEE Transactions on Signal Processing, 1993, 41(12): 3397-3415.
  • 2. Arthur P L, Philipos C L. Voiced/Unvoiced Speech Discrimination in Noise Using Gabor Atomic Decomposition[C]//Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing. Hong Kong, China: [s. n.], 2003: 820-828.
  • 3. Huber P J. Projection Pursuit[J]. The Annals of Statistics, 1985, 13(2): 435-475.
  • 4. 黎康保, 陶文正, 许丽华, 黎文楼. Building a Parallel Supercomputer with a PC Cluster[J]. Computer Engineering, 2000, 26(9): 1-3. (Cited by: 17)

Secondary References (4)

  • 1. California Institute of Technology, CESDIS (Goddard Space Flight Center), Emory University. How to Build a Beowulf. CCC'97, 1997.
  • 2. Burns G, Daoud R, Vaigl J. LAM: An Open Cluster Environment for MPI. Ohio Supercomputer Center, Columbus, Ohio, 1999.
  • 3. Burns G, Daoud R. Robust MPI Message Delivery with Guaranteed Resources. 1998.
  • 4. Petersen R. Linux: The Complete Reference. McGraw-Hill, 1998.

Co-cited Literature (16)

References Also Cited by Citing Literature (13)

  • 1. 任波, 王乘. Performance Analysis of MPI Cluster Communication[J]. Computer Engineering, 2004, 30(11): 71-73. (Cited by: 13)
  • 2. 熊盛武, 王鲁, 杨婕. Key Technologies for Building High-Performance Cluster Computer Systems[J]. Microcomputer Information, 2006(01X): 86-88. (Cited by: 26)
  • 3. Chou Y C, Nestinger S S, Cheng H H. Ch MPI: Interpretive Parallel Computing in C[J]. Computing in Science and Engineering, 2010, 12(2): 54-67.
  • 4. Wilkinson B, Allen M. Parallel Programming[M]. Translated by 陆鑫达. Beijing: China Machine Press, 2005.
  • 5. Whaley R C, Petitet A, Dongarra J. Automated Empirical Optimizations of Software and the ATLAS Project[J]. Parallel Computing, 2001, 27(1-2): 3-25.
  • 6. Gropp W, Lusk E, Swider D. Improving the Performance of MPI Derived Datatypes[C]//Proc. of the Third MPI Developer's and User's Conference. MPI Software Technology Press, 1999: 25-30.
  • 7. Byna S, Gropp W, Sun Xian-He, Thakur R. Improving the Performance of MPI Derived Datatypes by Optimizing Memory-Access Cost[C]//Proc. of the IEEE International Conference on Cluster Computing, 2003.
  • 8. Reussner R, Traff J L, Hunzelmann G. A Benchmark for MPI Derived Datatypes[C]//Recent Advances in Parallel Virtual Machine and Message Passing Interface, 7th European PVM/MPI Users' Group Meeting. Lecture Notes in Computer Science, Vol. 1908, 2000: 10-17.
  • 9. Wu Jiesheng, Wyckoff P, Panda D. High Performance Implementation of MPI Derived Datatype Communication over InfiniBand[C]//Proc. of the 18th International Parallel and Distributed Processing Symposium, 2004.
  • 10. 王文凡, 张志鸿, 申杰. Research and Application of Parallel Monte Carlo Methods in a Cluster Environment[J]. Microcomputer Information, 2007, 23(31): 270-272. (Cited by: 3)

Citing Literature (2)

Secondary Citing Literature (12)
