
A New Sparse Matrix Storage Format for Improving the Vectorization Performance of SpMV (Cited by: 4)

A NEW SPARSE MATRIX STORAGE FORMAT FOR IMPROVING SPMV PERFORMANCE BY SIMD
Abstract: Sparse matrix-vector multiplication (SpMV) is an important computational kernel in scientific and engineering applications. On modern processors with memory hierarchies, however, SpMV using the traditional CSR (Compressed Sparse Row) format often runs at far below 10% of the hardware's peak floating-point performance. Current processor architectures generally rely on SIMD vectorization for acceleration, but CSR-based SpMV cannot exploit it directly because of its irregular memory access pattern. To take advantage of SIMD, a new storage format for sparse matrices with local structure, CSRL (Compressed Sparse Row with Local information), is proposed. The CSRL format reduces the number of memory accesses during SpMV and allows both loads and computation to make full use of the hardware's SIMD units, improving SpMV performance. Experiments show that, compared with the implementation in the Intel MKL library (version 10.3), SpMV based on the CSRL format achieves an average performance improvement of 29.5% and a maximum improvement of 89%.
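The contrast the abstract draws can be illustrated with a minimal scalar sketch: in CSR, the inner loop gathers `x[col[k]]` at irregular positions, while a CSRL-style layout groups nonzeros whose column indices are consecutive into segments, so each inner loop reads `x` contiguously. The segment arrays and field names below are illustrative assumptions, not the paper's exact data layout.

```python
def spmv_csr(rowptr, col, val, x):
    """Classic CSR SpMV: y[i] = sum of val[k] * x[col[k]] over row i.
    The access x[col[k]] is an irregular gather, which hinders SIMD."""
    y = []
    for i in range(len(rowptr) - 1):
        y.append(sum(val[k] * x[col[k]] for k in range(rowptr[i], rowptr[i + 1])))
    return y

def spmv_csrl(segptr, segcol, seglen, val, x):
    """CSRL-style SpMV sketch: runs of nonzeros with consecutive column
    indices are stored as segments (first column, length).  Each inner
    loop then reads a contiguous slice of x, the property that makes
    SIMD loads possible.  segptr/segcol/seglen are illustrative names."""
    y, k = [], 0  # k: running index into val, stored in row-major order
    for i in range(len(segptr) - 1):
        acc = 0.0
        for s in range(segptr[i], segptr[i + 1]):
            c = segcol[s]  # first column of this contiguous segment
            for j in range(seglen[s]):
                acc += val[k] * x[c + j]  # contiguous access to x
                k += 1
        y.append(acc)
    return y

# 3x4 example matrix:
# [[1 2 0 0],
#  [0 0 3 0],
#  [0 4 5 6]]
rowptr, col = [0, 2, 3, 6], [0, 1, 2, 1, 2, 3]
val = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
segptr, segcol, seglen = [0, 1, 2, 3], [0, 2, 1], [2, 1, 3]
x = [1.0, 1.0, 1.0, 1.0]
print(spmv_csr(rowptr, col, val, x))                # [3.0, 3.0, 15.0]
print(spmv_csrl(segptr, segcol, seglen, val, x))    # [3.0, 3.0, 15.0]
```

In a real implementation the contiguous inner loop would be executed with vector loads and fused multiply-adds; the sketch only shows why the segment layout removes the gather.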
Authors: LIU Fangfang (刘芳芳), YANG Chao (杨超)
Source: Journal on Numerical Methods and Computer Applications (《数值计算与计算机应用》, CSCD), 2014, No. 4, pp. 269-276 (8 pages)
Funding: National Natural Science Foundation of China (61170075, 91130023); National 973 Program (2011CB309701); National 863 Program (2012AA010903)
Keywords: sparse matrix, sparse matrix-vector multiplication (SpMV), vectorization, locality, CSRL, SIMD



