
Pragma Directed Shared Memory Centric Optimizations on GPUs (Cited: 1)

Abstract: GPUs have become a ubiquitous choice as coprocessors because of their excellent concurrent-processing capability. In GPU architectures, shared memory plays a very important role in system performance, as it can largely improve bandwidth utilization and accelerate memory operations. However, even for affine GPU applications with regular access patterns, optimizing for shared memory is not easy: it often requires programmer expertise and nontrivial parameter selection, and improper shared memory usage may even underutilize GPU resources. Even with state-of-the-art high-level programming models (e.g., OpenACC and OpenHMPP), it is still hard to exploit shared memory, since they lack inherent support for describing shared memory optimizations and selecting suitable parameters, let alone maintaining high resource utilization. Targeting higher productivity for affine applications, we propose a data-centric approach to shared memory optimization on GPUs. We design a pragma extension to OpenACC that conveys programmers' data management hints to the compiler. Meanwhile, we devise a compiler framework that automatically selects optimal parameters for shared arrays using the polyhedral model. We further propose optimization techniques to expose higher memory- and instruction-level parallelism. Experimental results show that our shared-memory-centric approaches effectively improve the performance of five typical GPU applications across four widely used platforms by 3.7x on average, without burdening programmers with many pragmas.
Source: Journal of Computer Science & Technology (SCIE, EI, CSCD), 2016, Issue 2, pp.235-252 (18 pages).
Funding: This work was supported by the National High Technology Research and Development 863 Program of China under Grant No. 2012AA010902, the National Natural Science Foundation of China (NSFC) under Grant No. 61432018, and the Innovation Research Group of NSFC under Grant No. 61221062.
Keywords: GPU, shared memory, pragma directed, data centric

