Abstract
In recent years, the GPU has attracted attention for offering far higher floating-point performance than the CPU. CUDA, the architecture introduced by NVIDIA, makes general-purpose computing on the GPU practical. In this paper, we port the two-dimensional diffusion equation, a benchmark problem in computational fluid dynamics, to the GPU using two strategies: global memory and texture memory. When the grid reaches the scale of one million points, a speedup of 34x is observed.
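As a rough illustration of the global-memory strategy named in the abstract, the sketch below shows one explicit finite-difference (FTCS) time step of the 2D diffusion equation as a CUDA kernel. This is not the paper's code: the grid size, variable names, launch configuration, and boundary handling are illustrative assumptions.

```cuda
// Minimal sketch (assumed, not the paper's implementation): one explicit FTCS
// step of u_t = alpha * (u_xx + u_yy) on an NX x NY grid, reading and writing
// plain global memory. r = alpha * dt / h^2 is the diffusion number.
#include <cuda_runtime.h>

#define NX 1024   // illustrative grid size
#define NY 1024

__global__ void diffusion_step(const float *u, float *u_new, float r)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;

    // Update interior points only. Both buffers are assumed to be initialized
    // with the initial/boundary condition (fixed Dirichlet boundaries), so the
    // unwritten boundary cells remain valid after each swap.
    if (i > 0 && i < NX - 1 && j > 0 && j < NY - 1) {
        int idx = j * NX + i;
        u_new[idx] = u[idx] + r * (u[idx - 1] + u[idx + 1]
                                 + u[idx - NX] + u[idx + NX]
                                 - 4.0f * u[idx]);
    }
}

// Host side: advance nsteps time steps, ping-ponging between two device buffers.
void run(float *d_u, float *d_u_new, float r, int nsteps)
{
    dim3 block(16, 16);
    dim3 grid((NX + block.x - 1) / block.x, (NY + block.y - 1) / block.y);
    for (int n = 0; n < nsteps; ++n) {
        diffusion_step<<<grid, block>>>(d_u, d_u_new, r);
        float *tmp = d_u; d_u = d_u_new; d_u_new = tmp;  // swap old/new fields
    }
    cudaDeviceSynchronize();
}
```

A texture-memory variant would keep the same kernel structure but read the neighboring values of u through texture fetches, which are cached and can reduce the cost of the repeated neighbor accesses in this stencil pattern.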
Source
《计算机工程与科学》
CSCD
PKU Core Journals (北大核心)
2009, No. 11, pp. 121-123, 127 (4 pages)
Computer Engineering & Science
Funding
Young Talent Field Project of the Knowledge Innovation Program of the Chinese Academy of Sciences (0815011103)