Journal Articles
191 articles found
Compute Unified Device Architecture Implementation of Euler/Navier-Stokes Solver on Graphics Processing Unit Desktop Platform for 2-D Compressible Flows
1
Authors: Zhang Jiale, Chen Hongquan. Transactions of Nanjing University of Aeronautics and Astronautics, EI CSCD, 2016, No. 5, pp. 536-545.
A personal desktop platform with the teraflops peak performance of thousands of cores is realized at the price of a conventional workstation using programmable graphics processing units (GPUs). A GPU-based parallel Euler/Navier-Stokes solver is developed for 2-D compressible flows using NVIDIA's Compute Unified Device Architecture (CUDA) programming model in the CUDA Fortran programming language. Techniques for implementing CUDA kernels, a double-layered thread hierarchy, and a varied memory hierarchy are presented to form the GPU-based algorithm for the Euler/Navier-Stokes equations. The resulting parallel solver is validated by a set of typical test flow cases. The numerical results show that a speedup of dozens of times relative to a serial CPU implementation can be achieved on a single-GPU desktop platform, which demonstrates that a GPU desktop can serve as a cost-effective parallel computing platform to substantially accelerate computational fluid dynamics (CFD) simulations.
Keywords: graphics processing unit (GPU); GPU parallel computing; compute unified device architecture (CUDA) Fortran; finite volume method (FVM); acceleration
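As a rough illustration of the double-layered thread hierarchy this abstract describes, the sketch below maps one CUDA thread to one finite-volume cell inside a grid of thread blocks. It is a minimal CUDA C++ sketch under assumed names (fvmUpdate, the residual placeholder); the paper itself works in CUDA Fortran.

```cpp
// Double-layered thread hierarchy for a cell-centered 2-D finite-volume
// update: an outer grid of blocks covers the mesh, an inner block of
// threads maps one thread to one cell. The field layout and the residual
// placeholder are illustrative, not the paper's actual code.
__global__ void fvmUpdate(const float* u, float* uNew,
                          int nx, int ny, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // cell index in x
    int j = blockIdx.y * blockDim.y + threadIdx.y;   // cell index in y
    if (i >= nx || j >= ny) return;
    int c = j * nx + i;
    // A real solver would accumulate Euler/Navier-Stokes face fluxes here.
    float residual = -u[c];
    uNew[c] = u[c] + dt * residual;
}

void launchUpdate(const float* dU, float* dUNew, int nx, int ny, float dt)
{
    dim3 block(16, 16);                              // inner layer: threads
    dim3 grid((nx + 15) / 16, (ny + 15) / 16);       // outer layer: blocks
    fvmUpdate<<<grid, block>>>(dU, dUNew, nx, ny, dt);
}
```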
Multi-relaxation-time lattice Boltzmann simulations of lid driven flows using graphics processing unit
2
Authors: Chenggong LI, J.P.Y. MAA. Applied Mathematics and Mechanics (English Edition), SCIE EI CSCD, 2017, No. 5, pp. 707-722.
Large eddy simulation (LES) using the Smagorinsky eddy viscosity model is added to the two-dimensional nine-velocity-component (D2Q9) lattice Boltzmann equation (LBE) with multi-relaxation-time (MRT) to simulate incompressible turbulent cavity flows with Reynolds numbers up to 1 × 10^7. To improve the computational efficiency of the LBM for numerical simulations of turbulent flows, the massively parallel computing power of a graphics processing unit (GPU) with the compute unified device architecture (CUDA) is introduced into the MRT-LBE-LES model. The model performs well compared with results from others, with a 76-fold increase in computational efficiency. It appears that the higher the Reynolds number, the smaller the Smagorinsky constant should be if the lattice number is fixed. Also, for a selected high Reynolds number and a properly selected Smagorinsky constant, there is a minimum requirement on the lattice number so that the Smagorinsky eddy viscosity does not become excessively large.
Keywords: large eddy simulation (LES); multi-relaxation-time (MRT); lattice Boltzmann equation (LBE); two-dimensional nine velocity components (D2Q9); Smagorinsky model; graphics processing unit (GPU); compute unified device architecture (CUDA)
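A hedged sketch of the per-node data parallelism behind such LBM solvers: each thread collides one D2Q9 node. For brevity it uses the single-relaxation-time BGK operator in place of the paper's MRT operator with Smagorinsky eddy viscosity; the structure-of-arrays layout and the relaxation time tau are assumptions.

```cpp
// Simplified BGK collision step for a D2Q9 lattice, one thread per node.
// The streaming step and the MRT/Smagorinsky machinery of the paper are
// omitted; only the data-parallel structure is shown.
__constant__ float wq[9] = {4.f/9, 1.f/9, 1.f/9, 1.f/9, 1.f/9,
                            1.f/36, 1.f/36, 1.f/36, 1.f/36};
__constant__ int cx[9] = {0, 1, 0, -1, 0, 1, -1, -1, 1};
__constant__ int cy[9] = {0, 0, 1, 0, -1, 1, 1, -1, -1};

__global__ void collideBGK(float* f, int nx, int ny, float tau)
{
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n >= nx * ny) return;
    float rho = 0.f, ux = 0.f, uy = 0.f;
    for (int q = 0; q < 9; ++q) {                // zeroth and first moments
        float fq = f[q * nx * ny + n];
        rho += fq; ux += cx[q] * fq; uy += cy[q] * fq;
    }
    ux /= rho; uy /= rho;
    float usq = ux * ux + uy * uy;
    for (int q = 0; q < 9; ++q) {                // relax toward equilibrium
        float cu = 3.f * (cx[q] * ux + cy[q] * uy);
        float feq = wq[q] * rho * (1.f + cu + 0.5f * cu * cu - 1.5f * usq);
        int idx = q * nx * ny + n;
        f[idx] -= (f[idx] - feq) / tau;
    }
}
```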
SOLVERS FOR SYSTEMS OF LARGE SPARSE LINEAR AND NONLINEAR EQUATIONS BASED ON MULTI-GPUS (Cited: 3)
3
Authors: 刘沙, 钟诚文, 陈效鹏. Transactions of Nanjing University of Aeronautics and Astronautics, EI, 2011, No. 3, pp. 300-308.
Numerical treatment of engineering application problems often eventually results in solving systems of linear or nonlinear equations. The solution process on digital computing devices usually takes tremendous time due to the extremely large sizes encountered in most real-world engineering applications. Therefore, practical solvers for systems of linear and nonlinear equations based on multiple graphics processing units (GPUs) are proposed to accelerate the solving process. In the linear and nonlinear solvers, the preconditioned bi-conjugate gradient stable (PBi-CGstab) method and the inexact Newton method are used to achieve fast and stable convergence behavior. Multiple GPUs are utilized to provide the additional data storage that large problems require.
Keywords: general-purpose graphics processing unit (GPGPU); compute unified device architecture (CUDA); system of linear equations; system of nonlinear equations; inexact Newton method; bi-conjugate gradient stable (Bi-CGstab) method
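A minimal sketch of the multi-GPU storage pattern the abstract relies on: a long vector is split into per-device partitions so problems too large for one card still fit, and a simple axpy (one building block of Bi-CGstab-type iterations) runs on each partition. All names are illustrative assumptions.

```cpp
#include <algorithm>
#include <cuda_runtime.h>
#include <vector>

// axpy stands in for the vector updates inside PBi-CGstab; dX[d]/dY[d]
// are assumed to be partitions already allocated on device d.
__global__ void axpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += a * x[i];
}

void multiGpuAxpy(int nTotal, float a,
                  std::vector<float*>& dX, std::vector<float*>& dY)
{
    int nDev = 0;
    cudaGetDeviceCount(&nDev);
    int chunk = (nTotal + nDev - 1) / nDev;      // elements per device
    for (int d = 0; d < nDev; ++d) {
        int n = std::min(chunk, nTotal - d * chunk);
        if (n <= 0) break;
        cudaSetDevice(d);                        // work on this partition
        axpy<<<(n + 255) / 256, 256>>>(n, a, dX[d], dY[d]);
    }
    for (int d = 0; d < nDev; ++d) {             // wait for all devices
        cudaSetDevice(d);
        cudaDeviceSynchronize();
    }
}
```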
TIME-DOMAIN INTERPOLATION ON GRAPHICS PROCESSING UNIT (Cited: 1)
4
Authors: XIQI LI, GUOHUA SHI, YUDONG ZHANG. Journal of Innovative Optical Health Sciences, SCIE EI CAS, 2011, No. 1, pp. 89-95.
The signal processing speed of spectral domain optical coherence tomography (SD-OCT) has become a bottleneck in many medical applications. Recently, a time-domain interpolation method was proposed. This method achieves a better signal-to-noise ratio (SNR) with much-reduced signal processing time in SD-OCT data processing compared with the commonly used zero-padding interpolation method. Additionally, each resampled point is obtained from only a few data samples and coefficients inside the cutoff window, so many interpolations can be performed simultaneously, making the method well suited to parallel computing. By using a graphics processing unit (GPU) and the compute unified device architecture (CUDA) programming model, time-domain interpolation can be accelerated significantly. Throughputs of more than 250,000, 200,000, and 160,000 A-lines per second are achieved for 2,048-pixel OCT when the cutoff length is L = 11, L = 21, and L = 31, respectively. A frame of SD-OCT data (400 A-lines × 2,048 pixels per line) is acquired and processed on the GPU in real time. The results show that the signal processing can be finished in 6.223 ms when the cutoff length is L = 21, much faster than on a central processing unit (CPU), so real-time signal processing of acquired data can be realized.
Keywords: optical coherence tomography; real-time signal processing; graphics processing unit (GPU); CUDA
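A sketch of the parallel structure that makes this interpolation GPU-friendly: one thread per resampled point, each summing L window coefficients against L input samples. The index and coefficient layout are assumptions, not taken from the paper.

```cpp
// One-thread-per-output-sample windowed time-domain interpolation: each
// resampled point is a short weighted sum over L input samples inside the
// cutoff window, so all samples can be computed independently.
__global__ void resample(const float* in, float* out,
                         const int* firstIdx,    // window start per output
                         const float* coeff,     // L coefficients per output
                         int nOut, int L)
{
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= nOut) return;
    float acc = 0.f;
    int base = firstIdx[k];
    for (int m = 0; m < L; ++m)                  // cutoff window, e.g. L = 21
        acc += coeff[k * L + m] * in[base + m];
    out[k] = acc;
}
```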
GPU based numerical simulation of core shooting process
5
Authors: Yi-zhong Zhang, Gao-chun Lu, Chang-jiang Ni, Tao Jing, Lin-long Yang, Qin-fang Wu. China Foundry, SCIE, 2017, No. 5, pp. 392-397.
The core shooting process is the most widely used technique to make sand cores and plays an important role in their quality. Although numerical simulation can hopefully optimize the core shooting process, research on its numerical simulation is very limited. Based on a two-fluid model (TFM) and a kinetic-friction constitutive correlation, a program for 3D numerical simulation of the core shooting process was developed and achieved good agreement with in-situ experiments. To match the needs of engineering applications, a graphics processing unit (GPU) was also used to improve calculation efficiency: a parallel algorithm based on the Compute Unified Device Architecture (CUDA) platform significantly decreases computing time through multi-threaded GPU execution. The design and optimization of the parallel algorithm were discussed, and the program was validated by in-situ experiments with a transparent core-box, a high-speed camera, and a pressure measuring system. The computing time of the parallel program was reduced by nearly 95% while the simulation results remained quite consistent with experimental data. The GPU parallelization method thus solves the low computational efficiency of the 3D sand shooting simulation program, making the developed GPU program appropriate for engineering applications.
Keywords: graphics processing unit (GPU); compute unified device architecture (CUDA); parallelization; core shooting process
GPU-Parallel Optimized Design and Implementation of the Key-Tree Generation Component of the Falcon Post-Quantum Algorithm
6
Authors: 张磊, 赵光岳, 肖超恩, 王建新. 《计算机工程》 (Computer Engineering), CAS CSCD PKU Core, 2024, No. 9, pp. 208-215.
In recent years, post-quantum cryptographic algorithms have become a research focus in the security field because of their resistance to quantum attacks. The lattice-based Falcon digital signature algorithm is one of the first four post-quantum cryptography standards announced by the U.S. National Institute of Standards and Technology (NIST). Key-tree generation is a core component of Falcon and takes up considerable time and resources in actual computation. To address this, a graphics processing unit (GPU)-based parallel Falcon key-tree generation scheme is proposed. The scheme uses a single-instruction multiple-thread (SIMT) parallel mode under joint odd-even thread control and a direct computation mode without intermediate variables, raising speed and reducing resource usage. Experiments on a Python-based CUDA platform verified the correctness of the results. They show that Falcon key-tree generation on an RTX 3060 Laptop has a latency of 6 ms and a throughput of 167 ops/s; computing a single key-tree generation component achieves a 1.17× speedup over the CPU, and running 1,024 components in parallel raises the GPU's speedup over the CPU to about 56×. On the embedded Jetson Xavier NX platform, the throughput is 32 ops/s.
Keywords: post-quantum cryptography; Falcon algorithm; graphics processing unit; CUDA platform; parallel computing
GPU-based leaves contour generation algorithm
7
Authors: 张景峤, 王廷婷. Journal of Shanghai University (English Edition), CAS, 2011, No. 5, pp. 375-380.
The implementation and optimization of traditional contour generation algorithms are usually designed for general-purpose processors, and their performance is often inefficient when processing high-resolution images. A new graphics processing unit (GPU)-based algorithm is proposed to obtain clear and complete contours of leaves. First, the classic Sobel edge-detection operator is implemented on the GPU. Then a simple and effective method is designed to remove fake edges, and a heuristic algorithm is used to repair broken edges. Experiments show that the results of the algorithm are natural and realistic in terms of morphology and serve as good materials for virtual plants.
Keywords: graphics processing unit (GPU); compute unified device architecture (CUDA); edge detection; contour generation
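The first stage of this pipeline, Sobel edge detection, maps naturally to one thread per pixel; a minimal sketch follows (the fake-edge removal and heuristic repair stages are not shown).

```cpp
// Sobel gradient-magnitude kernel, one thread per interior pixel of a
// grayscale image stored row-major.
__global__ void sobel(const unsigned char* img, unsigned char* edge,
                      int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) return;
    int p = y * w + x;
    int gx = -img[p - w - 1] + img[p - w + 1]
             - 2 * img[p - 1] + 2 * img[p + 1]
             - img[p + w - 1] + img[p + w + 1];
    int gy = -img[p - w - 1] - 2 * img[p - w] - img[p - w + 1]
             + img[p + w - 1] + 2 * img[p + w] + img[p + w + 1];
    int mag = abs(gx) + abs(gy);                 // L1 gradient approximation
    edge[p] = mag > 255 ? 255 : (unsigned char)mag;
}
```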
Graphic Processing Unit-Accelerated Neural Network Model for Biological Species Recognition
8
Authors: 温程璐, 潘伟, 陈晓熹, 祝青园. Journal of Donghua University (English Edition), EI CAS, 2012, No. 1, pp. 5-8.
A graphics processing unit (GPU)-accelerated biological species recognition method using a partially connected neural evolutionary network model is introduced in this paper. The partially connected neural evolutionary network adopted here can overcome the drawback of traditional neural networks restricted to small inputs: the whole image is taken as the network input, so maximal features are retained for recognition. To speed up recognition, a fast implementation of the partially connected network was conducted on an NVIDIA Tesla C1060 using the NVIDIA compute unified device architecture (CUDA) framework. Image sets of eight biological species were used to test the GPU implementation and its serial CPU counterpart; experimental results showed that the GPU implementation works effectively in both recognition rate and speed, gaining a 343× speedup over the CPU implementation. Compared with a feature-based recognition method on the same task, the method also achieved an acceptable correct rate of 84.6% when testing on eight biological species.
Keywords: graphics processing unit (GPU); compute unified device architecture (CUDA); neural network; species recognition
CUDA Design and Implementation of a Real-Time Airborne SAR Imaging Algorithm Based on NVIDIA GPUs (Cited: 17)
9
Authors: 孟大地, 胡玉新, 石涛, 孙蕊, 李晓波. 《雷达学报(中英文)》 (Journal of Radars), CSCD, 2013, No. 4, pp. 481-491.
Synthetic aperture radar (SAR) imaging involves heavy computation and usually takes a long time on workstations or servers based on central processing units (CPUs), failing to meet real-time requirements. Using the Compute Unified Device Architecture (CUDA) programming model, this paper proposes a graphics processing unit (GPU)-based implementation of a SAR imaging algorithm. The scheme solves the problem of parallelizing the data-processing stages with the host-memory/device-memory transfers when GPU memory cannot hold a whole SAR scene, supports parallel processing across multiple GPU devices, and makes full use of GPU computing resources. Tests on an NVIDIA K20C and an Intel E5645 show that, compared with traditional GPU-based SAR imaging algorithms, the scheme achieves a speedup of several tens of times, significantly reduces the power consumption and improves the portability of the processing equipment, reaching a real-time processing speed of about 36 megasamples per second.
Keywords: SAR; real-time imaging; graphics processing unit (GPU); compute unified device architecture (CUDA)
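A hedged sketch of the chunked processing the abstract implies: when a scene exceeds device memory, slices are streamed through double-buffered device storage so host-to-device copies overlap kernel execution. processSlice is a hypothetical stand-in for the range/azimuth processing, and the device-to-host copy of results is omitted.

```cpp
#include <cuda_runtime.h>

// Double-buffered slice pipeline: copies on one stream overlap kernel
// work on the other. hScene must be pinned (cudaHostAlloc) for the
// asynchronous copies to truly overlap with computation.
__global__ void processSlice(float* d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;                     // placeholder work
}

void pipeline(float* hScene, float* dBuf[2], int nSlices, int sliceLen)
{
    cudaStream_t s[2];
    cudaStreamCreate(&s[0]);
    cudaStreamCreate(&s[1]);
    for (int i = 0; i < nSlices; ++i) {
        int b = i & 1;                           // alternate buffer/stream
        cudaMemcpyAsync(dBuf[b], hScene + (size_t)i * sliceLen,
                        sliceLen * sizeof(float),
                        cudaMemcpyHostToDevice, s[b]);
        processSlice<<<(sliceLen + 255) / 256, 256, 0, s[b]>>>(dBuf[b], sliceLen);
    }
    cudaStreamSynchronize(s[0]);
    cudaStreamSynchronize(s[1]);
    cudaStreamDestroy(s[0]);
    cudaStreamDestroy(s[1]);
}
```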
A GPU-Based SAR Imaging Algorithm Implemented with CUDA (Cited: 9)
10
Authors: 柳彬, 王开志, 刘兴钊, 郁文贤. 《信息技术》 (Information Technology), 2009, No. 11, pp. 62-65.
The rapidly developing graphics processing unit (GPU) offers a promising new computing platform for efficient synthetic aperture radar (SAR) imaging algorithms. Compared with the CPU, general-purpose computing on the GPU features low cost and high performance. A GPU-based SAR imaging algorithm implemented with CUDA is proposed; compared with a traditional CPU-based imaging algorithm, it improves efficiency by more than an order of magnitude, offering a promising research direction for new challenges in SAR signal processing.
Keywords: synthetic aperture radar; imaging algorithm; graphics processing unit; CUDA
Research on a CUDA-Based GPU Parallel Algorithm for Heat Conduction (Cited: 3)
11
Authors: 孟小华, 黄丛珊, 朱丽莎. 《计算机工程》 (Computer Engineering), CAS CSCD, 2014, No. 5, pp. 41-44, 48.
In heat-conduction algorithms, traditional serial CPU algorithms and MPI parallel algorithms suffer from low efficiency and long processing time when handling large numbers of particles. Since graphics processing units (GPUs) excel at massively data-parallel computation, a CUDA-based GPU parallel heat-conduction algorithm is proposed and implemented in the Compute Unified Device Architecture (CUDA) programming environment using a cooperative CPU-GPU model. Block and grid sizes are set according to the GPU hardware configuration, the particles are divided into blocks and uploaded to the GPU for parallel computation, each thread processes one particle, and the results are copied back to CPU main memory, where the CPU computes the average heat flux of each particle. Experimental results show that, compared with the serial CPU algorithm, the speedup approaches 900× at 16,000 particles and keeps growing as the particle count increases.
Keywords: heat-conduction algorithm; graphics processing unit; compute unified device architecture; parallel computing; time efficiency; speedup
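A minimal sketch of the one-thread-per-particle pattern the abstract describes: particle data go to the device, each thread evaluates one particle, and the results return to the host for the CPU-side averaging. The conduction law here is a placeholder, not the paper's model.

```cpp
#include <cuda_runtime.h>

// One thread per particle; flux[i] is computed independently for each.
__global__ void particleFlux(const float* temp, float* flux, int n, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) flux[i] = -k * temp[i];           // placeholder flux law
}

void computeFlux(const float* hTemp, float* hFlux, int n, float k)
{
    float *dT, *dF;
    cudaMalloc(&dT, n * sizeof(float));
    cudaMalloc(&dF, n * sizeof(float));
    cudaMemcpy(dT, hTemp, n * sizeof(float), cudaMemcpyHostToDevice);
    particleFlux<<<(n + 255) / 256, 256>>>(dT, dF, n, k);
    cudaMemcpy(hFlux, dF, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dT);
    cudaFree(dF);                                // CPU then averages hFlux
}
```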
Implementation and Optimization of High-Speed AES Based on GPGPU and CUDA (Cited: 3)
12
Authors: 顾青, 高能, 包珍珍, 向继. 《中国科学院研究生院学报》, CAS CSCD PKU Core, 2011, No. 6, pp. 776-785.
With the ever-growing demand for high-performance computing, attention has turned to GPU devices with powerful computing capability and high memory bandwidth. Compared with CPUs, which excel at complex control logic, general-purpose graphics processing units (GPGPUs) are better suited to massive data-parallel processing, and the advent of CUDA (compute unified device architecture) has further accelerated the spread of GPGPU applications. This work accelerates the AES algorithm with GPGPU and CUDA, reaching an overall throughput of 6-7 Gbit/s; excluding data-loading time, the throughput reaches 20 Gbit/s for input sizes above 1 MB.
Keywords: general-purpose graphics processing unit; compute unified device architecture; AES algorithm; parallel computing
CUDA-TP: A GPU-Based Parallel Algorithm for Top-Down Intact Protein Identification (Cited: 1)
13
Authors: 段琼, 田博, 陈征, 王洁, 何增有. 《计算机研究与发展》 (Journal of Computer Research and Development), EI CSCD PKU Core, 2018, No. 7, pp. 1525-1538.
The identification of proteins and their post-translational modifications (PTMs) is fundamental to proteomics and important for the further development of the whole field. In recent years, advances in mass spectrometry have made it possible to acquire high-precision "top-down" (TD) spectra of intact proteins. Existing TD-based intact-protein identification algorithms achieve good matching accuracy and PTM-site inference, but their running time still leaves much room for improvement. Graphics processing units (GPUs) can parallelize large-scale repetitive computation and speed up serial programs. The CUDA-TP algorithm uses the compute unified device architecture (CUDA) to compute matching scores between proteins and TD spectra. For each spectrum, CUDA-TP first uses an optimized MS-Filter algorithm to filter a small candidate protein set from the protein database, and then accelerates the spectrum-matching process with an AVL (Adelson-Velskii and Landis) tree. Multi-threading on the GPU parallelizes the computation of the predecessor nodes of all elements in the spectral grid and the final array. A target-decoy strategy is used to control the false discovery rate (FDR) of the protein-spectrum matches. Experimental results show that CUDA-TP effectively accelerates intact protein identification, running 10× and 2× faster than MS-TopDown and MS-Align+, respectively. To date, this is the only work that accelerates intact protein identification with the CUDA architecture. The source code is available at https://github.com/dqiong/CUDA-TP.
Keywords: "top-down" proteomics; protein identification; graphics processing unit; compute unified device architecture; spectrum alignment
Research on High-Performance Parallel Programming of Multi-Core GPUs on the CUDA Platform (Cited: 1)
14
Authors: 吴长茂, 张聪品, 张慧云, 王娟. 《河南机电高等专科学校学报》, CAS, 2011, No. 1, pp. 19-21, 29.
Modern GPUs possess powerful computing capability. This paper discusses using GPUs for high-performance computing, including GPU programming methods and principles for partitioning high-performance computing problems. Experiments show that GPU high-performance computing is more efficient than multi-core CPUs.
Keywords: GPU; CUDA; parallelism
An Improved Crank-Nicolson Algorithm under the B-S Model Based on GPU
15
Authors: 王文浩, 邬春学. 《上海理工大学学报》 (Journal of University of Shanghai for Science and Technology), CAS PKU Core, 2013, No. 2, pp. 147-151, 156.
Based on a theoretical analysis and mathematical treatment of the Black-Scholes model and the features of its formula, an optimized Crank-Nicolson algorithm is presented that improves the efficiency of actual option trading. Using the GPU as the computing platform together with the CUDA architecture, the effectiveness and applicability of the improved algorithm are verified. Horizontal tests on the CPU platform confirm the advantage of the GPU runtime environment. Experiments show that the improved algorithm achieves a marked improvement on the GPU platform, with higher computational precision and efficiency.
Keywords: financial option computation; Black-Scholes model; improved Crank-Nicolson algorithm; GPU; CUDA architecture; HPC
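A sketch of the data-parallel half of one Crank-Nicolson step for the Black-Scholes PDE: each thread assembles one row of the right-hand side from the known time level, using the standard CN coefficients on a uniform S-grid (an assumption, not necessarily the paper's exact scheme); the implicit tridiagonal solve that completes the step is omitted.

```cpp
// One thread per grid row: rhs[i] = a*v[i-1] + (1+b)*v[i] + c*v[i+1],
// with a = dt/4*(sigma^2 i^2 - r i), b = -dt/2*(sigma^2 i^2 + r),
// c = dt/4*(sigma^2 i^2 + r i) for S_i = i*dS (dS folded into i).
__global__ void cnRhs(const float* v, float* rhs, int n,
                      float sigma, float r, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < 1 || i >= n - 1) return;             // boundaries handled apart
    float s = (float)i;
    float a = 0.25f * dt * (sigma * sigma * s * s - r * s);
    float b = -0.5f * dt * (sigma * sigma * s * s + r);
    float c = 0.25f * dt * (sigma * sigma * s * s + r * s);
    rhs[i] = a * v[i - 1] + (1.f + b) * v[i] + c * v[i + 1];
}
```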
Human Detection Technology Based on the CUDA Architecture for GPU General-Purpose Computing
16
Authors: 周晓阳. 《信息化研究》 (Informatization Research), 2012, No. 2, pp. 41-43.
With the rapid development of computer hardware, general-purpose computing on graphics processing units (GPUs) has matured considerably, and its parallel computing speed far exceeds that of multi-core CPUs. This paper introduces the CUDA architecture and verifies its acceleration capability in image processing, compares the efficiency of linear-algebra operations on CPU and GPU architectures, and applies CUDA to a human-detection system for intelligent video surveillance; experiments verify its efficiency and feasibility. Finally, future directions of CUDA are discussed.
Keywords: graphics processing unit; parallel computing architecture; human detection; video surveillance
Research on GPU-accelerated algorithm in 3D finite difference neutron diffusion calculation method
17
Authors: 徐琪, 余纲林, 王侃, 孙嘉龙. Nuclear Science and Techniques, SCIE CAS CSCD, 2014, No. 1, pp. 59-63.
In this paper, the adaptability of neutron diffusion numerical algorithms on GPUs was studied, and a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. The IAEA 3D PWR benchmark problem was calculated in the numerical test. The results demonstrate both the high efficiency and the adequate accuracy of the GPU implementation for the neutron diffusion equation.
Keywords: neutron diffusion equation; 3D finite difference; GPU; accelerated algorithm; computation; IAEA; finite difference method; numerical algorithm
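A hedged single-group sketch of the finite-difference sweep: one thread per mesh node applies the 7-point diffusion stencil as a Jacobi-style update. The multi-group coupling and power iteration of an actual diffusion code are omitted; D, sigmaR, and src are illustrative constants, not the benchmark's data.

```cpp
// Jacobi update for -D*laplacian(phi) + sigmaR*phi = src on a uniform
// mesh with spacing h; launch with grid.z = nz so blockIdx.z indexes k.
__global__ void diffusionSweep(const float* phi, float* phiNew,
                               const float* src, int nx, int ny, int nz,
                               float D, float sigmaR, float h)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z;
    if (i < 1 || j < 1 || k < 1 || i >= nx - 1 || j >= ny - 1 || k >= nz - 1)
        return;
    int c = (k * ny + j) * nx + i;
    float nbrs = phi[c - 1] + phi[c + 1] + phi[c - nx] + phi[c + nx]
               + phi[c - nx * ny] + phi[c + nx * ny];   // 6 neighbors
    phiNew[c] = (src[c] + D / (h * h) * nbrs)
              / (6.f * D / (h * h) + sigmaR);
}
```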
Real-time Virtual Environment Signal Extraction and Denoising Using Programmable Graphics Hardware
18
Authors: Yang Su, Zhi-Jie Xu, Xiang-Qian Jiang. International Journal of Automation and Computing, EI, 2009, No. 4, pp. 326-334.
The sense of being within a three-dimensional (3D) space and interacting with virtual 3D objects in a computer-generated virtual environment (VE) often requires essential image, vision, and sensor signal processing techniques such as differentiation and denoising. This paper describes novel implementations of Gaussian filtering for characteristic signal extraction and wavelet-based image denoising algorithms that run on the graphics processing unit (GPU). While significant acceleration over standard CPU implementations is obtained by exploiting the data parallelism provided by modern programmable graphics hardware, the CPU is freed up to run other computations more efficiently, such as artificial intelligence (AI) and physics. The proposed GPU-based Gaussian filtering can extract surface information from a real object and provide its material features for rendering and illumination. The wavelet-based signal denoising for large digital images realized in this project provided better realism for VE visualization without sacrificing the real-time, interactive performance of an application.
Keywords: virtual environment; graphics processing unit; GPU-based Gaussian filtering; signal denoising; wavelet
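A minimal sketch of GPU Gaussian filtering as a separable convolution: the horizontal pass below assigns one thread per pixel with the filter taps in constant memory; a matching vertical pass (not shown) completes the 2-D filter. RADIUS and the tap values are assumptions, not the paper's parameters.

```cpp
// Horizontal 1-D Gaussian convolution pass with border clamping.
#define RADIUS 4
__constant__ float taps[2 * RADIUS + 1];         // normalized Gaussian taps

__global__ void gaussH(const float* in, float* out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    float acc = 0.f;
    for (int t = -RADIUS; t <= RADIUS; ++t) {
        int xc = min(max(x + t, 0), w - 1);      // clamp at image border
        acc += taps[t + RADIUS] * in[y * w + xc];
    }
    out[y * w + x] = acc;
}
```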
High-Performance GPU Implementation of NTRU Lattice-Based Key Encapsulation Schemes
19
Authors: 李文倩, 沈诗羽, 赵运磊. 《计算机学报》 (Chinese Journal of Computers), EI CAS CSCD PKU Core, 2024, No. 9, pp. 2163-2178.
With the development of quantum computing, traditional encryption algorithms face increasingly serious threats. To meet the challenges of the quantum era, countries are actively advancing the implementation, migration, and deployment of post-quantum cryptographic algorithms. NTRU schemes feature a concise structure, high computational efficiency, small sizes, and no patent risk, so NTRU lattice-based key encapsulation is of great significance for the cryptographic reserve and applications of the post-quantum era. Meanwhile, the graphics processing unit (GPU), with its powerful parallel computing capability, high throughput, and low energy consumption, has become an important platform for high-concurrency cryptographic engineering. This paper presents the first high-performance GPU implementation of the post-quantum schemes CTRU/CNTR. After analyzing the main GPU resource usage, we apply a series of computation and memory optimizations spanning parallel computation, memory access, data layout, and algorithmic optimization, aiming to accelerate computation in parallel, optimize memory access, use GPU resources appropriately, and reduce I/O latency. The main contributions are as follows. First, modular reduction is implemented with NVIDIA parallel instructions, effectively reducing the required instruction count. Second, the time-consuming polynomial multiplication module adopts a mixed-radix NTT with layer fusion, loop unrolling, and lazy reduction to speed up computation. Third, problems such as repeated and conflicting memory accesses are addressed through coalesced access, kernel fusion, and related optimizations. Finally, for high parallelism, appropriate thread-block sizes and counts are designed and a memory-pool mechanism enables fast memory access and efficient processing of many concurrent tasks. On an NVIDIA RTX 4090, our CTRU768 implementation achieves throughputs of 11.709 million key generations, 9.267 million encapsulations, and 3.154 million decapsulations per second, improvements of 336×, 174×, and 128× over the reference implementation. Our CNTR768 implementation achieves 11.173 million, 9.718 million, and 3.222 million operations per second, respectively, improvements of 329×, 175×, and 134× over the reference implementation; compared with an open-source Kyber implementation, the throughputs of key generation, encapsulation, and decapsulation improve by 10.84-11.36×, 9.49-9.95×, and 5.11-5.22×, respectively. High-performance key encapsulation has great application potential in large-scale task processing and is important for safeguarding information and data security in the post-quantum era.
Keywords: post-quantum cryptography; lattice-based cryptography; key encapsulation mechanism; parallel processing; graphics processing unit
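A hedged sketch of the kind of kernel at the heart of such NTT-based polynomial multiplication: one Cooley-Tukey butterfly layer, one thread per butterfly, with naive 64-bit modular arithmetic standing in for the paper's optimized reductions. The modulus q and the one-twiddle-per-group table layout are assumptions, not the paper's exact scheme.

```cpp
// One forward-NTT butterfly layer; the host loops over layers, doubling
// len each time and launching n/2 threads per layer.
__global__ void nttLayer(unsigned* a, const unsigned* twiddle,
                         int n, int len, unsigned q)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n / 2) return;
    int group = t / len;                         // which butterfly group
    int j = t % len;                             // offset inside the group
    int i0 = group * 2 * len + j;                // upper element
    int i1 = i0 + len;                           // lower element
    unsigned long long w = twiddle[group];       // twiddle for this group
    unsigned u = a[i0];
    unsigned v = (unsigned)(w * a[i1] % q);      // naive mod-q multiply
    a[i0] = (u + v) % q;                         // butterfly: u + w*v
    a[i1] = (u + q - v) % q;                     // butterfly: u - w*v
}
```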
GPU Cluster Implementation of 3D Staggered-Grid Finite-Difference Seismic Wave Simulation (Cited: 22)
20
Authors: 龙桂华, 赵宇波, 李小凡, 高琴, 王周. 《地球物理学进展》 (Progress in Geophysics), CSCD PKU Core, 2011, No. 6, pp. 1938-1949.
The finite-difference method is simple to implement and fast; as an effective numerical method for seismic wavefield simulation, it is widely used in computation-intensive waveform inversion and reverse-time migration. The huge computational cost of 3D seismic forward modeling has long constrained the industrial application of 3D prestack reverse-time migration and inversion; GPU general-purpose computing and its inherent data parallelism promise to change this situation. By analyzing the implementation of the 3D staggered-grid finite-difference method on the GPU, an efficient 3D seismic-wave simulation algorithm using on-chip shared memory was realized, achieving speedups of 79× to 108× over a single-core CPU. Through domain decomposition, geological models too large for a single GPU are coarsely partitioned along the Z axis, boundary data are exchanged via the Message Passing Interface, and large-scale 3D seismic wavefield simulation is realized in an MPI+CUDA fashion; the key factors affecting GPU parallel efficiency are analyzed in detail. The accelerated realization of large-scale 3D seismic wavefield simulation makes the industrial transfer of prestack reverse-time migration and waveform inversion feasible and is therefore of significant research value.
Keywords: GPU; staggered grid; finite difference; graphics processing unit; CUDA
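A minimal sketch of the on-chip shared-memory tiling the abstract credits for its speedup, shown for a 4th-order staggered-grid x-derivative in 2-D (the paper applies the idea in 3-D, with MPI halo exchange between GPUs). TILE, HALO, and the stencil weights (9/8, 1/24) are illustrative; launch with TILE x TILE thread blocks.

```cpp
#define TILE 16
#define HALO 2

// Each block stages its tile plus left/right halos into shared memory,
// then every thread differentiates from on-chip data instead of global
// memory. All threads reach __syncthreads(), with out-of-range loads
// clamped to zero and out-of-range writes skipped afterwards.
__global__ void dxTile(const float* p, float* dp, int nx, int ny, float inv_dx)
{
    __shared__ float s[TILE][TILE + 2 * HALO];
    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    int lx = threadIdx.x + HALO;
    bool in = (x < nx && y < ny);
    s[threadIdx.y][lx] = in ? p[y * nx + x] : 0.f;   // tile interior
    if (threadIdx.x < HALO) {                        // left/right halos
        int xl = x - HALO, xr = x + TILE;
        s[threadIdx.y][threadIdx.x] = (y < ny && xl >= 0) ? p[y * nx + xl] : 0.f;
        s[threadIdx.y][lx + TILE]   = (y < ny && xr < nx) ? p[y * nx + xr] : 0.f;
    }
    __syncthreads();
    if (!in) return;
    // 4th-order staggered-grid difference along x from shared memory
    dp[y * nx + x] = inv_dx *
        (1.125f     * (s[threadIdx.y][lx + 1] - s[threadIdx.y][lx])
       - 0.0416667f * (s[threadIdx.y][lx + 2] - s[threadIdx.y][lx - 1]));
}
```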