
Vertex Index Compression Method for the Graphics Rendering Pipeline
Abstract: Improving performance and reducing power consumption have always been key goals in high-performance GPU design. In the graphics rendering pipeline, accessing external memory is an essential operation, but frequent memory accesses degrade system performance and increase power consumption. Data compression is therefore a common technique in the graphics pipeline: it reduces the data transmission bit width and the number of external memory accesses, and correspondingly improves system performance. Based on an analysis of the pipeline's draw modes and the data characteristics of vertex indices, this paper proposes a memory-optimization method for vertex indices that greatly reduces the storage space they occupy and improves its utilization. The proposed compression algorithm is modeled, and a SystemVerilog simulation platform is built to simulate vertex indices under different test cases and to measure the compression ratio. The test results show that the method effectively reduces the consumption of memory resources.
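The abstract does not give the paper's exact encoding, but the general idea it describes — a lossless scheme exploiting the data characteristics of vertex indices (under strip- or fan-like draw modes, consecutive indices differ by small amounts) — can be illustrated with a hypothetical delta-plus-varint sketch. All function names here are illustrative, not the authors' method:

```python
def encode_indices(indices):
    """Hedged sketch of lossless vertex-index compression: delta-encode
    consecutive indices, zigzag-map the signed deltas to unsigned codes,
    and pack each code as LEB128-style variable-length bytes."""
    out = bytearray()
    prev = 0
    for idx in indices:
        d = idx - prev
        prev = idx
        z = (d << 1) ^ (d >> 63)          # zigzag: small |d| -> small code
        while True:                        # 7-bit varint (LEB128)
            byte = z & 0x7F
            z >>= 7
            if z:
                out.append(byte | 0x80)
            else:
                out.append(byte)
                break
    return bytes(out)


def decode_indices(data):
    """Inverse of encode_indices: unpack varints, un-zigzag, prefix-sum."""
    indices, prev, z, shift = [], 0, 0, 0
    for byte in data:
        z |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            d = (z >> 1) ^ -(z & 1)       # un-zigzag back to a signed delta
            prev += d
            indices.append(prev)
            z, shift = 0, 0
    return indices


# A triangle-strip-like index stream: deltas are mostly +/-1 or +/-2,
# so each delta fits in one byte instead of a 32-bit word.
strip = [0, 1, 2, 1, 3, 2, 3, 4, 5, 4, 6, 5]
packed = encode_indices(strip)
assert decode_indices(packed) == strip
ratio = (4 * len(strip)) / len(packed)    # vs. plain 32-bit indices
```

For this toy stream every delta encodes in a single byte, giving a 4:1 ratio against uncompressed 32-bit indices; the paper's reported ratios would depend on its actual algorithm and test cases.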
Authors: WANG Ke, DU Huimin, HUANG Hucai, LIU Shihao, LIU Xin (School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121)
Source: Computer & Digital Engineering, 2019, No. 11, pp. 2691-2695
Keywords: GPU; rendering pipeline; vertex index; draw mode; lossless compression
