
Compression algorithm for weight-quantized deep neural network models (Cited by: 9)
Abstract: Deep neural network models typically contain a large number of weight parameters. To reduce the storage space they occupy, a compression algorithm based on weight quantization is proposed. In the forward propagation process, a four-value filter quantizes the full-precision weights into four states, 2, 1, -1 and -2, for efficient weight encoding. An accurate four-value weight model is obtained by minimizing the L2 distance between the full-precision weights and the scaled four-value weights. To compress the model substantially, every 16 four-value weights are encoded into a single 32-bit binary number. Experiments on the MNIST, CIFAR-10 and CIFAR-100 datasets show that the algorithm achieves model compression ratios of 6.74%, 6.88% and 6.62%, respectively, the same as those of the Ternary Weight Network (TWN), while improving accuracy by 0.06%, 0.82% and 1.51%. The results indicate that the algorithm provides efficient and accurate compression of deep neural network models.
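The quantization and packing steps described in the abstract can be illustrated with a minimal Python/NumPy sketch. This is not the paper's implementation: the rule for assigning a weight to the ±2 versus ±1 state, the threshold ratio delta_ratio, and the 2-bit code assignment are assumptions, since the abstract does not specify them. Only the closed-form scale that minimizes the L2 distance and the packing of 16 two-bit codes into one 32-bit word follow directly from the abstract.

    import numpy as np

    def quantize_four_value(w, delta_ratio=0.7):
        """Sketch of four-value weight quantization (states 2, 1, -1, -2).

        Assumptions (not given in the abstract): weights whose magnitude
        exceeds a threshold map to +/-2, the rest to +/-1, and a single
        scale alpha minimizes ||w - alpha * q||^2, which has the closed
        form alpha = <w, q> / <q, q>.
        """
        w = np.asarray(w, dtype=np.float32)
        delta = delta_ratio * np.mean(np.abs(w))      # hypothetical threshold
        q = np.where(np.abs(w) > delta, 2.0, 1.0) * np.sign(w)
        q[q == 0] = 1.0                               # no zero state: only 2, 1, -1, -2
        alpha = float(np.dot(w.ravel(), q.ravel()) / np.dot(q.ravel(), q.ravel()))
        return q.astype(np.int8), alpha

    def pack_16_weights(q16):
        """Pack 16 four-value weights into one 32-bit word (2 bits each).

        The code assignment below is an assumption:
        -2 -> 0b00, -1 -> 0b01, 1 -> 0b10, 2 -> 0b11.
        """
        codes = {-2: 0b00, -1: 0b01, 1: 0b10, 2: 0b11}
        word = 0
        for i, q in enumerate(q16):
            word |= codes[int(q)] << (2 * i)
        return np.uint32(word)

    # Example: quantize a small weight tensor and pack its 16 values.
    w = np.random.randn(4, 4).astype(np.float32)
    q, alpha = quantize_four_value(w)
    packed = pack_16_weights(q.ravel()[:16])
    print(q, alpha, hex(int(packed)))

Storing 2 bits per weight instead of a 32-bit float is a 16-fold reduction (6.25%), which is consistent with the reported compression ratios of roughly 6.6% to 6.9% once per-layer scale factors and other overhead are included.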
Authors: CHEN Yun (陈昀), CAI Xiaodong (蔡晓东), LIANG Xiaoxi (梁晓曦), WANG Meng (王萌) (School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China)
Source: Journal of Xidian University (《西安电子科技大学学报》), 2019, No. 2, pp. 132-138 (7 pages); indexed in EI, CAS, CSCD and the Peking University Core list (北大核心)
Funding: 2014 Guangxi Collaborative Innovation Center for Internet of Things Technology and Industrialization Project (WLW200601); 2016 Guangxi Science and Technology Program Key Research and Development Plan (AB16380264); 2016 Fund of the Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education (CRKL160102)
Keywords: weight quantization; compression; four-value filter; storage space; full precision