Abstract
With the development of deep learning, the number of parameters in neural network models keeps growing, consuming large amounts of storage and computing resources. In auto-encoder neural networks for image compression, the encoder and decoder networks in particular occupy considerable storage space. Therefore, a lightweight image compression neural network based on parameter quantization is proposed, in which quantization-aware training is used to quantize the model parameters from 32-bit floating point to 8-bit integer. Experimental results show that, compared with the original model, the proposed lightweight model saves 73% of storage space. At image compression bit-rates below 0.16 bpp, the multi-scale structural similarity (MS-SSIM) of the reconstructed images drops by only 1.68% and remains higher than that of the classic compression standards JPEG and JPEG2000.
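Since the parameters are stored with 8 bits instead of 32, the quantized parameters take at most 25% of their original space (a 75% reduction), which is consistent with the reported 73% overall saving once non-quantized components are included. Below is a minimal sketch of the fake-quantization step typically used in quantization-aware training; the function name fake_quantize and the per-tensor min/max scaling are illustrative assumptions, not the exact scheme used in the paper.

import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    # Simulate 8-bit integer quantization during training ("fake quantization"):
    # values are rounded onto an integer grid but stored as float tensors.
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.round(-x.min() / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    x_dq = (q - zero_point) * scale  # de-quantized value used in the forward pass
    # Straight-through estimator: forward pass uses x_dq, backward pass
    # routes gradients to the original full-precision weights x.
    return x + (x_dq - x).detach()

# Example: quantize a convolution weight tensor before the forward pass.
w = torch.randn(64, 3, 3, 3, requires_grad=True)
w_q = fake_quantize(w)
print((w - w_q).abs().max())  # quantization error is bounded by about scale / 2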
Authors
SUN Hao-ran; WANG Wei; CHEN Hai-bao (School of Microelectronics, Shanghai Jiaotong University, Shanghai 200240, China; Beijing Institute of Astronautical Systems Engineering, Beijing 100076, China)
Source
Information Technology (《信息技术》), 2020, No. 10, pp. 87-91 (5 pages)
Keywords
parameter quantization
model compression
image compression
neural network