Abstract
To address the problem that deep convolutional neural network models occupy a large amount of storage space, a convolutional neural network compression method based on K-SVD dictionary learning was proposed. The parameters of each convolution kernel were approximated by a linear combination of a few atoms from a learned dictionary, and the atoms' coefficients were quantized, so that storing a kernel only requires keeping the atom indices and the quantized coefficients, achieving model compression. Compression experiments with LeNet-C5 on the MNIST dataset and with DenseNet on the CIFAR-10 dataset show that the storage space occupied by the network model is reduced to about 12% of the original while the accuracy fluctuates by less than 0.1%.
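The abstract describes a three-step pipeline: learn a dictionary over flattened convolution kernels with K-SVD, sparse-code each kernel as a combination of a few atoms, then quantize the coefficients so that only atom indices and integer codes need to be stored. Below is a minimal sketch of that pipeline, assuming a plain NumPy K-SVD with OMP sparse coding from scikit-learn, a uniform 8-bit quantizer, and random stand-in 3x3 kernels; the function names (ksvd, quantize_coeffs) and all hyperparameters are illustrative choices, not the paper's settings.

import numpy as np
from sklearn.linear_model import orthogonal_mp


def ksvd(Y, n_atoms, sparsity, n_iter=15, seed=0):
    """Learn a dictionary D (d x n_atoms) and sparse codes X so that
    Y ~= D @ X, with at most `sparsity` nonzeros per column of X.
    Each column of Y is one flattened convolution kernel."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    # Initialize the dictionary with randomly chosen kernels, unit-norm columns.
    D = Y[:, rng.choice(n, size=n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    for _ in range(n_iter):
        # Sparse-coding step: OMP picks the few atoms used by each kernel.
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)
        # Dictionary-update step: refine each atom by a rank-1 SVD of the
        # residual restricted to the kernels that actually use that atom.
        for k in range(n_atoms):
            users = np.flatnonzero(X[k])
            if users.size == 0:
                continue
            X[k, users] = 0.0
            E = Y[:, users] - D @ X[:, users]
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]
            X[k, users] = S[0] * Vt[0]
    return D, X


def quantize_coeffs(X, bits=8):
    """Uniform scalar quantization of the sparse coefficients (the exact
    quantizer is an assumption; the abstract only says they are quantized)."""
    scale = np.abs(X).max() / (2 ** (bits - 1) - 1)
    codes = np.round(X / scale).astype(np.int8)  # the integers actually stored
    return codes, scale


if __name__ == "__main__":
    # Stand-in for real trained weights: 256 kernels of size 3x3 (d = 9).
    Y = np.random.default_rng(1).normal(size=(9, 256))
    D, X = ksvd(Y, n_atoms=32, sparsity=3)
    codes, scale = quantize_coeffs(X)
    X_hat = codes.astype(float) * scale
    err = np.linalg.norm(Y - D @ X_hat) / np.linalg.norm(Y)
    # Storage: per kernel, 3 atom indices (1 byte each, since 32 atoms fit in
    # a byte) plus 3 one-byte coefficient codes, versus 9 float32 values;
    # the float32 dictionary is a fixed overhead shared by all kernels.
    original = Y.size * 4
    compressed = D.size * 4 + 256 * 3 * (1 + 1)
    print(f"relative reconstruction error: {err:.3f}")
    print(f"compressed / original storage: {compressed / original:.2%}")

In this toy setting each kernel costs 6 bytes instead of 36, and the dictionary adds a fixed overhead; on full networks such as LeNet-C5 and DenseNet, where the dictionary amortizes over far more kernels, the abstract reports storage reduced to about 12% of the original.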
Authors
GENG Xu; ZHANG Yong-hui; ZHANG Jian (School of Information and Communication Engineering, Hainan University, Haikou 570228, China)
Source
Computer Engineering and Design (《计算机工程与设计》)
Indexed in the Peking University Core Journals list (北大核心)
2020, Issue 4, pp. 1024-1028 (5 pages)
Funding
Natural Science Foundation of Hainan Province (618MS027)
Hainan Provincial Department of Science and Technology project (ZDYF2019024)
Keywords
convolutional neural network
dictionary learning
compression
quantization
sparse matrix