Abstract
Convolutional neural networks have achieved a series of breakthrough research results, and their superior performance is supported by deep structures. To address the large amount of redundancy in the parameters and computation of complex convolutional neural networks, a concise and effective network model compression algorithm is proposed. First, correlation is judged by computing the Pearson correlation coefficient between convolution kernels, and redundant parameters are deleted iteratively to compress the convolutional layers. Second, a local-global fine-tuning strategy is adopted to restore network performance. Finally, a parameter orthogonality regularization is proposed to promote orthogonality between convolution kernels and thereby reduce redundant features. Experimental results on the MNIST dataset show that, without losing test accuracy, the proposed algorithm compresses the parameters of the AlexNet convolutional layers by 53.2% and reduces floating-point operations by 42.8%; in addition, the model converges with a small error.
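The two core ingredients of the abstract — detecting redundant kernels via the Pearson correlation coefficient and penalizing non-orthogonal kernels — can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: the function names (`redundant_pairs`, `orthogonal_penalty`), the 0.9 threshold, and the Gram-matrix form of the regularizer are assumptions for demonstration.

```python
import numpy as np

def pearson_corr(a, b):
    """Pearson correlation coefficient between two flattened kernels."""
    a, b = a.ravel(), b.ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def redundant_pairs(kernels, threshold=0.9):
    """Return index pairs of kernels whose |Pearson r| exceeds the threshold.

    In a pruning loop, one kernel of each highly correlated pair would be
    deleted, then the network fine-tuned before the next pass.
    """
    pairs = []
    for i in range(len(kernels)):
        for j in range(i + 1, len(kernels)):
            if abs(pearson_corr(kernels[i], kernels[j])) > threshold:
                pairs.append((i, j))
    return pairs

def orthogonal_penalty(kernels):
    """One common form of orthogonality regularization: the sum of squared
    off-diagonal entries of the Gram matrix of flattened, L2-normalized
    kernels. The penalty is zero iff the kernels are mutually orthogonal.
    """
    W = np.stack([k.ravel() for k in kernels])
    W = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    G = W @ W.T
    off_diag = G - np.eye(len(kernels))
    return float(np.sum(off_diag ** 2))
```

For example, a kernel and an affine copy of it (`2 * k + 1`) have Pearson correlation 1, so the pair is flagged as redundant, while kernels with orthogonal flattened weights contribute nothing to the penalty. In training, the penalty term would be added to the loss with a weighting coefficient.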
Authors
BAO Zhi-qiang, CHENG Ping, HUANG Qiong-dan, LYU Shao-qing
(School of Communications and Information Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China)
Source
Computer and Modernization (《计算机与现代化》), 2021, No. 10, pp. 107-111 (5 pages)
Funding
Shaanxi Provincial Key Research and Development Program (2018GY-150)
Xi'an Science and Technology Plan Project (201805040YD18CG24-3, 2019218114GXRC017CG018-GXYD17.5)
Keywords
convolutional neural network
convolution kernel
Pearson correlation coefficient
model compression
orthogonality