
Survey of Deep Neural Networks Model Compression
Cited by: 16
Abstract: In recent years, with the rapid development of deep learning, deep neural networks have attracted increasing attention and achieved remarkable results in many application fields. Typically, at a higher computational cost, the learning ability of a deep neural network improves as the network depth increases, which makes deep neural networks particularly successful on large datasets. However, because of their heavy computation, high storage cost, and model complexity, deep learning models cannot be effectively deployed on lightweight mobile and portable devices. Compressing and simplifying deep learning models has therefore become a research hot spot. The main model compression methods currently include pruning, lightweight network design, knowledge distillation, quantization, and neural architecture search. This paper analyzes and summarizes the performance, advantages, limitations, and latest research results of these methods, and discusses future research directions.
Authors: GENG Lili (耿丽丽); NIU Baoning (牛保宁) (College of Information and Computer, Taiyuan University of Technology, Taiyuan 030024, China; Experimental Center, Shanxi University of Finance and Economics, Taiyuan 030006, China)
Source: Journal of Frontiers of Computer Science and Technology (《计算机科学与探索》), CSCD, Peking University Core Journal, 2020, No. 9, pp. 1441-1455 (15 pages)
Funding: National Key Research and Development Program of China (No. 2017YFB1401000); Key Research and Development Program of Shanxi Province (No. 201903D421007)
Keywords: deep learning; model compression; neural networks
Related literature:

References: 4
Secondary references: 4
Co-citations: 38
Co-cited documents: 96
Citing documents: 16
Secondary citing documents: 79
