
A Review of Knowledge Distillation in Deep Neural Networks
Abstract  Deep neural networks have achieved great success in computer vision, natural language processing, speech recognition, and other fields. However, as network architectures have grown more complex, neural network models consume large amounts of computing resources and storage space, which severely limits the deployment of deep neural networks in resource-constrained environments and in real-time online processing applications. It is therefore necessary to compress deep neural networks while preserving model performance as far as possible. This article introduces neural network model compression methods based on knowledge distillation, surveys and summarizes representative work in the field of knowledge distillation for deep neural networks, and discusses future research directions for knowledge distillation.
Author  韩宇
Source  Computer Science and Application (《计算机科学与应用》), 2020, No. 9, pp. 1625-1630 (6 pages)
Keywords  Neural Network; Deep Learning; Knowledge Distillation
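
The abstract presents knowledge distillation as the compression technique under review. As a point of reference, the sketch below shows the classic soft-target distillation loss (Hinton et al., 2015) in PyTorch; the function name and the hyperparameters (temperature T, weight alpha) are illustrative assumptions, not details taken from the reviewed paper.

```python
# A minimal sketch of soft-target knowledge distillation (Hinton et al., 2015).
# Hyperparameters T and alpha are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Combine a softened teacher/student KL term with the usual
    cross-entropy on ground-truth labels."""
    # Soft targets: teacher and student distributions at temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    # The KL term is scaled by T^2 to keep gradient magnitudes comparable
    # to the hard-label loss.
    kd_loss = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (T * T)
    # Standard supervised loss on the hard labels.
    ce_loss = F.cross_entropy(student_logits, labels)
    return alpha * kd_loss + (1.0 - alpha) * ce_loss
```

In this formulation the student is trained against the teacher's softened output distribution in addition to the ground-truth labels, which is the basic mechanism the surveyed compression methods build on.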