Design of Distributed Neural Network Based on Knowledge Distillation
Abstract: Most existing research on distributing neural-network inference splits the different stages of a network's computation across multiple devices. Because the devices do not compute simultaneously, resource utilization is low, and because inference depends on all of the devices, fault tolerance is poor. To address this problem, this paper proposes a design method for distributed neural networks. Network pruning is used to decompose a trained neural network model into several sub-models of different accuracies, and knowledge distillation then transfers the knowledge of these sub-models to multiple student models, so that the student models can work cooperatively during inference. Experimental results show that the models obtained through knowledge distillation can run on a single machine or cooperate across multiple devices, with higher accuracy when cooperating. The distributed neural network based on knowledge distillation can run on multiple devices cooperatively; when the network is unavailable, for example due to interference, single-machine accuracy remains within an acceptable range, giving high fault tolerance. The method is suitable for applications such as UAVs and autonomous driving.
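The record contains no code; as a minimal illustrative sketch only, assuming PyTorch, the teacher-to-student transfer step the abstract describes corresponds to the standard knowledge-distillation loss: KL divergence on temperature-softened logits plus cross-entropy on hard labels. The function name and the values of T and alpha below are hypothetical placeholders, not taken from the paper.

```python
# Illustrative sketch (not the paper's code): standard knowledge-distillation
# loss, assuming PyTorch. Temperature T and mixing weight alpha are
# hypothetical placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # scale by T^2 to keep gradient magnitudes comparable
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In the pipeline the abstract outlines, the teacher logits would come from the pruned sub-models of the original trained network, with one student distilled per sub-model.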
Author: ZHENG Zongxin (郑宗新), School of Computer Science, Chongqing Normal University, Chongqing 401331
Source: Modern Computer (《现代计算机》), 2021, No. 14, pp. 70-73, 78 (5 pages in total)
Keywords: Distributed Neural Network; Knowledge Distillation; Deep Learning; Network Pruning
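As a further hypothetical sketch (again not the paper's code), the cooperative-inference and fault-tolerance claims in the abstract can be pictured as averaging the class probabilities of whichever distilled students are reachable; if communication fails, a single student still produces a usable prediction on its own. The names `students` and `x` are assumptions for illustration.

```python
# Illustrative sketch (not from the paper): average the softmax outputs of
# the reachable student models. With only one student available, this
# degrades gracefully to ordinary single-machine inference.
import torch
import torch.nn.functional as F

@torch.no_grad()
def cooperative_predict(students, x):
    # Each student yields a class distribution; the ensemble mean is used
    # when several devices respond, a single distribution otherwise.
    probs = [F.softmax(model(x), dim=1) for model in students]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)
```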