Abstract
To address the large scale, high computational complexity, and large storage requirements of current deep convolutional neural networks, a compression method based on fused multi-level attention transfer is proposed. Built on the teacher-student network structure, the method designs a new way of fusing attention maps and a strategy for transferring attention from the teacher network to the student network, so that the student network can learn the attention information of the teacher network and thereby improve its accuracy. The proposed method is evaluated on the CIFAR datasets. Experimental results show that, even when the scale of the student network differs from that of the teacher network by more than half, accuracy drops by only 1.5%-2.5%.
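As a rough illustration of the attention-transfer mechanism the abstract describes, the sketch below (assuming PyTorch; the layer hooks and loss weight are hypothetical) shows how spatial attention maps can be derived from intermediate feature maps and matched between teacher and student. It follows the standard attention-transfer formulation and only approximates the paper's multi-level fusion by summing per-level losses.

```python
# Minimal sketch of attention transfer between a teacher and a student CNN.
# Not the authors' code: attention fusion is approximated by summing per-level losses.
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Collapse a feature map (N, C, H, W) into an L2-normalized spatial attention map."""
    att = feat.pow(2).mean(dim=1)              # (N, H, W): channel-wise activation energy
    return F.normalize(att.flatten(1), dim=1)  # (N, H*W), normalized per sample

def attention_transfer_loss(teacher_feats, student_feats) -> torch.Tensor:
    """Sum of per-level distances between teacher and student attention maps."""
    loss = 0.0
    for t, s in zip(teacher_feats, student_feats):
        if t.shape[-2:] != s.shape[-2:]:       # align spatial sizes if the stages differ
            s = F.interpolate(s, size=t.shape[-2:], mode='bilinear', align_corners=False)
        loss = loss + (attention_map(t) - attention_map(s)).pow(2).mean()
    return loss

# Hypothetical usage: total loss = task loss + beta * attention-transfer loss,
# where intermediate features are collected via forward hooks on chosen layers.
# logits, student_feats = student(images)
# with torch.no_grad():
#     _, teacher_feats = teacher(images)
# total = F.cross_entropy(logits, labels) + beta * attention_transfer_loss(teacher_feats, student_feats)
```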
Authors
李俊杰
彭书华
郭俊伦
Li Junjie; Peng Shuhua; Guo Junlun (School of Automation, Beijing Information Science and Technology University, Beijing 100101, China)
Source
《计算机应用与软件》
Peking University Core Journal (北大核心)
2023, No. 1, pp. 184-188 (5 pages)
Computer Applications and Software
Funding
National Natural Science Foundation of China (61801032).
Keywords
Convolutional neural network
Knowledge transfer
Model compression
Attention mechanism
Teacher network
Student network