Abstract
To address the problems that license plate recognition (LPR) algorithms have large parameter counts, poor real-time performance, and perform poorly in production scenarios with limited hardware, this paper proposes a method based on dual-teacher knowledge distillation (KD), which effectively improves the performance of lightweight network models. A feature fusion module is designed to extract more robust features that aggregate multi-scale information, enabling the network to mine richer semantic information. An attention mechanism adaptively guides the student network, so that teacher knowledge is fed back to the student more effectively and the student network learns efficiently. Experimental results show that, compared with conventional lightweight algorithms, the proposed algorithm has broad application prospects in production scenarios with limited storage resources and low hardware capability.
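The record does not give the paper's exact loss formulation, but the dual-teacher idea in the abstract — two teachers softening their outputs, with an attention mechanism deciding how much each teacher guides the student — can be illustrated with a minimal NumPy sketch. The temperature `T`, the confidence-based attention weights, and all function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis (numerically stable)."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """KL(p || q) per sample, summed over the class axis."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def dual_teacher_kd_loss(student_logits, teacher1_logits, teacher2_logits, T=4.0):
    """Distillation loss: the student mimics a per-sample, confidence-weighted
    mixture of two teachers' softened predictions (hypothetical formulation)."""
    s = softmax(student_logits, T)
    t1 = softmax(teacher1_logits, T)
    t2 = softmax(teacher2_logits, T)
    # Attention weights: favour whichever teacher is more confident on each sample.
    c1 = t1.max(axis=-1)
    c2 = t2.max(axis=-1)
    w = np.stack([c1, c2], axis=-1)
    w = w / w.sum(axis=-1, keepdims=True)
    loss = w[..., 0] * kl_div(t1, s) + w[..., 1] * kl_div(t2, s)
    # T^2 scaling keeps gradient magnitudes comparable across temperatures,
    # as in standard KD practice.
    return float(loss.mean() * T * T)
```

In a real training loop the same weighting would typically be applied to intermediate feature maps as well (the role of the paper's feature fusion module); here only the logit-level term is shown.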
Authors
ZHANG Di; WANG Guodong; WANG Yong; LIU Rui (College of Computer Science & Technology, Qingdao University, Qingdao 266071, China; Songli Holding Group Co., Ltd, Qingdao 266073, China)
Source
Journal of Qingdao University (Engineering & Technology Edition)
CAS
2023, Issue 3, pp. 16-22 (7 pages)
Funding
Natural Science Foundation of Shandong Province (ZR2019MF050)
Support Plan for Outstanding Youth Innovation Teams in Higher Education Institutions of Shandong Province (2020KJN011)
Keywords
image processing
convolutional neural network
knowledge distillation
feature fusion