Abstract
Deep-learning-based methods for detecting whether workers on a construction site are wearing safety helmets are difficult to run in real time on embedded platforms, because the underlying convolutional neural networks have many layers, complex structures, and a heavy computational load. To address this problem, a lightweight network algorithm based on an improved YOLOv4-Tiny is proposed. First, the feature extraction network is improved to further fuse multi-scale feature information and strengthen the recognition of small target regions; second, the EIOU loss function is introduced to improve localization accuracy and speed up model convergence; finally, the K-means++ clustering algorithm is used to extract anchor-box center points and select more suitable prior boxes, improving both detection accuracy and speed. Experimental results show that, for helmet-wearing detection on an embedded platform, the improved algorithm achieves a mean average precision of 92.47%, an improvement of 12.91% over YOLOv4-Tiny, and a real-time detection speed of 20.16 frames per second, meeting the requirement for real-time detection.
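For context, the EIOU loss mentioned in the abstract is commonly written in the literature (following Zhang et al., 2021) as

L_{\mathrm{EIOU}} = 1 - \mathrm{IoU} + \frac{\rho^{2}(b,\,b^{gt})}{c^{2}} + \frac{\rho^{2}(w,\,w^{gt})}{C_{w}^{2}} + \frac{\rho^{2}(h,\,h^{gt})}{C_{h}^{2}}

where \rho is the Euclidean distance, b and b^{gt} are the centers of the predicted and ground-truth boxes, c is the diagonal of the smallest enclosing box, and C_w, C_h are its width and height; the exact variant used in the paper is not reproduced here.

The anchor-box selection step can likewise be illustrated with a minimal Python sketch that clusters (width, height) pairs from the training labels using K-means++ seeding and a 1 - IoU distance. The names and parameters below (e.g. kmeanspp_anchors, k=6) are illustrative assumptions, not the authors' code.

import numpy as np

def iou_wh(boxes, anchors):
    # IoU between (w, h) pairs, treating every box and anchor as sharing the same center.
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = boxes[:, 0:1] * boxes[:, 1:2] + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeanspp_anchors(boxes, k=6, iters=100, seed=0):
    # Cluster (w, h) pairs into k anchors: K-means++ seeding, then Lloyd steps with 1 - IoU distance.
    rng = np.random.default_rng(seed)
    centers = [boxes[rng.integers(len(boxes))]]
    while len(centers) < k:
        d = 1.0 - iou_wh(boxes, np.asarray(centers)).max(axis=1)   # distance to nearest chosen center
        p = d ** 2 + 1e-12
        centers.append(boxes[rng.choice(len(boxes), p=p / p.sum())])
    centers = np.asarray(centers)
    for _ in range(iters):
        assign = iou_wh(boxes, centers).argmax(axis=1)              # assign each box to nearest center by IoU
        new_centers = np.array([
            np.median(boxes[assign == i], axis=0) if np.any(assign == i) else centers[i]
            for i in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers[np.argsort(centers.prod(axis=1))]                # sort anchors by area

if __name__ == "__main__":
    # Stand-in data: random normalized (w, h) pairs in place of real label boxes.
    wh = np.abs(np.random.default_rng(1).normal(0.2, 0.1, size=(500, 2))) + 1e-3
    print(kmeanspp_anchors(wh, k=6))

The 1 - IoU distance is the usual choice for YOLO-style anchor clustering because, unlike Euclidean distance, it weighs small and large boxes equally when selecting anchors.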
Authors
李发伯
刘猛
景晓琦
LI Fabo; LIU Meng; JING Xiaoqi (Shenyang Ligong University, Shenyang 110159, China)
Source
《沈阳理工大学学报》
2022, No. 6, pp. 6-12 (7 pages)
Journal of Shenyang Ligong University
Funding
National Natural Science Foundation of China (62102272).
Keywords
deep learning
real-time target detection
embedded platform
multi-scale features