Funding: supported in part by the National Natural Science Foundation of China under Grant No. 61772050, the Beijing Municipal Natural Science Foundation under Grant No. 4242053, and the Key Project of Science and Technology Innovation and Entrepreneurship of TDTEC (No. 2022-TD-ZD004).
Abstract: It is crucial to ensure that workers wear safety helmets in workplaces with a high risk of safety accidents, such as construction sites and mine tunnels. Although existing methods can detect helmets in images, their accuracy and speed still need improvement, since the complex, cluttered, and large-scale scenes of real workplaces cause severe occlusion, illumination change, scale variation, and perspective distortion. Therefore, a new deep-learning-based safety helmet-wearing detection method is proposed. Firstly, a new multi-scale contextual aggregation module is proposed to aggregate multi-scale feature information globally and highlight the details of the objects of interest in the backbone of the deep neural network. Secondly, a new detection block combining dilated convolution and an attention mechanism is proposed and introduced into the prediction part. This block can effectively extract deep features while retaining fine-grained details such as edges and small objects. Moreover, several newly emerged modules are incorporated into the proposed network to further improve helmet-wearing detection performance. Extensive experiments on an open dataset validate the proposed method: it achieves better helmet-wearing detection performance and even outperforms state-of-the-art methods. Specifically, compared with the baseline, You Only Look Once (YOLO) version 5X, the mAP increases by 3.4% and the speed increases from 17 to 33 fps; compared with YOLO version 7, the mean average precision increases by 1.0% and the speed increases by 7 fps. Generalization and portability experiments show that the proposed improvements could serve as a springboard for deep neural network designs that improve object detection performance in complex scenarios.
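The detection block above pairs dilated convolution with attention. The dilation idea can be illustrated with a minimal pure-Python sketch (a 1-D toy, not the paper's implementation): the same kernel weights cover a wider input span as the dilation rate grows, which is how such blocks enlarge the receptive field without adding parameters.

```python
def dilated_conv1d(x, kernel, dilation=1):
    """Naive 1-D dilated (atrous) convolution with 'valid' padding.

    With dilation d and kernel size k, each output sample covers a
    receptive field of (k - 1) * d + 1 input samples, so stacking
    dilated layers grows context while the weight count stays fixed.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of one layer
    out = []
    for i in range(len(x) - span + 1):
        out.append(sum(kernel[j] * x[i + j * dilation] for j in range(k)))
    return out

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
avg3 = [1 / 3, 1 / 3, 1 / 3]
print(dilated_conv1d(signal, avg3, dilation=1))  # 3-sample span per output
print(dilated_conv1d(signal, avg3, dilation=2))  # same 3 weights, 5-sample span
```

Note that both calls use the identical three weights; only the sampling stride inside the window changes, which is the property the proposed detection block relies on to see more context at no parameter cost.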
Funding: supported by NARI Technology Development Co., Ltd. (No. 524608190024).
Abstract: Safety helmet-wearing detection is an essential part of intelligent monitoring systems. To improve the speed and accuracy of detection, especially for small targets and occluded objects, this paper presents a novel and efficient detector model. The core algorithm of this model adopts the YOLOv5 (You Only Look Once version 5) network, which offers the best overall detection performance, and improves it by adding an attention mechanism, the CIoU (Complete Intersection over Union) loss function, and the Mish activation function. First, the attention mechanism is applied in feature extraction, so the network can learn the weight of each channel independently and enhance information dissemination between features. Second, the CIoU loss function is adopted to achieve accurate bounding box regression. Third, the Mish activation function is used to improve detection accuracy and generalization ability. A safety helmet-wearing detection dataset containing more than 10,000 images collected from the Internet is built and preprocessed. On this self-made helmet-wearing test dataset, the average precision of helmet detection by the proposed algorithm is 96.7%, which is 1.9% higher than that of the YOLOv5 algorithm and meets the accuracy requirements of helmet-wearing detection in construction scenarios.
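The CIoU loss adopted above augments plain IoU with a normalized center-distance term and an aspect-ratio consistency term, so non-overlapping or badly shaped predictions still receive a useful gradient. A pure-Python sketch of the standard CIoU formulation (Zheng et al., 2020), not the paper's exact code:

```python
import math

def ciou_loss(box_a, box_b):
    """CIoU loss for axis-aligned boxes given as (x1, y1, x2, y2).

    CIoU loss = 1 - IoU + rho^2 / c^2 + alpha * v, where rho is the
    distance between box centers, c the diagonal of the smallest
    enclosing box, and v penalizes aspect-ratio mismatch.
    Boxes are assumed non-degenerate (positive width and height).
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection over union
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # Squared center distance, normalized by the enclosing-box diagonal
    rho2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 \
         + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 \
       + (max(ay2, by2) - min(ay1, by1)) ** 2
    # Aspect-ratio consistency term
    wa, ha = ax2 - ax1, ay2 - ay1
    wb, hb = bx2 - bx1, by2 - by1
    v = (4 / math.pi ** 2) * (math.atan(wa / ha) - math.atan(wb / hb)) ** 2
    alpha = v / (1 - iou + v) if v > 0 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes: loss 0
print(ciou_loss((0, 0, 1, 1), (2, 2, 3, 3)))  # disjoint boxes: loss above 1
```

For disjoint boxes, plain IoU loss saturates at 1 everywhere; here the rho^2 / c^2 term keeps growing with the center gap, which is what makes the regression converge faster.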
Abstract: To address problems such as low helmet-wearing detection accuracy, missed detections, and poor real-time performance on embedded equipment in scenes with distant, small targets at construction sites, this paper proposes an improved YOLOv5 for small-target helmet-wearing detection. Based on YOLOv5, a self-attention transformer mechanism and a Swin Transformer module are introduced in the feature fusion step to enlarge the receptive field of the convolution kernels and globally model the high-level semantic features extracted by the backbone network, making the model focus more on helmet feature learning. Some convolution operators are replaced with lighter, more efficient involution operators to reduce the number of parameters. The connection mode of the Concat operation is improved, and a 1×1 convolution is added. Experimental results show that, compared with YOLOv5, the improved helmet detection model is 17.8% smaller, occupying only 33.2 MB, its FPS increases by 5%, and its mAP@0.5 reaches 94.9%. This approach effectively improves the accuracy of small-target helmet-wearing detection and meets the deployment requirements of embedded devices with low computational power.
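The parameter saving from swapping convolution for involution can be estimated by counting weights. The sketch below assumes the standard involution design (Li et al., 2021), where the k×k kernel at each position is generated from the input feature by two pointwise mappings, C → C/r → k·k·G, and shared across the channels of each group; the concrete sizes are illustrative, not taken from the paper:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def involution_params(c, k, groups=1, reduction=4):
    """Weight count of an involution layer: a C -> C/r pointwise reduce
    followed by a C/r -> k*k*G pointwise span that emits the per-position
    kernel, which is then shared across the channels of each group."""
    return c * (c // reduction) + (c // reduction) * k * k * groups

c, k = 256, 7
print(conv_params(c, c, k))                    # standard 7x7 convolution
print(involution_params(c, k, groups=16))      # involution, r=4, G=16
```

With these example sizes the involution layer needs roughly 2% of the convolution's weights, which is the kind of reduction that lets the improved model shrink to 33.2 MB while keeping a large spatial kernel.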
Abstract: To address the current inability to monitor accurately and in real time whether workers are wearing safety helmets, a multi-scene safety helmet-wearing detection algorithm based on YOLOv5s (YOLOv5s-SDSNR) is proposed. First, building on YOLOv5s, optimal transport theory is used to upgrade the local label-matching strategy to a global matching strategy, increasing the number of positive samples and making model training more targeted. Then, a decoupled detection head separates classification from localization, improving the accuracy of each. Finally, structural re-parameterization equivalently converts the backbone network between its training-time and inference-time forms, improving feature extraction ability and inference speed. Experimental results show that, compared with the original YOLOv5s model, YOLOv5s-SDSNR reaches an mAP of 97.83%, an improvement of 8.01 percentage points, and achieves 67.77 FPS on an NVIDIA Tesla T4. Compared with Faster R-CNN and YOLOX, the improved model is better suited to multi-scene safety helmet detection.
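Structural re-parameterization works because parallel linear branches can be folded into a single kernel after training. A minimal 1-D sketch (illustrative, not the paper's network): a 3-tap branch and a 1-tap branch trained in parallel produce exactly the same output as one merged 3-tap kernel at inference time, so the multi-branch training topology costs nothing at deployment.

```python
def conv1d(x, kernel):
    """'Same'-padded 1-D cross-correlation with an odd-sized kernel."""
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * xp[i + j] for j in range(k)) for i in range(len(x))]

def merge_branches(k3, k1):
    """Fold a parallel 1x1 branch into the 3x3 kernel by adding its weight
    at the center tap - the linearity that re-parameterization exploits."""
    merged = list(k3)
    merged[len(k3) // 2] += k1[0]
    return merged

x = [1.0, -2.0, 3.0, 0.5]
k3, k1 = [0.2, -0.5, 0.1], [0.7]
# Training-time form: two branches summed
train_out = [a + b for a, b in zip(conv1d(x, k3), conv1d(x, k1))]
# Inference-time form: one merged kernel
infer_out = conv1d(x, merge_branches(k3, k1))
print(train_out)
print(infer_out)  # identical to train_out
```

The same algebra extends to batch-norm folding and identity branches in 2-D networks such as RepVGG-style backbones; only the bookkeeping (padding the smaller kernel to the larger one's size) gets more involved.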
Abstract: To address the missed detections of existing non-motorized-vehicle helmet-wearing detection algorithms in dense traffic scenes and their false detections on other kinds of hats, an improved YOLOv5s (You Only Look Once version 5) helmet-wearing detection algorithm, YOLOv5s-BC, is proposed. First, soft pooling replaces the max pooling layers in the spatial pyramid pooling structure to amplify stronger feature activations. Second, a coordinate attention mechanism is combined with a weighted bidirectional feature pyramid network to build an efficient weighted feature aggregation network with bidirectional cross-scale connections, enhancing information propagation between levels. Finally, the EIoU loss function is used to optimize bounding box regression for precise target localization. Experimental results show that, on a self-made helmet dataset, the improved algorithm reaches a mean average precision (mAP) of 98.4%, 6.3% higher than the original algorithm, with an inference speed of 58.69 frames/s. Its overall performance surpasses other mainstream algorithms and meets the accuracy and real-time requirements of helmet-wearing detection in road traffic environments.
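The soft pooling substitution above can be sketched in a few lines: SoftPool (Stergiou et al.) replaces the hard max with an exponentially weighted average of the window, so the strongest activation dominates while weaker ones still contribute gradient. A pure-Python toy version, not the paper's implementation:

```python
import math

def soft_pool(window):
    """SoftPool over one pooling window: each value is weighted by the
    softmax of the window, so the result lies between the plain average
    and the maximum, unlike max pooling which discards everything else."""
    weights = [math.exp(v) for v in window]
    total = sum(weights)
    return sum(v * w / total for v, w in zip(window, weights))

window = [0.1, 0.5, 2.0, 0.3]
print(max(window))                    # hard max: keeps only one activation
print(sum(window) / len(window))      # average: dilutes the strong activation
print(soft_pool(window))              # softpool: in between, biased to the max
```

Because every element of the window keeps a nonzero weight, fine detail from small objects survives pooling better than under a hard max, which matches the motivation for swapping it into the pyramid pooling structure.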