Research on a Detection Method for Safety Helmet Wearing on Construction Sites
Abstract: Wearing a safety helmet on construction sites is a basic work requirement in the construction industry. Because the construction environment is complex, images contain many cluttered objects and helmets occupy only a small proportion of each image, helmet-wearing detection is difficult, leading to high missed-detection and false-detection rates and limited detection precision. To address these shortcomings, a helmet-wearing detection network that fuses multi-scale features and attention, MFFMA-Net, is proposed. The network takes YOLOX as its base architecture and replaces the original Path Aggregation Network (PANET) with a newly designed Multi-Scale Feature Fusion Module (MFFM). MFFM fuses features in two opposite directions ordered by feature-map scale, so the fusion retains richer features extracted by the backbone network and improves the detection precision of helmet wearing. A new attention module, fused-spatial-information ECA (FSI-ECA), is also added; built on full-pixel information, it concentrates spatial information into channel information and makes full use of both channel and spatial information during learning, so more of the helmet's feature information is exploited and the missed-detection and false-detection rates are reduced. In experiments on the SHWD dataset, MFFMA-Net reaches a mean average precision (mAP) of 96.61%, 0.47% higher than YOLOX, with recall improved by about 1% over YOLOX and a detection speed of 38 frames/s. The network therefore achieves high detection precision under real-time conditions while reducing missed and false detections.
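The abstract describes FSI-ECA only at a high level (concentrating spatial information into channel information on top of ECA-style channel attention), so the snippet below is a minimal, hypothetical sketch of that idea rather than the paper's implementation; the class name, the choice of average plus max pooling, and the 1D-convolution kernel size are all assumptions for illustration.

```python
# Hypothetical sketch of an FSI-ECA-style block: ECA channel attention whose
# channel descriptor also folds in spatial statistics. Not the paper's code.
import torch
import torch.nn as nn


class ECAWithSpatialContext(nn.Module):
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        # Lightweight cross-channel interaction via a 1D convolution, as in ECA-Net.
        self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Pool the full pixel grid into per-channel descriptors: the mean keeps
        # the average spatial response, the max keeps the strongest one.
        avg_desc = x.mean(dim=(2, 3))             # (B, C)
        max_desc = x.amax(dim=(2, 3))             # (B, C)
        desc = avg_desc + max_desc                # spatial statistics fused per channel
        weights = self.conv(desc.unsqueeze(1))    # (B, 1, C) cross-channel interaction
        weights = self.sigmoid(weights).view(b, c, 1, 1)
        return x * weights                        # re-weight the feature map channels


if __name__ == "__main__":
    feat = torch.randn(2, 256, 20, 20)            # e.g. one neck-level feature map
    print(ECAWithSpatialContext()(feat).shape)    # torch.Size([2, 256, 20, 20])
```

In a YOLOX-style detector such a block could be inserted after each fused neck feature map, before the detection head; this is only one plausible placement, since the abstract does not specify where FSI-ECA sits in MFFMA-Net.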
Authors: SHI Panwu; FENG Baiming; DING Hongwen (College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China)
Source: Microelectronics & Computer, 2024, Issue 10, pp. 45-54 (10 pages)
Funding: National Natural Science Foundation of China (20967031).
Keywords: object detection; YOLOX; safety helmet; real-time; attention