Method for detecting reflective vests and safety helmets in complex operational environments
Abstract: To address the limitations of existing reflective vest and safety helmet detection algorithms in complex construction-site environments, such as low detection efficiency, poor accuracy, and difficulty in distinguishing subtle differences between targets and background, this paper proposes an improved detection algorithm based on YOLOX. First, the maximum pooling in the spatial pyramid pooling of the backbone network is replaced with average pooling, which eliminates the influence of local maxima, reduces information loss in the feature map, and mitigates the risk of overfitting. Second, a Weighted Convolutional Block Attention Module (W-CBAM) is designed and embedded in the feature fusion layer; its weight coefficients strengthen attention to the spatial dimension of the feature map, emphasize target-region features, and guide the network to focus on the objects to be detected, improving detection accuracy. Finally, an Adaptively Spatial Feature Fusion (ASFF) module is added to dynamically merge feature maps of different scales, resolving the inconsistency that arises in multi-scale feature fusion and improving the model's ability to perceive and represent targets across scales. Experiments were conducted on a public reflective vest and safety helmet dataset expanded with augmentation techniques such as image flipping and noise injection. The results show that the improved algorithm achieves a mean average precision of 98.79%, with precision and recall of 98.72% and 94.63%, respectively, substantially reducing missed and false detections and outperforming both the original YOLOX and other state-of-the-art algorithms. At the same time, the algorithm runs at 68.47 frames per second, enabling accurate real-time detection. The proposed method effectively alleviates the information loss caused by maximum pooling, enhances the expressive power of the feature maps, and performs accurately and efficiently on a high-quality dataset with abundant samples, meeting the detection requirements of construction environments and showing promising application potential.
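As a rough illustration of the first two modifications described in the abstract, the sketch below shows, in PyTorch, a spatial pyramid pooling block that uses average pooling instead of max pooling, and a CBAM-style attention block whose channel and spatial branches are scaled by learnable weights (named W-CBAM after the abstract). Module names, layer sizes, kernel sizes, and the exact weighting scheme are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn


class AvgPoolSPP(nn.Module):
    """Spatial pyramid pooling that replaces max pooling with average pooling."""

    def __init__(self, in_channels, out_channels, kernel_sizes=(5, 9, 13)):
        super().__init__()
        hidden = in_channels // 2
        self.reduce = nn.Conv2d(in_channels, hidden, kernel_size=1)
        # stride 1 and "same" padding keep every branch at the input resolution
        self.pools = nn.ModuleList(
            nn.AvgPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes
        )
        self.fuse = nn.Conv2d(hidden * (len(kernel_sizes) + 1), out_channels, kernel_size=1)

    def forward(self, x):
        x = self.reduce(x)
        branches = [x] + [pool(x) for pool in self.pools]
        return self.fuse(torch.cat(branches, dim=1))


class WeightedCBAM(nn.Module):
    """CBAM-like attention whose two branches are scaled by learnable weights."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        # learnable scalars; the abstract's weighting emphasizes the spatial branch
        self.channel_weight = nn.Parameter(torch.ones(1))
        self.spatial_weight = nn.Parameter(torch.ones(1))

    def forward(self, x):
        # channel attention from global average- and max-pooled descriptors
        avg_desc = self.channel_mlp(x.mean(dim=(2, 3), keepdim=True))
        max_desc = self.channel_mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(self.channel_weight * (avg_desc + max_desc))
        # spatial attention from channel-wise average and max maps
        spatial = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * torch.sigmoid(self.spatial_weight * self.spatial_conv(spatial))


if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)     # dummy neck-level feature map
    feat = AvgPoolSPP(256, 256)(feat)      # SPP with average pooling
    feat = WeightedCBAM(256)(feat)         # weighted attention refinement
    print(feat.shape)                      # torch.Size([1, 256, 40, 40])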
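The third modification, ASFF, blends feature maps from different pyramid levels using per-pixel fusion weights predicted by the network. The minimal sketch below conveys that idea under the assumption that the input maps have already been resized to a common spatial size and channel count (the full ASFF module also handles that rescaling); the class name and layer choices are hypothetical, not the paper's implementation.

import torch
import torch.nn as nn


class SimpleASFF(nn.Module):
    """Adaptive fusion of same-shape feature maps with softmax-normalized weights."""

    def __init__(self, channels, num_levels=3):
        super().__init__()
        # one 1x1 convolution per pyramid level predicts that level's weight map
        self.weight_convs = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_levels)
        )

    def forward(self, feats):
        # feats: list of [B, C, H, W] tensors, one per pyramid level
        logits = torch.cat([conv(f) for conv, f in zip(self.weight_convs, feats)], dim=1)
        weights = torch.softmax(logits, dim=1)  # [B, num_levels, H, W], sums to 1
        return sum(f * weights[:, i : i + 1] for i, f in enumerate(feats))


if __name__ == "__main__":
    maps = [torch.randn(1, 256, 40, 40) for _ in range(3)]
    fused = SimpleASFF(256)(maps)
    print(fused.shape)  # torch.Size([1, 256, 40, 40])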
Authors: XIE Guobo; XIAO Feng; LIN Zhiyi; XIE Jianhui; WU Chenfeng (College of Computer Science, Guangdong University of Technology, Guangzhou 510006, China)
Source: Journal of Safety and Environment (CAS, CSCD, PKU Core Journal), 2024, No. 9, pp. 3513-3521 (9 pages)
Funding: National Natural Science Foundation of China (61802072); Science and Technology Project of Guangdong Power Grid Co., Ltd. (GDKJXM20230718)
Keywords: safety engineering; reflective vest detection; safety helmet detection; YOLOX; attention module; adaptive spatial feature fusion