Abstract
To address the missed detections and poor small-object performance of existing algorithms for safety helmet and safety vest detection in construction-site scenes, this paper proposes an improved YOLOv7-based detection algorithm, denoted Eff-YOLOv7. First, the algorithm uses the EfficientViT Module in the feature extraction stage to strengthen information extraction from the image. Second, in the feature fusion stage, a feature fusion extraction block is proposed that integrates global features with local information, so that the intermediate feature maps better incorporate contextual information. Finally, a new bounding-box localization loss, ICIoU Loss, is proposed to compute the positional deviation between predicted boxes and ground-truth boxes in a more fine-grained way. To verify the effectiveness of the proposed algorithm, extensive experiments were conducted on the benchmark MS COCO dataset and a self-constructed safety helmet and vest dataset. The results show that Eff-YOLOv7 substantially improves on YOLOv7 across several metrics, such as mAP0.5 and mAP0.5:0.95, on both datasets, providing a new method for high-precision safety helmet and vest detection.
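The abstract does not give the ICIoU formulation, but losses in this family typically refine CIoU, which penalizes IoU, center distance, and aspect-ratio mismatch between predicted and ground-truth boxes. As a point of reference only (not the paper's method), a minimal sketch of the standard CIoU loss:

```python
import math

def ciou_loss(box_p, box_g):
    """Complete-IoU (CIoU) loss for two boxes in (x1, y1, x2, y2) form.

    Loss = 1 - IoU + d^2/c^2 + alpha * v, where d is the center distance,
    c the diagonal of the smallest enclosing box, and v measures
    aspect-ratio inconsistency. ICIoU in the paper presumably refines
    terms of this kind; its exact form is not given in the abstract.
    """
    # Intersection area
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    iou = inter / (wp * hp + wg * hg - inter)

    # Squared distance between box centers
    d2 = ((box_p[0] + box_p[2]) / 2 - (box_g[0] + box_g[2]) / 2) ** 2 \
       + ((box_p[1] + box_p[3]) / 2 - (box_g[1] + box_g[3]) / 2) ** 2
    # Squared diagonal of the smallest enclosing box
    cx1, cy1 = min(box_p[0], box_g[0]), min(box_p[1], box_g[1])
    cx2, cy2 = max(box_p[2], box_g[2]), max(box_p[3], box_g[3])
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2

    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan(wg / hg) - math.atan(wp / hp)) ** 2
    alpha = v / (1 - iou + v) if iou < 1 else 0.0

    return 1 - iou + d2 / c2 + alpha * v
```

For identical boxes the loss is 0, and it grows as the predicted box drifts from the ground truth, which is what makes this family better suited to small-object localization than plain IoU loss.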
Author
LI Wensheng (李文生), School of Computer and Information Science, Three Gorges University, Yichang, Hubei 443000
Source
Changjiang Information & Communications (《长江信息通信》), 2024, No. 5, pp. 5-9, 32 (6 pages)