Abstract: To address the low detection accuracy and missed detections encountered in real-time ship detection, an improved ship target detection algorithm based on YOLOv3-Tiny is proposed. Depthwise separable convolution is introduced into the backbone network, increasing the number of channels while reducing the model's parameter count and computational cost; the H-Swish and Leaky ReLU activation functions are used to improve the convolutional structure and extract richer feature information; and the GIOU (Generalized Intersection Over Union) loss is used to optimize the bounding boxes, emphasizing the overlap with the target region and improving precision. Detection results on a mixed ship dataset show that the improved YOLOv3-Tiny reaches a detection accuracy of 83.40%, 5.33 percentage points higher than the original algorithm, while its recall and detection speed also surpass the original, making it suitable for real-time ship detection.
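The backbone change described in this abstract relies on depthwise separable convolution, which factors a standard convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel mix. The PyTorch sketch below is a minimal illustration of such a block under assumed settings (H-Swish after the depthwise stage, Leaky ReLU after the pointwise stage, 3x3 kernel, example channel widths); it is not the authors' exact implementation.

```python
# Minimal sketch of a depthwise separable convolution block (PyTorch).
# Activation placement, kernel size, and channel widths are assumptions
# for illustration, not the paper's exact configuration.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depthwise stage: one 3x3 filter per input channel (groups = in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.act1 = nn.Hardswish()       # H-Swish, as mentioned in the abstract
        # Pointwise stage: 1x1 convolution mixes channels and sets the output width.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.act2 = nn.LeakyReLU(0.1)    # Leaky ReLU, as mentioned in the abstract

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act1(self.bn1(self.depthwise(x)))
        return self.act2(self.bn2(self.pointwise(x)))

if __name__ == "__main__":
    block = DepthwiseSeparableConv(32, 64, stride=2)
    y = block(torch.randn(1, 32, 128, 128))
    print(y.shape)  # torch.Size([1, 64, 64, 64])
```

Compared with a full 3x3 convolution from 32 to 64 channels, this factorization cuts multiply-accumulate operations roughly by the ratio of the two channel counts, which is the source of the parameter and FLOP savings the abstract reports.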
Abstract: Road traffic safety can decrease when drivers operate vehicles in low-visibility environments. Applying visual perception technology to detect vehicles and pedestrians in infrared images is an effective means of reducing the risk of accidents. To tackle the low recognition accuracy and the substantial computational burden of current infrared pedestrian-vehicle detection methods, an infrared pedestrian-vehicle detection method based on an enhanced version of You Only Look Once version 5 (YOLOv5) is proposed. First, a detection head specifically designed for small targets is integrated into the model to make full use of shallow feature information and improve small-target detection accuracy. Second, the Focal Generalized Intersection over Union (GIoU) loss replaces the original loss function to address target overlap and category imbalance. Third, distribution shift convolution is used to optimize the feature extraction operator, alleviating the computational burden of the model without significantly compromising detection accuracy. Test results show that the improved algorithm reaches a mean average precision (mAP) of 90.1% while requiring only 9.1 Giga Floating Point Operations (GFLOPs). At a comparable computational cost, it outperforms YOLOv6n (11.9 GFLOPs), YOLOv8n (8.7 GFLOPs), YOLOv7t (13.2 GFLOPs), and YOLOv5s (16.0 GFLOPs), with mAPs that are 4.4%, 3%, 3.5%, and 1.7% higher, respectively, demonstrating greater detection accuracy under a similar computational resource overhead. Conversely, compared with larger models such as YOLOv8l (91.1% mAP), YOLOv6l (89.5%), YOLOv7 (90.8%), and YOLOv3 (90.1%), the improved algorithm needs only 5.5%, 2.3%, 8.6%, and 2.3% of their GFLOPs, respectively. The improved algorithm therefore achieves a markedly better balance between accuracy and computational efficiency, making it promising for practical use in resource-limited scenarios.
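The Focal GIoU loss mentioned in this abstract combines the GIoU regression term with a focal-style reweighting. Since the abstract does not spell out the exact formulation, the PyTorch sketch below follows the commonly used Focal-EIoU-style scheme, weighting each box's GIoU loss by its IoU raised to a power gamma; the (x1, y1, x2, y2) box format, gamma, and eps values are illustrative assumptions rather than the authors' settings.

```python
# Hedged sketch of a focal-weighted GIoU loss (PyTorch). The reweighting
# scheme (IoU**gamma, Focal-EIoU style) and hyperparameters are assumptions;
# the abstract does not give the exact formulation.
import torch

def focal_giou_loss(pred: torch.Tensor, target: torch.Tensor,
                    gamma: float = 0.5, eps: float = 1e-7) -> torch.Tensor:
    """pred, target: (N, 4) boxes in (x1, y1, x2, y2) format."""
    # Intersection area
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    # Union area and IoU
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter + eps
    iou = inter / union

    # Smallest enclosing box for the GIoU penalty term
    cx1 = torch.min(pred[:, 0], target[:, 0])
    cy1 = torch.min(pred[:, 1], target[:, 1])
    cx2 = torch.max(pred[:, 2], target[:, 2])
    cy2 = torch.max(pred[:, 3], target[:, 3])
    enclose = (cx2 - cx1) * (cy2 - cy1) + eps

    giou = iou - (enclose - union) / enclose
    loss_giou = 1.0 - giou
    # Focal-style reweighting: scale each box's loss by IoU**gamma so that
    # well-overlapping (higher-quality) predictions contribute more strongly.
    return (iou.detach().pow(gamma) * loss_giou).mean()

if __name__ == "__main__":
    p = torch.tensor([[0., 0., 10., 10.], [5., 5., 15., 15.]])
    t = torch.tensor([[1., 1., 11., 11.], [0., 0., 10., 10.]])
    print(focal_giou_loss(p, t).item())
```

The GIoU term extends plain IoU by penalizing the empty area of the smallest enclosing box, so non-overlapping boxes still receive a useful gradient, which is what makes it attractive for crowded infrared scenes with heavy target overlap.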