
Traffic Signals Recognition Based on YOLOv8
(基于YOLOv8的交通信号灯识别) Cited by: 1
Abstract: The recognition of traffic lights is crucial for driver-assistance systems, helping to reduce accidents and improve driving safety. This paper proposes a traffic light recognition method based on YOLOv8, consisting of three main parts: dataset construction, model training, and testing in natural scenes. First, a publicly available traffic light dataset was annotated and trained with the YOLOv8 framework to obtain the optimal model. The trained model was then tested on real road scenes and produced reasonably accurate results. Experimental comparison shows that the YOLOv8-trained model performs well: it improves detection speed while maintaining accuracy and also handles partial occlusion and small-object detection, thereby improving recognition accuracy and efficiency. Applied in a driver-assistance system, the method can distinguish arrow (directional) signals from full-screen (solid) signals more precisely, helping to improve the safety and stability of the vehicle on the road. Most existing methods only identify the color and overall position of traffic lights; this paper further classifies and recognizes the various directional indicators and their colors on the lights. The YOLOv8 algorithm also greatly reduces computational cost while using fewer parameters. Experimental results show that after 200 training epochs the model reaches an mAP50-95 of 82.6% at a detection speed of 27.2 FPS.
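The abstract gives no implementation details, so the following is only a minimal sketch of the train-evaluate-predict pipeline it describes, written against the public Ultralytics YOLOv8 Python API. The dataset file name (traffic_lights.yaml), the model size (yolov8n), and the sample image are illustrative assumptions, not values taken from the paper.

# Minimal sketch of the pipeline described in the abstract (assumptions:
# dataset YAML, model size, and image path are hypothetical placeholders).
from ultralytics import YOLO

# 1. Start from a pretrained YOLOv8 checkpoint (model size is an assumption).
model = YOLO("yolov8n.pt")

# 2. Train on an annotated traffic-light dataset described by a YOLO-format
#    YAML file (class list, e.g. arrow directions and colors, defined there).
model.train(data="traffic_lights.yaml", epochs=200, imgsz=640)

# 3. Evaluate on the validation split; box.map is mAP50-95.
metrics = model.val()
print(metrics.box.map)

# 4. Run inference on a real road-scene image and save the visualization.
results = model.predict("road_scene.jpg", conf=0.25, save=True)
for r in results:
    print(r.boxes.cls, r.boxes.conf)  # predicted class ids and confidences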
Authors: 赵恩兴, 王超
Affiliations: 宿州学院 (Suzhou University); 同济大学 (Tongji University)
Source: Artificial Intelligence and Robotics Research (《人工智能与机器人研究》), 2023, Issue 3, pp. 246-254 (9 pages)