
基于特征加权视觉增强的雷视融合车辆检测方法 (Cited by: 2)

A Method for Radar-camera Fusion Vehicle Detection Based on Feature Weighting and Visual Enhancement
Abstract: To improve vehicle detection accuracy under the low-light and long-distance detection requirements of expressways, a radar-camera fusion vehicle target detection method based on visual enhancement and feature weighting is proposed. First, starting from radar-camera data-level fusion, the spatial locations of potential targets are characterized with millimeter-wave radar, and the characterization results are used to delimit long-distance target regions in the visual images. Then, the images of these regions are reconstructed, detected, and restored to improve the visual detection accuracy of long-distance targets. Next, radar-camera feature-level fusion is modeled. Considering the differences in the contribution of different layers to feature detection, the weight parameters of different feature maps are obtained through model training, and the features of different layers are fused according to these weights to enhance the target feature information. In addition, a branch network is added: convolutional layers with different kernel sizes extract different receptive-field information from the feature maps, and the branch outputs are fused to obtain a stronger image representation, which improves detection accuracy in low light. Finally, combining the feature-weighted radar-camera framework with the visual enhancement based on millimeter-wave radar spatial preprocessing, a radar-camera fusion detection network is designed on the YOLOv4-tiny framework and a verification system is built. The results show that (1) in the low-light environment, the average precision (AP) of the proposed algorithm is 20% higher than that of YOLOv4 and 5% higher than that of the typical radar-camera fusion algorithm RVNet; (2) in the detection performance tests at different distances, when detecting targets at 120 m, the AP of the proposed algorithm is 73% higher than that of YOLOv4 and 63% higher than that of RVNet. The method thus extends the coverage distance and improves the low-light detection accuracy of vehicle detection in intelligent transportation systems (ITS).
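The abstract does not give implementation details for the data-level fusion step; the following is a minimal illustrative sketch (not the authors' code) of the general idea: projecting millimeter-wave radar detections into the image through a calibrated radar-to-camera transform, cropping the long-distance region around each projection, and upscaling it before detection. The calibration matrices, ROI size, and scale factor here are hypothetical placeholders.

```python
# Illustrative sketch only: project radar targets into the image and
# crop/upscale a long-distance region for re-detection.
# K and T_radar_to_cam are hypothetical calibration values, not from the paper.
import numpy as np
import cv2

K = np.array([[1000.0, 0.0, 960.0],        # hypothetical camera intrinsics
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
T_radar_to_cam = np.eye(4)                  # hypothetical radar->camera extrinsics


def project_radar_point(xyz_radar):
    """Project a radar detection (x, y, z) in radar coordinates to pixel (u, v)."""
    p = T_radar_to_cam @ np.append(xyz_radar, 1.0)   # transform to camera frame
    uvw = K @ p[:3]                                   # pinhole projection
    return uvw[:2] / uvw[2]


def crop_and_upscale(image, center_uv, roi=128, scale=4):
    """Crop a small region around the projected target and upscale it,
    so a distant, low-resolution vehicle occupies more pixels before detection."""
    h, w = image.shape[:2]
    u, v = int(center_uv[0]), int(center_uv[1])
    x0, y0 = max(u - roi // 2, 0), max(v - roi // 2, 0)
    x1, y1 = min(u + roi // 2, w), min(v + roi // 2, h)
    patch = image[y0:y1, x0:x1]
    patch_up = cv2.resize(patch, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_CUBIC)
    return patch_up, (x0, y0, scale)   # offset/scale used to map boxes back later
```

Boxes detected in the upscaled patch would then be divided by `scale` and offset by `(x0, y0)` to map them back into the original image, corresponding to the "reconstruct, detect, and restore" step described in the abstract.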
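For the feature-level fusion with learned weights, a minimal sketch of one plausible realization is shown below: radar and camera feature maps of the same spatial size are combined with learnable, softmax-normalized weights so that the contribution of each branch is obtained through training. The module name, channel counts, and tensor shapes are assumptions for illustration only.

```python
# Illustrative sketch only: weighted fusion of radar and camera feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightedFeatureFusion(nn.Module):
    def __init__(self, num_inputs=2):
        super().__init__()
        # One scalar weight per input feature map, learned during training.
        self.weights = nn.Parameter(torch.ones(num_inputs))

    def forward(self, feature_maps):
        # Normalize so the contributions of the inputs sum to 1.
        w = F.softmax(self.weights, dim=0)
        return sum(w[i] * f for i, f in enumerate(feature_maps))


# Usage: both feature maps must share shape, e.g. (B, 256, 40, 40).
fusion = WeightedFeatureFusion(num_inputs=2)
camera_feat = torch.randn(1, 256, 40, 40)
radar_feat = torch.randn(1, 256, 40, 40)
fused = fusion([camera_feat, radar_feat])   # (1, 256, 40, 40)
```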
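The branch network that extracts different receptive fields can likewise be sketched as parallel convolutions with different kernel sizes whose outputs are fused, in the spirit of the description above. The specific kernel sizes, channel counts, and activation are hypothetical choices, not values from the paper.

```python
# Illustrative sketch only: parallel branches with different receptive fields,
# fused by a 1x1 convolution.
import torch
import torch.nn as nn


class MultiReceptiveFieldBranch(nn.Module):
    def __init__(self, in_ch=256, branch_ch=64):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.branch3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        # 1x1 conv fuses the concatenated branches back to in_ch channels.
        self.fuse = nn.Conv2d(3 * branch_ch, in_ch, kernel_size=1)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        b1 = self.act(self.branch1(x))
        b3 = self.act(self.branch3(x))
        b5 = self.act(self.branch5(x))
        return self.act(self.fuse(torch.cat([b1, b3, b5], dim=1)))


x = torch.randn(1, 256, 40, 40)
out = MultiReceptiveFieldBranch()(x)        # (1, 256, 40, 40)
```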
Authors: LI Xiao-huan; HUO Ke-xin; YAN Xiao-feng; TANG Xin; XU Shao-hua (Guilin University of Electronic Technology, Guilin, Guangxi 541004, China; Guangxi Comprehensive Transportation Big Data Research Institute, Nanning, Guangxi 530000, China; Guangxi Transportation Science and Technology Group Co., Ltd., Nanning, Guangxi 530000, China; Guangxi Beitou IT Innovation Technology Investment Group Co., Ltd., Nanning, Guangxi 530000, China)
Source: 《公路交通科技》 Journal of Highway and Transportation Research and Development (CAS, CSCD, PKU Core), 2023, Issue 2, pp. 182-189 (8 pages)
Funding: National Natural Science Foundation of China (61762030); Guangxi Key Research and Development Program (AB21196021, AB21196032); Guangxi Science Fund for Distinguished Young Scholars (2019GXNSFFA245007)
Keywords: traffic engineering; radar and camera fusion; feature weighting; visual enhancement; deep learning