Journal article

Driverless vehicle positioning algorithm based on simultaneous positioning and mapping in low-visibility environment
Cited by: 3
Abstract: To achieve high-precision positioning of driverless vehicles in large-scale, low-visibility environments, a fused positioning algorithm, LVG_SLAM, was proposed on the system framework of the VINS-Mono algorithm, adding an RFAST low-light image enhancement module to the front end and a VG fusion positioning module to the back end. The RFAST module applies a wavelet transform to separate the detail information of the original input image from its brightness information; the noise contained in the detail information is suppressed by unified thresholding and mean filtering, and the details are then enhanced with a bilateral texture filter. On this basis, a multi-scale Retinex algorithm enhances the image contrast, raising the success rate of corner extraction in low-visibility environments, which stabilizes image tracking and improves the robustness of the positioning algorithm. Based on an unscented Kalman filter (UKF), the VG fusion positioning module loosely couples positioning information from the global navigation satellite system (GNSS) with inertial navigation measurements; the fused positioning result is introduced as a constraint into the VI-SLAM back end, where joint nonlinear optimization reduces the influence of accumulated error on positioning accuracy.

Computational results show that, compared with the VINS-Mono algorithm, the improved LVG_SLAM algorithm performs better on the EuRoC and KITTI public datasets: the root mean square error is reduced by 38.76% and 58.39%, respectively, and the estimated trajectory is closer to the ground truth. In a real night-road scene, the LVG_SLAM algorithm keeps the positioning error within a bounded range and successfully detects loop closures, greatly improving positioning performance; the root mean square error, mean error, maximum error, and median error are reduced by 79.61%, 82.50%, 71.31%, and 83.77%, respectively. Compared with VINS-Mono, LVG_SLAM therefore has clear advantages in both positioning accuracy and robustness. 4 tabs, 12 figs, 26 refs.
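As a rough illustration of the multi-scale Retinex contrast-enhancement step described in the abstract, the sketch below subtracts a log-illumination estimate (the Gaussian-smoothed image) from the log-image at several scales, averages the results, and rescales to the 8-bit range. The scale values and the normalization are assumptions for illustration, not the authors' RFAST implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250), eps=1.0):
    """Multi-scale Retinex on a single-channel float image.

    At each scale, the Gaussian-smoothed image approximates the
    illumination; subtracting its log from the log-image keeps the
    reflectance (detail) component. The per-scale results are
    averaged and stretched back to [0, 255].
    """
    img = img.astype(np.float64) + eps            # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        illumination = gaussian_filter(img, sigma) + eps
        msr += np.log(img) - np.log(illumination)
    msr /= len(sigmas)
    # Stretch the result to the displayable 8-bit range.
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)
    return (msr * 255).astype(np.uint8)
```

Larger scales recover global contrast while smaller scales preserve local detail, which is why several scales are averaged rather than using a single one.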
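The loose GNSS/inertial coupling in the VG module can be sketched with a minimal one-dimensional unscented Kalman filter: the inertial acceleration reading drives the motion model in the predict step, and the GNSS fix enters only as a position measurement in the update step. The two-element state, the motion model, and all noise values are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, kappa=0.0, beta=2.0):
    """Scaled unscented transform: sigma points plus mean/cov weights."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)         # columns are offsets
    pts = np.vstack([x, x + L.T, x - L.T])        # (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    return pts, wm, wc

def ukf_step(x, P, accel, z, dt, Q, R):
    """One UKF predict/update on state [position, velocity]:
    propagate with an inertial acceleration reading, then correct
    with a GNSS position fix z."""
    def f(s):                                     # constant-acceleration model
        p, v = s
        return np.array([p + v * dt + 0.5 * accel * dt**2, v + accel * dt])

    pts, wm, wc = sigma_points(x, P)
    prop = np.array([f(s) for s in pts])
    x_pred = wm @ prop
    P_pred = Q + sum(w * np.outer(d, d)
                     for w, d in zip(wc, prop - x_pred))
    # Measurement model: GNSS observes position only.
    zs = prop[:, 0]
    z_pred = wm @ zs
    S = R + sum(w * dz**2 for w, dz in zip(wc, zs - z_pred))
    C = sum(w * np.outer(d, dz)
            for w, d, dz in zip(wc, prop - x_pred, zs - z_pred))
    K = C / S                                     # Kalman gain, shape (2, 1)
    x_new = x_pred + (K * (z - z_pred)).ravel()
    P_new = P_pred - S * np.outer(K, K)
    return x_new, P_new
```

Because the coupling is loose, GNSS and inertial data are fused at the level of position and motion estimates rather than raw observables; the fused state could then serve as the pose constraint passed to the back-end nonlinear optimization, as the abstract describes.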
Authors: GAO Yang; CAO Wang-xin; XIA Hong-yao; ZHAO Yi-hui (School of Automobile, Chang'an University, Xi'an 710064, Shaanxi, China; Xi'an Coal Mining Machinery Co., Ltd., Xi'an 710200, Shaanxi, China)
Source: Journal of Traffic and Transportation Engineering (EI, CSCD, Peking University Core), 2022, Issue 3, pp. 251-262 (12 pages)
Funding: National Key Research and Development Program of China (2019YFB1600100); Natural Science Foundation of Shaanxi Province (2019JLP-07)
Keywords: intelligent transportation; environmental perception; simultaneous localization and mapping; low-light image enhancement; noise suppression; fusion positioning
