Positioning and mapping technology is a challenging and active research topic in autonomous driving environment sensing systems. In a complex traffic environment, the signal of the Global Navigation Satellite System (GNSS) can be blocked, leading to inaccurate vehicle positioning. To ensure the safety of autonomous electric campus vehicles, this study builds on the Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain (LEGO-LOAM) algorithm with an added monocular vision system. An algorithm framework based on Lidar-IMU-Camera fusion (Lidar: light detection and ranging) was proposed. A lightweight monocular visual odometry model was used, and the LEGO-LOAM system was employed to initialize the monocular vision module. The visual odometry estimate was taken as the initial value for the laser odometry. In the back-end optimization stage, an error-state Kalman filter fusion algorithm was employed to fuse the visual odometry and the LEGO-LOAM system for positioning. A visual bag-of-words model was applied to perform loop-closure detection, and, based on the test results, the lidar loop-closure detection was further optimized to reduce the accumulated positioning error. Real-vehicle experiments showed that the proposed algorithm improves mapping quality and positioning accuracy in a campus environment. The Lidar-IMU-Camera framework was also verified on the Hong Kong urban dataset UrbanNav. Compared with the LEGO-LOAM algorithm, the results show that the proposed algorithm effectively reduces map drift, improves map resolution, and outputs more accurate driving trajectory information.
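The abstract describes fusing visual odometry with LEGO-LOAM laser odometry through error-state Kalman filtering. The following minimal Python sketch is not the paper's implementation; it illustrates the general prediction/correction pattern on a simplified planar pose state [x, y, yaw], with the function names (`predict`, `update`) and the noise matrices Q and R chosen purely for illustration. The paper's actual formulation is an error-state filter on the full 6-DoF pose.

```python
import numpy as np

# Minimal sketch (assumed, simplified): a visual-odometry pose increment
# drives the prediction, and the lidar-odometry pose acts as the measurement
# in a standard Kalman update on a 3-DoF planar state [x, y, yaw].

def predict(x, P, delta_vo, Q):
    """Propagate the state with the visual-odometry increment delta_vo."""
    x_pred = x + delta_vo          # simplified additive motion model
    P_pred = P + Q                 # inflate covariance by VO increment noise
    return x_pred, P_pred

def update(x_pred, P_pred, z_lidar, R):
    """Correct the prediction with the lidar-odometry pose z_lidar."""
    H = np.eye(3)                  # lidar odometry observes the full pose
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z_lidar - H @ x_pred)
    P = (np.eye(3) - K @ H) @ P_pred
    return x, P

if __name__ == "__main__":
    x = np.zeros(3)
    P = np.eye(3) * 0.1
    Q = np.diag([0.02, 0.02, 0.01])   # assumed VO increment noise
    R = np.diag([0.05, 0.05, 0.02])   # assumed lidar odometry noise
    x, P = predict(x, P, np.array([1.0, 0.0, 0.05]), Q)
    x, P = update(x, P, np.array([0.98, 0.02, 0.04]), R)
    print("fused pose:", x)
```

In the fused system the lidar measurement dominates when its covariance R is small relative to the propagated covariance, which mirrors how the abstract uses the visual odometry only as an initial value for the laser odometry.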
To address the problems of lost visual features, loop-closure trajectory drift, and elevation pose deviation in coupled visual-lidar simultaneous localization and mapping (SLAM), a tightly coupled visual-lidar SLAM method with scan-context loop-closure detection is proposed. A visual odometry front end based on SIFT and ORB feature detectors handles lost feature points and failed matching; the lidar odometry fuses inter-frame estimates from the visual odometry to remove lidar point-cloud distortion and large-scale drift; loop closures are detected through scan context, and a factor graph is introduced to optimize odometry drift and eliminate loop-closure detection failures. The proposed algorithm is validated on several KITTI sequences and compared with classical algorithms; the experimental results show that it achieves high stability, strong robustness, low drift, and high accuracy.
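This abstract relies on scan-context loop-closure detection. The sketch below is an assumed, simplified illustration of that idea rather than the authors' code: a lidar scan is binned into rings and sectors in polar coordinates, each bin keeps the maximum point height, and two scans are compared with a column-shift-invariant cosine distance so the match tolerates yaw changes. The bin counts and range are typical values, not taken from the paper.

```python
import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Build a ring x sector descriptor; points is an (N, 3) array of x, y, z."""
    desc = np.zeros((num_rings, num_sectors))
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0])          # [-pi, pi]
    ring = np.clip((r / max_range * num_rings).astype(int), 0, num_rings - 1)
    sector = np.clip(((theta + np.pi) / (2 * np.pi) * num_sectors).astype(int),
                     0, num_sectors - 1)
    for i, j, z in zip(ring, sector, points[:, 2]):
        desc[i, j] = max(desc[i, j], z)   # keep max height; negatives clipped to 0
    return desc

def sc_distance(a, b):
    """Column-shift-invariant distance between two scan-context descriptors."""
    best = np.inf
    for shift in range(b.shape[1]):
        shifted = np.roll(b, shift, axis=1)
        num = np.sum(a * shifted, axis=0)
        den = np.linalg.norm(a, axis=0) * np.linalg.norm(shifted, axis=0) + 1e-9
        best = min(best, np.mean(1.0 - num / den))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scan = rng.uniform(-40, 40, size=(5000, 3))
    d_same = sc_distance(scan_context(scan), scan_context(scan))
    print("self distance (should be near 0):", round(d_same, 4))
```

A loop closure candidate is accepted when this distance falls below a threshold; the resulting constraint is what the abstract feeds into the factor graph to correct accumulated odometry drift.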
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 51975088 and 51975089).