Abstract
Simultaneous localization and mapping (SLAM) is one of the basic requirements of autonomous driving. Multisensor fusion, in particular the fusion of lidar and camera, is essential for autonomous driving, and how to adjust the confidence of different sensors across various scenarios is a key problem. To address this, an adaptive tightly coupled lidar-visual SLAM (AVLS) algorithm is proposed. First, AVLS is built on a sliding-window-based factor graph and includes modules such as flexible depth association and elastic initialization, which improve the accuracy and robustness of the overall algorithm. Second, to fully exploit the performance of lidar and camera in different environments, a dynamic weighting scheme based on prior knowledge is adopted. Finally, comprehensive experiments are conducted on two publicly available large-scale autonomous driving datasets, including comparisons with classical algorithms and ablation studies. The experimental results show that AVLS achieves state-of-the-art robustness and accuracy.
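The abstract does not give implementation details of the prior-knowledge-based dynamic weighting, but the idea of adjusting lidar and camera confidence per environment can be illustrated with a minimal sketch. All function names, the environment priors, and the weighting rule below are assumptions for illustration, not the authors' actual scheme:

```python
# Hypothetical sketch of prior-knowledge-based dynamic sensor weighting
# for a tightly coupled lidar-visual cost. NOT the AVLS paper's scheme.

def sensor_weights(feature_richness, illumination):
    """Map environment priors (both in [0, 1]) to normalized weights.

    feature_richness: geometric structure available to lidar scan matching.
    illumination: lighting quality available to the camera.
    Returns (w_lidar, w_cam) with w_lidar + w_cam == 1.
    """
    w_lidar = 0.5 + 0.5 * feature_richness * (1.0 - illumination)
    w_cam = 0.5 + 0.5 * illumination * (1.0 - feature_richness)
    s = w_lidar + w_cam
    return w_lidar / s, w_cam / s

def weighted_cost(lidar_residuals, visual_residuals, w_lidar, w_cam):
    """Combined squared-error cost over one sliding-window snapshot."""
    return (w_lidar * sum(r * r for r in lidar_residuals)
            + w_cam * sum(r * r for r in visual_residuals))

# Example: a dark tunnel with strong geometry -> lidar dominates.
w_lidar, w_cam = sensor_weights(feature_richness=0.9, illumination=0.1)
```

In a real tightly coupled system, such weights would scale the information matrices of the lidar and visual factors inside the sliding-window factor graph rather than a scalar cost.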
Authors
Zhou Weichao; Huang Jun (Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China; University of Chinese Academy of Sciences, Beijing 100049, China)
Source
《激光与光电子学进展》 (Laser & Optoelectronics Progress), indexed in CSCD and the Peking University Core Journal list
2023, Issue 20, pp. 235-242 (8 pages)
Funding
Supported by the National Key Research and Development Program of China (2019YFC1521204, 2020YFC1523202).
Keywords
lidar
simultaneous localization and mapping
sensor fusion
autonomous navigation
sparse pose optimization