Abstract
Navigation techniques centered on visual-inertial sensing accumulate error during long-term operation, which leads to severe trajectory drift. To address this problem, a global low-bias visual/inertial/weak-positioning-aided fusion navigation system is proposed. Building on the high output rate and high local accuracy of visual-inertial odometry, the system provides optional global-information aiding schemes for different indoor and outdoor scenes, fusing raw GNSS measurements, ultrasonic base-station ranging, and visual-marker positioning information to achieve integrated indoor-outdoor navigation with low global bias. A data-collection platform was built to gather real-world data, and ground-truth trajectories were generated by LiDAR point-cloud registration. The navigation accuracy of the proposed method was compared with VINS-Mono and ORB-SLAM3, showing that the proposed system is the most robust across indoor and outdoor scenes under the varying illumination conditions of all-day operation and achieves the best navigation accuracy both locally and globally.
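For context, the tightly coupled fusion described above can be read as a nonlinear least-squares problem over a factor graph. The display below is only a generic sketch of such an objective, not the paper's exact formulation; the residual symbols and covariances are illustrative placeholders.

$$
\hat{\mathcal{X}}=\arg\min_{\mathcal{X}}\Big(
\sum_{(i,l)}\big\|\mathbf{r}_{\mathrm{cam}}(\mathbf{z}_{il},\mathcal{X})\big\|^{2}_{\Sigma_{\mathrm{cam}}}
+\sum_{k}\big\|\mathbf{r}_{\mathrm{imu}}(\mathbf{z}_{k},\mathcal{X})\big\|^{2}_{\Sigma_{\mathrm{imu}}}
+\sum_{m}\big\|\mathbf{r}_{\mathrm{gnss}}(\mathbf{z}_{m},\mathcal{X})\big\|^{2}_{\Sigma_{\mathrm{gnss}}}
+\sum_{n}\big\|\mathbf{r}_{\mathrm{us}}(\mathbf{z}_{n},\mathcal{X})\big\|^{2}_{\Sigma_{\mathrm{us}}}
\Big)
$$

Here $\mathbf{r}_{\mathrm{cam}}$ denotes visual reprojection residuals, $\mathbf{r}_{\mathrm{imu}}$ IMU pre-integration residuals, $\mathbf{r}_{\mathrm{gnss}}$ pseudorange/Doppler residuals, and $\mathbf{r}_{\mathrm{us}}$ ultrasonic ranging residuals; the GNSS and ultrasonic terms appear only when the corresponding optional module is active.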
In recent years, the rapid development of mobile robots, autonomous driving, drones, and related technologies has increased the demand for high-precision navigation in complex environments. Visual-inertial odometry (VIO) has been widely used in robot navigation because of its low cost and high practicality. However, owing to its relative measurement principle, cumulative error grows significantly during long-term operation. To solve this problem, a global low-bias visual/inertial/weak-positioning-aided fusion navigation system is proposed. The system provides optional schemes for integrating several unbiased positioning sources, including raw Global Navigation Satellite System (GNSS) measurements, ultrasonic base-station ranging, and visual-marker positioning information, thereby fully combining the advantages of global information and visual-inertial odometry. As a result, low-bias global navigation with high precision, high continuity, high real-time performance, and indoor-outdoor integration is obtained. The main framework of the system is a factor graph model built on visual-inertial odometry, which ensures high-frequency pose output and seamless indoor/outdoor switching. The visual-inertial residual factors are defined from the visual reprojection model and the IMU pre-integration model. For different application scenarios, GNSS constraints and ultrasonic constraints are introduced as optional factors and state variables, and the corresponding GNSS and ultrasonic residuals are defined. The GNSS factor constructs residuals from pseudorange measurements and Doppler-shift information, while the ultrasonic factor constructs residuals from ultrasonic positioning results and base-station range measurements. In addition, an optional ArUco visual correction module is provided: based on prior knowledge of ArUco marker positions and an ArUco marker recognition algorithm, an ArUco-assisted global pose optimization method is defined. A wheeled robot platform equipped with multiple sensors, including cameras and LiDAR, was built to collect data and test the algorithm in an underground parking lot and the connected above-ground building complex. The experimental scene was scanned with a laser scanner to generate a ground-truth map, and VIO-assisted LiDAR point clouds were registered against this map prior to obtain accurate ground-truth trajectories. The positioning and navigation performance of three methods, namely the proposed method, VINS-Mono, and ORB-SLAM3, was evaluated in three scenarios: indoor, indoor-outdoor during the day, and indoor-outdoor at night. The results show that in all three scenarios the RPE and ATE of the proposed method are superior to those of the other methods. In particular, under the harsh conditions of the indoor-outdoor scene at night, the ATE RMSE of the proposed method is 3.495 m, significantly better than that of VINS-Mono (10.77 m) and ORB-SLAM3 (15.02 m). The experiment also compares the proposed method with the VIO+ArUco module against VINS-Mono with loop-closure detection enabled, demonstrating that the ArUco module is of great value for eliminating global cumulative error and improving global navigation accuracy, and can to a certain extent compensate for loop-closure detection failures. In general, this paper presents an extensible visual-inertial navigation system weakly aided by multi-modal information. The experimental results show that the proposed system offers excellent global positioning accuracy and generality across different scenes. In future work, the range of multi-modal information can be further expanded to explore fusion schemes for additional sensors such as LiDAR and magnetometers.
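As a concrete illustration of how the ArUco-assisted correction described above can be realized, the sketch below detects a marker and recovers its pose in the camera frame. It is an assumed minimal example (OpenCV >= 4.7 with the aruco module, a calibrated camera, and a known marker side length), not the authors' implementation.

# Minimal sketch of ArUco marker detection and camera-frame pose recovery.
# Assumptions (not from the paper): OpenCV >= 4.7 with the aruco module,
# camera intrinsics K, distortion dist, and marker side length in metres.
import cv2
import numpy as np

def detect_marker_pose(gray, K, dist, marker_length):
    """Return (marker_id, rvec, tvec) for the first detected marker, or None."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None
    half = marker_length / 2.0
    # Marker corners in the marker's own frame (planar, z = 0), matching the
    # corner order returned by detectMarkers (top-left, then clockwise).
    obj_pts = np.array([[-half,  half, 0.0], [ half,  half, 0.0],
                        [ half, -half, 0.0], [-half, -half, 0.0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2), K, dist)
    return (int(ids[0][0]), rvec, tvec) if ok else None

Given a surveyed prior of the marker's pose in the global frame, chaining it with the inverse of the camera-to-marker transform yields an absolute camera pose, which is the kind of drift-free constraint the abstract refers to as ArUco-assisted global pose optimization.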
Authors
徐宇枫
刘沅秩
秦明辉
赵辉
陶卫
XU Yufeng; LIU Yuanzhi; QIN Minghui; ZHAO Hui; TAO Wei (School of Sensing Science and Engineering, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China)
Source
《光子学报》
EI
CAS
CSCD
Peking University Core Journals
2024, No. 4, pp. 209-220 (12 pages)
Acta Photonica Sinica
Funding
National Key Research and Development Program of China (No. 2018YFB1305005).
Keywords
Visual-inertial odometry
Global navigation satellite system
Ultrasonic positioning
Visual marker
Indoor and outdoor integration