Funding: supported by the China Postdoctoral Science Foundation under Grant 2019M653870XB; the National Natural Science Foundation of Shanxi Province under Grants 2020GY-003 and 2021GY-036; the National Natural Science Foundation of China under Grant 62001340; and the Fundamental Research Funds for the Central Universities, China, under Grants XJS211306 and JC2007.
Abstract: Simultaneous Localization and Mapping (SLAM) is the foundation of autonomous navigation for unmanned systems. Existing SLAM solutions fall mainly into two categories: visual SLAM (vSLAM), which relies on cameras, and lidar SLAM, which relies on lidar. However, pure visual SLAM suffers from shortcomings such as low positioning accuracy, so this paper proposes a visual-inertial information fusion SLAM based on Runge-Kutta-improved pre-integration. First, the Inertial Measurement Unit (IMU) measurements between two adjacent keyframes are pre-integrated at the front end to provide IMU constraints for visual-inertial information fusion. In particular, to improve the accuracy of pre-integration, the Runge-Kutta algorithm replaces Euler integration when computing the pre-integration value at the next time step. Then, the IMU pre-integration value is used as the initial value of the system state at the current frame. We combine the visual reprojection error and the IMU pre-integration error to optimize state variables such as velocity and pose, and to recover the three-dimensional coordinates of map points. Finally, we set a sliding window to jointly optimize the map-point coordinates and the state variables. The experiments are divided into a dataset experiment and a complex indoor-environment experiment. The results show that, compared with pure visual SLAM and existing visual-inertial fusion SLAM systems, our method achieves higher positioning accuracy.
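To make the integration step concrete, the following is a minimal, hypothetical sketch (Python with NumPy; the function and variable names are our own, not the paper's) of propagating the IMU state between two keyframes with the classical fourth-order Runge-Kutta method alongside the Euler baseline it replaces. The abstract does not state which Runge-Kutta order is used; the fourth-order variant is an assumption here. The sketch integrates in the ambient matrix space and re-orthonormalizes the rotation afterward, and it omits the bias and noise-covariance bookkeeping that a full pre-integration pipeline carries.

```python
# Hypothetical sketch (not the authors' code): IMU state propagation between
# two keyframes via Euler vs. classical 4th-order Runge-Kutta integration.
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def deriv(state, gyro, accel, g):
    """Continuous-time kinematics: dR = R*skew(w), dv = R*a + g, dp = v."""
    R, v, p = state
    return (R @ skew(gyro), R @ accel + g, v)

def euler_step(state, gyro, accel, g, dt):
    """Baseline Euler update: x_{k+1} = x_k + dt * f(x_k)."""
    dR, dv, dp = deriv(state, gyro, accel, g)
    R, v, p = state
    return (R + dt * dR, v + dt * dv, p + dt * dp)

def rk4_step(state, gyro, accel, g, dt):
    """Classical RK4 update: weighted average of four slope evaluations.
    The IMU sample (gyro, accel) is held constant over dt (zero-order hold)."""
    def add(s, ds, h):
        return tuple(x + h * dx for x, dx in zip(s, ds))
    k1 = deriv(state, gyro, accel, g)
    k2 = deriv(add(state, k1, dt / 2), gyro, accel, g)
    k3 = deriv(add(state, k2, dt / 2), gyro, accel, g)
    k4 = deriv(add(state, k3, dt), gyro, accel, g)
    return tuple(x + (dt / 6) * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

if __name__ == "__main__":
    g = np.array([0.0, 0.0, -9.81])                # gravity in the world frame
    # Synthetic stand-in for the IMU buffer between two keyframes (assumption):
    # slow yaw at 0.1 rad/s, accelerometer reading gravity, 200 Hz for 1 s.
    imu_buffer = [(np.array([0.0, 0.0, 0.1]),      # gyro [rad/s]
                   np.array([0.0, 0.0, 9.81]),     # accel [m/s^2], body frame
                   0.005)] * 200                   # dt [s]
    state = (np.eye(3), np.zeros(3), np.zeros(3))  # R, v, p at previous keyframe
    for gyro, accel, dt in imu_buffer:
        state = rk4_step(state, gyro, accel, g, dt)
        U, _, Vt = np.linalg.svd(state[0])         # re-project R onto SO(3),
        state = (U @ Vt, state[1], state[2])       # since RK4 works in R^{3x3}
    print("position after 1 s:", state[2])
```

Because each RK4 step evaluates the kinematics at four points across the interval, its local truncation error is O(dt^5) versus O(dt^2) for Euler, which is the accuracy gain the abstract attributes to the Runge-Kutta-improved pre-integration.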
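The joint optimization described above is commonly written as a tightly coupled cost of the following generic form (our notation, given for orientation only, not reproduced from the paper): visual reprojection residuals over the set of landmark observations, plus IMU pre-integration residuals between consecutive keyframes, each weighted by its covariance.

```latex
% Generic tightly-coupled visual-inertial cost (our notation, not the paper's).
% \mathcal{X}: sliding-window poses, velocities, and map points;
% \mathcal{V}: landmark observations; \mathcal{B}: consecutive keyframe pairs;
% tilded \Delta quantities: pre-integrated IMU measurements.
\min_{\mathcal{X}}\;
  \sum_{(i,j)\in\mathcal{V}}
    \left\| r_{\mathrm{proj}}\!\left(z_{ij},\,\mathcal{X}\right) \right\|_{\Sigma_{ij}}^{2}
  \;+\;
  \sum_{k\in\mathcal{B}}
    \left\| r_{\mathrm{IMU}}\!\left(\Delta\tilde{\mathbf{R}}_{k,k+1},\,
      \Delta\tilde{\mathbf{v}}_{k,k+1},\,\Delta\tilde{\mathbf{p}}_{k,k+1},\,
      \mathcal{X}\right) \right\|_{\Sigma_{k}}^{2}
```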