Abstract
To address three problems in VSLAM with RGB-D cameras — coupling errors in RGB-D data, the mis-extraction of edge points by existing feature-extraction methods, and the poor tracking stability of the constant-velocity motion model — the CEP-SLAM algorithm is proposed on top of the ORB-SLAM2 framework. The algorithm uses a constant-acceleration motion model to set the initial pose of the frame to be tracked; the optimized pose is then used to compute the inter-frame visual odometry and update the constant-acceleration motion model. Combining this model with the acquisition time difference between the RGB image and the depth image, the pose offset is estimated; an epipolar geometry constraint is constructed from this offset, binary search is used to locate the pixel in the depth image corresponding to each feature point, and the feature point's depth is adjusted accordingly, which alleviates the impact of RGB-D data coupling errors on VSLAM. In addition, a keyframe edge-point culling algorithm based on a joint method is proposed: the neighborhood information of feature points in the depth map is used to identify and remove bad edge points in keyframes awaiting insertion. Experiments with the proposed CEP-SLAM algorithm on the TUM public dataset show that it effectively culls bad edge points and, compared with classical algorithms, achieves better robustness, better tracking stability, and higher localization accuracy.
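The abstract's first step — extrapolating an initial pose for the frame to be tracked with a constant-acceleration motion model — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes poses are 4x4 homogeneous transforms, takes the two most recent inter-frame motions as "velocity", treats their change as "acceleration", and composes both onto the current pose (the paper's exact formulation may differ).

```python
import numpy as np

def predict_pose(T_prev2, T_prev1, T_curr):
    """Constant-acceleration pose prediction (illustrative sketch).

    T_prev2, T_prev1, T_curr are 4x4 homogeneous poses of the three
    most recent frames, oldest first.  Returns an extrapolated initial
    pose for the next frame to be tracked.
    """
    V1 = T_prev1 @ np.linalg.inv(T_prev2)  # older inter-frame motion
    V2 = T_curr @ np.linalg.inv(T_prev1)   # latest inter-frame motion
    A = V2 @ np.linalg.inv(V1)             # change of motion ("acceleration")
    return A @ V2 @ T_curr                 # apply acceleration, then velocity
```

Under constant velocity the acceleration term reduces to the identity, so the sketch degrades gracefully to the constant-velocity model that ORB-SLAM2 uses.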
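The edge-point culling step relies on the idea that a feature point lying on a depth discontinuity has an unreliable depth. A hypothetical version of such a check, using only a feature point's depth-map neighborhood as the abstract describes (the function name, window size, and jump threshold are assumptions; the paper's joint criterion may differ), could look like:

```python
import numpy as np

def is_bad_edge_point(depth, u, v, win=2, jump_thresh=0.3):
    """Hypothetical edge-point check on a depth map (metres).

    Examines a (2*win+1)-pixel square window around pixel (u, v);
    if the valid depths in the window span more than jump_thresh,
    the point likely sits on a depth discontinuity and is culled.
    """
    h, w = depth.shape
    u0, u1 = max(0, u - win), min(w, u + win + 1)
    v0, v1 = max(0, v - win), min(h, v + win + 1)
    patch = depth[v0:v1, u0:u1]
    valid = patch[patch > 0]          # zero encodes "no depth measured"
    if valid.size == 0:
        return True                   # no usable depth at all: cull
    return float(valid.max() - valid.min()) > jump_thresh
```

Points on smooth surfaces pass the check; points straddling an object boundary, where foreground and background depths mix in the window, are rejected before the keyframe is inserted.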
Authors
LI Linqi, CHANG Min, HOU Xiaoyu, JIA Caiqin, PANG Min
(School of Computer Science and Technology, North University of China, Taiyuan 030051, China; Shanxi Key Laboratory of Machine Vision and Virtual Reality, Taiyuan 030051, China; Shanxi Province's Vision Information Processing and Intelligent Robot Engineering Research Center, Taiyuan 030051, China; Jinxi Industrial Group Co., Ltd., Taiyuan 030027, China)
Source
Journal of North University of China (Natural Science Edition)
CAS
2024, No. 5, pp. 614-627 (14 pages)
Funding
National Natural Science Foundation of China (62272426)
Shanxi Province Science and Technology Achievement Transformation Guidance Special Project (202104021301055)
Natural Science Foundation of Shanxi Province (202303021211153)