Abstract
Feature-based SLAM (Simultaneous Localization and Mapping) systems fail at pose estimation in weakly textured regions that lack corner points, because too few feature points can be extracted there. To address this problem, the direct visual odometry algorithm LDSO (Direct Sparse Odometry with Loop Closure) is applied to indoor robot visual localization. A dense 3D point cloud map is then generated by point-cloud stitching, combining the keyframe depth maps obtained from depth estimation or a depth camera, the keyframe camera poses, and the original keyframe images. Experimental results show that the robot can localize itself accurately and quickly in complex environments, and that even without global BA (Bundle Adjustment), pose graph optimization significantly reduces the accumulated rotation, translation, and scale drift. The overall performance of the algorithm is comparable to that of feature-based SLAM systems, while it takes less time, offers better real-time performance, and is more robust in regions lacking corner points.
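The dense-mapping step described above (back-projecting each keyframe depth map with its estimated camera pose and stitching the results into one cloud) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the pinhole intrinsics `K`, and the camera-to-world pose convention `T_wc` are all assumptions for the sake of the example.

```python
import numpy as np

def backproject_keyframe(depth, K, T_wc):
    """Back-project one keyframe depth map into world-frame 3D points.

    Assumed conventions (illustrative, not from the paper):
      depth : (H, W) array of depths in meters, 0 marks invalid pixels
      K     : (3, 3) pinhole intrinsics [[fx,0,cx],[0,fy,cy],[0,0,1]]
      T_wc  : (4, 4) camera-to-world pose of the keyframe
    Returns an (N, 3) array of world points for the valid pixels.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel grids, shape (H, W)
    valid = depth > 0
    z = depth[valid]
    # Pixel coordinates -> 3D points in the camera frame (pinhole model)
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_c = np.stack([x, y, z, np.ones_like(z)], axis=1)  # homogeneous coords
    # Rigid transform into the world frame using the keyframe pose
    pts_w = (T_wc @ pts_c.T).T
    return pts_w[:, :3]

def stitch_map(keyframes, K):
    """Concatenate the back-projected points of all keyframes into one cloud.

    keyframes: iterable of (depth_map, T_wc) pairs.
    """
    return np.concatenate(
        [backproject_keyframe(d, K, T) for d, T in keyframes], axis=0
    )
```

In a full pipeline the original keyframe images would additionally supply a color per point, and the cloud would typically be voxel-downsampled; both are omitted here for brevity.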
Authors
LI Kui-lin; WEI Wu; GAO Yong; LI Yan-jie; WANG Dong-liang (College of Automation Science and Engineering, South China University of Technology, Guangzhou 510440, China)
Source
Microelectronics & Computer (Peking University Core Journal), 2020, No. 2, pp. 51-56 (6 pages)
Funding
National Natural Science Foundation of China (61573148)
Science and Technology Program of Guangdong Province (2016A040403012)
Guangdong Provincial Science and Technology Program (2017B090901043).