Abstract
To reduce the impact of dynamic objects on the performance of the visual simultaneous localization and mapping (SLAM) system of a mobile robot, a visual SLAM method based on semantic segmentation and optical flow is proposed. First, the convolutional neural network MobileNetV2 is integrated into the semantic segmentation network DeepLabv3+ to extract dynamic-object features, making the feature-extraction network model lightweight. Next, the improved DeepLabv3+ is combined with the Lucas-Kanade optical flow method to detect and remove dynamic objects from the environment, so that feature points are extracted only on static objects for matching and pose estimation. Finally, a time-weighted multi-frame fusion technique is applied to keyframes to repair the background regions occluded by dynamic objects, providing more accurate matching information for relocalization and further improving localization accuracy. Experimental results on the TUM RGB-D dynamic-scene dataset show that, compared with ORB-SLAM2, the proposed visual SLAM algorithm improves localization accuracy by about 97% while preserving real-time performance, markedly reduces the absolute trajectory error and relative pose error, effectively eliminates the influence of dynamic objects on pose estimation, and improves the accuracy and robustness of the SLAM system's pose estimation.
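The time-weighted multi-frame fusion step described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual code: it assumes grayscale frames with per-frame dynamic-object masks, and fills occluded keyframe pixels with an exponentially time-weighted average of the static observations from earlier frames (the function name, the `decay` parameter, and the exponential weighting scheme are illustrative assumptions).

```python
import numpy as np

def fuse_background(frames, masks, decay=0.8):
    """Illustrative time-weighted multi-frame background fusion.

    frames: list of HxW grayscale images, oldest first; frames[-1] is the keyframe.
    masks:  list of HxW boolean arrays, True where a dynamic object occludes the pixel.
    decay:  exponential temporal weight; more recent frames contribute more.
    Returns the keyframe with occluded pixels filled from static observations.
    """
    key = frames[-1].astype(np.float64)
    num = np.zeros_like(key)   # weighted sum of static observations
    den = np.zeros_like(key)   # sum of weights of static observations
    n = len(frames)
    for i, (frame, mask) in enumerate(zip(frames, masks)):
        weight = decay ** (n - 1 - i)      # newer frames weigh more
        static = ~mask                     # only static pixels vote
        num += weight * static * frame
        den += weight * static
    # Weighted average where at least one static observation exists,
    # otherwise fall back to the keyframe value.
    filled = np.divide(num, den, out=key.copy(), where=den > 0)
    out = key.copy()
    occluded = masks[-1]
    out[occluded] = filled[occluded]       # repair only occluded keyframe pixels
    return out
```

In this sketch, a pixel that was never observed as static in any frame simply keeps its keyframe value; a fuller system could instead flag it as unknown and exclude it from relocalization matching.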
Authors
GUO Yiran, LI Yiming, HUANG Min (Mechanical Electrical Engineering School, Beijing Information Science & Technology University, Beijing 100192, China; MOE Key Laboratory of Modern Measurement & Control Technology, Beijing Information Science & Technology University, Beijing 100192, China)
Source
Journal of Beijing Information Science and Technology University (Natural Science Edition)
2023, No. 4, pp. 9-18 (10 pages)
Funding
Scientific Research Program of the Beijing Municipal Education Commission, General Science and Technology Project (KM202011232011).
Keywords
simultaneous localization and mapping (SLAM)
dynamic environments
pose estimation
DeepLabv3+
optical flow method