Abstract
As robots increasingly operate in unstructured environments, higher requirements are placed on visual SLAM (simultaneous localization and mapping) technology: the system must not only estimate the camera pose accurately in static environments, but also adapt to dynamic environments. To address these issues, a visual SLAM algorithm combining optical flow and instance segmentation is proposed. First, feature extraction and instance segmentation are performed on the current frame, and sparse optical flow is computed for the extracted features. Then, each object is assigned an initial motion probability according to the instance segmentation results and prior semantic information, and the optical-flow displacement increments of non-dynamic, dynamic, and potentially dynamic objects are calculated. Next, the optical-flow information and the instance segmentation masks are used for a joint dynamic-consistency check, and the feature points extracted from moving objects are removed. Finally, the robot pose is estimated from the static features. The proposed algorithm is tested on the TUM dataset. The results show that, compared with ORB-SLAM2, the absolute trajectory error is reduced by 52.04% in low-dynamic environments and by 98.11% in high-dynamic environments. The algorithm is also evaluated in a real environment; the experimental results show that it determines the motion state of objects accurately, which helps improve its localization accuracy.
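The core step described in the abstract — combining per-feature sparse optical flow with instance-segmentation masks and semantic motion priors to reject features on moving objects — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the class names, prior values, the displacement threshold, and the comparison against mean background flow are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical semantic motion priors (illustrative values, not from the paper):
# people are a priori dynamic, chairs potentially dynamic, monitors static.
MOTION_PRIOR = {"person": 0.9, "chair": 0.5, "monitor": 0.1}

def filter_dynamic_features(points, flow, instance_ids, labels, thresh=2.0):
    """Keep only feature points whose instance is judged static.

    points       : (N, 2) feature coordinates in the current frame
    flow         : (N, 2) sparse optical-flow displacement per feature
    instance_ids : (N,)   instance-mask id per feature (-1 = background)
    labels       : dict mapping instance id -> semantic class name
    thresh       : flow displacement (pixels) above which an instance moves
    """
    disp = np.linalg.norm(flow, axis=1)           # per-feature flow magnitude
    bg = instance_ids < 0
    bg_disp = disp[bg].mean() if bg.any() else 0.0  # background (camera) flow
    keep = np.ones(len(points), dtype=bool)
    for iid in set(instance_ids.tolist()):
        if iid < 0:
            continue                               # background features are kept
        mask = instance_ids == iid
        prior = MOTION_PRIOR.get(labels.get(iid, ""), 0.5)
        # An instance is treated as dynamic if its semantic prior is high, or
        # its mean flow displacement deviates from the background's by more
        # than the threshold (the joint dynamic-consistency check).
        if prior > 0.8 or abs(disp[mask].mean() - bg_disp) > thresh:
            keep[mask] = False                     # drop features on moving objects
    return points[keep], keep
```

The remaining static features would then be passed to the pose-estimation back end (e.g. the ORB-SLAM2 tracking thread) unchanged.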
Authors
LIU Qiang, YUAN Jie, KUANG Benfa (School of Electrical Engineering, Xinjiang University, Urumqi 830017, China)
Source
《现代电子技术》 (Modern Electronics Technique), 2023, Issue 19, pp. 34-40 (7 pages)
Funding
National Natural Science Foundation of China (62263031)
Natural Science Foundation of Xinjiang Uygur Autonomous Region (2022D01C53)