Abstract
To improve the localization accuracy and robustness of visual SLAM (Simultaneous Localization and Mapping) systems in dynamic scenes, a visual SLAM method based on optical flow and instance segmentation is proposed. Exploiting the inconsistency between the optical-flow directions of dynamic objects and the static background, a dynamic-region mask detection algorithm with high real-time performance is proposed, so that feature points falling inside the dynamic-region mask are removed in real time within the original tracking thread of ORB-SLAM2. The optical flow induced by camera motion is removed using the available depth map and the pose estimate from the tracking thread, and the flow magnitudes produced by the dynamic objects' own motion are then clustered, which yields high-precision dynamic-region mask detection; dynamic landmarks in the local mapping thread are further culled by combining epipolar-geometry constraints. Test results on the TUM and KITTI datasets show that, in highly dynamic scenes, the proposed algorithm improves localization accuracy over ORB-SLAM2, Detect-SLAM, and DS-SLAM by 97%, 64%, and 44% on average, respectively. Compared with DynaSLAM, it improves localization accuracy by 20% on average in half of the highly dynamic scenes, which verifies that the proposed algorithm improves the localization accuracy and robustness of the system in highly dynamic scenes.
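The two geometric steps summarized above lend themselves to a short illustration. The following is a minimal Python/OpenCV sketch, not the paper's implementation (which is integrated into the C++ threads of ORB-SLAM2 and additionally uses instance segmentation): it predicts the ego-motion flow from the depth map and the tracking-thread pose, clusters the residual flow magnitude into a dynamic-region mask, and flags landmarks by their point-to-epipolar-line distance. All function names, the choice of Farneback dense flow, and the 2-way k-means clustering are assumptions made for illustration.

```python
import cv2
import numpy as np

def ego_motion_flow(depth, K, R, t):
    """Predict the optical flow induced purely by camera motion.

    Back-projects every pixel with its depth (pinhole model), transforms
    the 3-D point by the relative pose (R, t) from the tracking thread,
    and reprojects it; the pixel displacement is the ego-motion flow.
    Assumes valid positive depth; invalid pixels should be masked out
    in practice.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    pts2 = pts @ R.T + t                      # points in the next frame
    u2 = K[0, 0] * pts2[:, 0] / pts2[:, 2] + K[0, 2]
    v2 = K[1, 1] * pts2[:, 1] / pts2[:, 2] + K[1, 2]
    flow = np.stack([u2 - u.ravel(), v2 - v.ravel()], axis=-1)
    return flow.reshape(h, w, 2)

def dynamic_region_mask(prev_gray, gray, depth, K, R, t):
    """Cluster the residual flow magnitude into static/dynamic regions.

    Measured dense flow minus the predicted ego-motion flow leaves the
    flow caused by object motion; a 2-way k-means on its magnitude
    separates the dynamic region (the cluster with the larger center).
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    residual = flow - ego_motion_flow(depth, K, R, t)
    mag = np.linalg.norm(residual, axis=-1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(mag.reshape(-1, 1), 2, None, criteria,
                                    3, cv2.KMEANS_PP_CENTERS)
    return labels.reshape(mag.shape) == int(np.argmax(centers))

def epipolar_outliers(pts1, pts2, F, thresh=1.0):
    """Flag matches whose point-to-epipolar-line distance exceeds thresh.

    For a static landmark x1 <-> x2 and fundamental matrix F, x2 must lie
    on the epipolar line l = F @ x1; a large distance marks the landmark
    as dynamic, so it can be culled from local mapping.
    """
    p1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    p2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines = p1 @ F.T                          # epipolar lines in image 2
    dist = np.abs(np.sum(lines * p2, axis=1)) / np.hypot(lines[:, 0],
                                                         lines[:, 1])
    return dist > thresh
```

Per the abstract, the fast direction-based test serves the tracking thread, while the depth-compensated magnitude clustering provides the higher-precision mask; how these combine with instance segmentation is detailed in the full paper.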
Authors
Xu Chen; Zhou Yijun; Luo Chen (School of Mechanical Engineering, Southeast University, Nanjing 211189, Jiangsu, China)
Source
Acta Optica Sinica (《光学学报》)
EI
CAS
CSCD
Peking University Core Journal (北大核心)
2022, No. 14, pp. 139-151 (13 pages)
Funding
National Natural Science Foundation of China (51975119)
"Six Talent Peaks" High-Level Talent Project of Jiangsu Province (GDZB-002)
Keywords
machine vision
visual odometry
dynamic scene
optical flow
moving object detection
instance segmentation