Abstract
Aiming at the problem that, in dynamic environments, moving objects cause feature-point mismatches during pose estimation and thus degrade the positioning accuracy of visual SLAM, a visual SLAM method based on a convolutional neural network is proposed for dynamic environments. The Mask R-CNN instance-segmentation network is effectively integrated with ORB-SLAM, and the epipolar geometry method is used to cull dynamic feature points. Comparative experiments against the ORB-SLAM2 algorithm on a public dataset show that the proposed method resolves the poor positioning accuracy caused by mismatches of dynamic feature points and improves the positioning accuracy of the optimized SLAM system.
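The epipolar-geometry culling step summarized in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the pixel threshold are assumptions, and the fundamental matrix F between consecutive frames is assumed to have been estimated already (e.g. by RANSAC on tentative matches in the ORB-SLAM front end). A matched feature whose point-to-epipolar-line distance exceeds the threshold violates the static-scene constraint and is treated as dynamic:

```python
import numpy as np

def epipolar_distance(F, pts1, pts2):
    """Distance of each point in frame 2 to its epipolar line l2 = F @ x1.

    F    : (3, 3) fundamental matrix between frame 1 and frame 2
    pts1 : (N, 2) pixel coordinates of matched features in frame 1
    pts2 : (N, 2) corresponding pixel coordinates in frame 2
    """
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])          # homogeneous coordinates, frame 1
    x2 = np.hstack([pts2, ones])          # homogeneous coordinates, frame 2
    lines = x1 @ F.T                      # row i is the epipolar line F @ x1[i]
    num = np.abs(np.sum(lines * x2, axis=1))          # |x2^T F x1|
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)  # line normalization
    return num / den

def cull_dynamic_points(F, pts1, pts2, threshold=1.0):
    """Keep only matches consistent with the epipolar constraint.

    Static points satisfy x2^T F x1 = 0 up to noise; features on a
    moving object violate the constraint and are culled. The 1-pixel
    threshold is an illustrative assumption.
    """
    d = epipolar_distance(F, pts1, pts2)
    keep = d < threshold
    return pts1[keep], pts2[keep], keep
```

For example, under a pure horizontal camera translation (where epipolar lines are horizontal scanlines), a feature that also moves vertically between frames is flagged as dynamic, while features that stay on their scanline are kept.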
Authors
ZHANG Feng; WANG Wei-liang; YUAN Shuai; SUN Ming-zhi (School of Information and Control Engineering, Shenyang Jianzhu University, Shenyang 110168, China)
Source
Journal of Shenyang University of Technology (《沈阳工业大学学报》)
Indexed in: CAS; Peking University Core Journals (北大核心)
2022, No. 6, pp. 688-693 (6 pages)
Funding
National Natural Science Foundation of China (62073227)
Foundation of the Education Department of Liaoning Province (LJKZ0581)
Keywords
dynamic environment
SLAM method
deep learning
feature point matching
epipolar geometry method
convolutional neural network
vision camera
camera positioning