Abstract
Most existing monocular visual SLAM solutions improve accuracy by adding extra sensors, which leaves the potential of the monocular camera itself underexploited. This paper proposes a visual SLAM system based on ORB-SLAM3 that aims to make the most of a single monocular camera: a depth prediction network is added on top of the monocular camera to simulate a depth camera, feature points are extracted by fusing CNN and ORB methods, and the predicted depth map is used to filter features, with the goal of improving monocular pose estimation accuracy in driving scenes. In addition, to avoid interference from dynamic objects in the SLAM system, an image instance segmentation network is introduced.
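As a rough illustration of the feature-filtering step described in the abstract, the sketch below is not the authors' implementation: the function name, the depth threshold, the use of OpenCV's ORB detector, and the assumption of a per-pixel dynamic-object mask (e.g. from a Mask R-CNN-style instance segmentation network) are all assumptions made for illustration. It shows how ORB keypoints might be discarded when they fall on dynamic objects or have implausible predicted depth before being handed to the SLAM front end.

import cv2
import numpy as np

def filter_keypoints(gray, pred_depth, dynamic_mask,
                     max_depth=80.0, n_features=2000):
    """Detect ORB keypoints and keep only those that are
    (a) not on a dynamic object and (b) have a plausible predicted depth.

    gray         : HxW uint8 grayscale image
    pred_depth   : HxW float32 depth map from the prediction network (meters)
    dynamic_mask : HxW bool array, True on pixels of dynamic instances
    """
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return [], None

    kept_kp, kept_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if dynamic_mask[v, u]:
            continue  # drop points on dynamic objects (cars, pedestrians, ...)
        d = pred_depth[v, u]
        if not np.isfinite(d) or d <= 0 or d > max_depth:
            continue  # drop points whose predicted depth is implausible
        kept_kp.append(kp)
        kept_desc.append(desc)

    return kept_kp, (np.asarray(kept_desc) if kept_desc else None)

The surviving keypoints, descriptors, and predicted depths could then be fed to an ORB-SLAM3-style front end as pseudo-RGB-D input.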
Authors
Bai Zongwen; Liu Xiangzhen (School of Physics and Electronic Information, Yan'an University, Yan'an 716000, China)
Source
Journal of Yan'an University (Natural Science Edition), 2023, No. 1, pp. 1-6.
Funding
National Natural Science Foundation of China (62266045).
Keywords
visual SLAM
monocular
depth prediction
instance segmentation