
Visual Odometry Based on Deep Learning in Dynamic Indoor Environments (Cited by: 3)
Abstract: Visual odometry in traditional Visual Simultaneous Localization and Mapping (VSLAM) systems is mostly built on the assumption that the external environment is static; when disturbed by dynamic objects, such systems readily produce inaccurate camera pose estimates and unstable tracking. To address this problem, a visual odometry method for dynamic indoor environments is presented. Building on the visual odometry of ORB-SLAM2, the method integrates the YOLOv4 object detection network: detection runs while image feature points are extracted, yielding semantic information from which the extent of dynamic objects in the image is determined. In addition, a dynamic feature point removal strategy is proposed. First, feature points on detected dynamic objects are removed according to the detection results; then epipolar geometry constraints and optical flow constraints are applied in turn to filter out any residual dynamic points; finally, the remaining static points are used to solve accurately for the camera pose. Experiments on the TUM dataset show that, compared with ORB-SLAM2, the improved system reduces absolute trajectory error and relative pose error by more than 90% on average in highly dynamic scenes, greatly improving localization accuracy, while the tracking thread takes about 85 ms per frame on average, allowing real-time operation.
Authors: LI Bo; DUAN Zhong-xing (College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China)
Source: Journal of Chinese Computer Systems (《小型微型计算机系统》; CSCD, Peking University Core Journal), 2023, No. 1, pp. 49-55 (7 pages)
Funding: Supported by the National Natural Science Foundation of China (51678470).
Keywords: VSLAM; visual odometry; dynamic indoor environment; YOLOv4; dynamic feature point removal
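
The pipeline outlined in the abstract (mask feature points inside YOLOv4 detections of dynamic objects, then apply an epipolar-geometry check and an optical-flow check to the survivors, then estimate pose from the static remainder) can be illustrated compactly. The following is a minimal Python/OpenCV sketch, not the authors' implementation (ORB-SLAM2 itself is C++): function names, thresholds, and the median-flow heuristic in the third stage are illustrative assumptions, since the paper's abstract does not specify the exact form of its flow constraint.

```python
import cv2
import numpy as np

def remove_points_in_boxes(keypoints, dynamic_boxes):
    """Stage 1: discard ORB keypoints falling inside any bounding box
    detected for a dynamic class (e.g. YOLOv4 'person' detections)."""
    kept = []
    for kp in keypoints:
        x, y = kp.pt
        if not any(x1 <= x <= x2 and y1 <= y <= y2
                   for (x1, y1, x2, y2) in dynamic_boxes):
            kept.append(kp)
    return kept

def epipolar_filter(pts_prev, pts_curr, dist_thresh=1.0):
    """Stage 2: a static match (x, x') satisfies x'^T F x = 0, so drop
    matches whose distance to the epipolar line exceeds dist_thresh px."""
    pts_prev = np.float32(pts_prev)
    pts_curr = np.float32(pts_curr)
    # RANSAC keeps the fundamental matrix dominated by the static background.
    F, _ = cv2.findFundamentalMat(pts_prev, pts_curr,
                                  cv2.FM_RANSAC, 1.0, 0.99)
    lines = cv2.computeCorrespondEpilines(
        pts_prev.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    # Epilines come back normalized (a^2 + b^2 = 1), so this is a distance.
    dist = np.abs(lines[:, 0] * pts_curr[:, 0]
                  + lines[:, 1] * pts_curr[:, 1]
                  + lines[:, 2])
    keep = dist < dist_thresh
    return pts_prev[keep], pts_curr[keep]

def flow_filter(img_prev, img_curr, pts_prev, mad_factor=3.0):
    """Stage 3 (assumed form): track points with pyramidal LK optical flow
    and drop those deviating strongly from the median (camera) motion."""
    pts_prev = np.float32(pts_prev).reshape(-1, 1, 2)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(
        img_prev, img_curr, pts_prev, None)
    status = status.ravel().astype(bool)
    flow = (pts_curr - pts_prev).reshape(-1, 2)
    med_flow = np.median(flow[status], axis=0)
    dev = np.linalg.norm(flow - med_flow, axis=1)
    med_dev = np.median(dev[status])
    mad = np.median(np.abs(dev[status] - med_dev)) + 1e-6
    keep = status & (dev < med_dev + mad_factor * mad)
    return pts_prev.reshape(-1, 2)[keep], pts_curr.reshape(-1, 2)[keep]
```

In a tracking loop, the three filters would run in this order on each frame's ORB matches, and the surviving static correspondences would feed the usual pose solver (in ORB-SLAM2, motion-only bundle adjustment; in a standalone sketch, something like cv2.solvePnPRansac against the local map).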
