RGB-D Visual Odometry in Dynamic Environments Using Line Features

Cited by: 16
Abstract  Most RGB-D SLAM (simultaneous localization and mapping) methods assume that the environment is static. However, dynamic objects often appear in real-world environments and degrade SLAM performance. To address this problem, a line-feature-based RGB-D (RGB-depth) visual odometry method is proposed. It computes a static weight for each line feature to filter out dynamic line features, and estimates the camera pose from the remaining line features. The proposed method not only reduces the influence of dynamic objects, but also avoids the tracking failures caused by a scarcity of point features. Experiments on a public dataset show that, compared with a state-of-the-art method based on ORB (oriented FAST and rotated BRIEF) point features, the proposed method reduces the tracking error in dynamic environments by about 30%, improving the accuracy and robustness of visual odometry in dynamic environments.
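The abstract describes a two-step idea: assign each line feature a static weight from its geometric consistency across frames, then discard low-weight (likely dynamic) lines before pose estimation. The record does not give the paper's actual weighting formula, so the Gaussian kernel, `sigma`, and `threshold` below are illustrative assumptions, a minimal sketch of the filtering step only:

```python
import math

def static_weights(residuals, sigma=1.0):
    """Map each line's frame-to-frame geometric residual to a (0, 1]
    static weight: a small residual suggests a static line (weight near 1),
    a large residual suggests a dynamic line (weight near 0).
    The Gaussian kernel here is an assumed, illustrative choice."""
    return [math.exp(-(r / sigma) ** 2) for r in residuals]

def filter_dynamic_lines(lines, residuals, sigma=1.0, threshold=0.5):
    """Keep only the lines whose static weight exceeds the threshold;
    the surviving lines would then feed the pose estimation step."""
    weights = static_weights(residuals, sigma)
    return [ln for ln, w in zip(lines, weights) if w > threshold]

# Hypothetical data: line "L2" moves strongly between frames (residual 3.0),
# so it is classified as dynamic and removed.
lines = ["L0", "L1", "L2"]
residuals = [0.1, 0.3, 3.0]
print(filter_dynamic_lines(lines, residuals))  # -> ['L0', 'L1']
```

In the paper's pipeline the remaining (weighted) lines would enter a camera-pose optimization; that step is omitted here since the record gives no details of the estimator.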
Authors  ZHANG Huijuan (张慧娟); FANG Zaojun (方灶军); YANG Guilin (杨桂林) (University of Chinese Academy of Sciences, Beijing 100049, China; Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201, China; Zhejiang Key Laboratory of Robotics and Intelligent Manufacturing Equipment Technology, Ningbo 315201, China)
Source  Robot (《机器人》), indexed in EI, CSCD, and the Peking University Core list, 2019, No. 1, pp. 75-82 (8 pages)
Funding  NSFC-Zhejiang Joint Fund for the Integration of Informatization and Industrialization (U1509202); National Key R&D Program of China (2017YFB1300400); Key R&D Program of Zhejiang Province (2018C01086)
Keywords  simultaneous localization and mapping (SLAM); visual odometry; line feature; dynamic environment; RGB-depth (RGB-D)
References: 5

Secondary references: 66

  • 1 Ruan Qiuqi. Digital Image Processing [M]. Beijing: Publishing House of Electronics Industry, 2007.
  • 2 Endres F, Hess J, Engelhard N, et al. An evaluation of the RGB-D SLAM system[J]. Perception, 2012, 3(c): 1691-1696.
  • 3 Konolige K, Mihelich P. Technical description of Kinect calibration[N/OL]. [2011-11-03]. http://www.ros.org/wiki/kinect_calibration/technical.
  • 4 Henry P, Krainin M, Herbst E, et al. RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments[J]. International Journal of Robotics Research, 2012, 31(5): 647-663.
  • 5 Besl P J, McKay N D. A method for registration of 3-D shapes[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 239-256.
  • 6 Dryanovski I, Valenti R G, Xiao J Z. Fast visual odometry and mapping from RGB-D data[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA: IEEE, 2013: 2305-2310.
  • 7 Fischler M A, Bolles R C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM, 1981, 24(6): 381-395.
  • 8 Kümmerle R, Grisetti G, Strasdat H, et al. g2o: A general framework for graph optimization[C]//IEEE International Conference on Robotics and Automation. Piscataway, USA: IEEE, 2011: 3607-3613.
  • 9 Rosten E, Porter R, Drummond T. Faster and better: A machine learning approach to corner detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(1): 105-119.
  • 10 Calonder M, Lepetit V, Strecha C, et al. BRIEF: Binary robust independent elementary features[C]//European Conference on Computer Vision. Berlin, Germany: Springer, 2010: 778-792.

Co-cited documents: 83

Documents co-cited with this article: 52

Citing documents: 16

Secondary citing documents: 62
