
Visual Odometry Based on the Direct Method and the Inertial Measurement Unit  (Cited by: 11)
Abstract: The direct method estimates the camera pose entirely by gradient search, so it is prone to falling into local optima. To address this problem, IMU (inertial measurement unit) data are tightly coupled with the image tracking process to provide accurate short-term motion constraints and a good initial gradient direction, and the visual pose tracking result is corrected accordingly, improving the tracking accuracy of the monocular visual odometry. On this basis, a sensor data fusion model is built from the camera and IMU measurements and solved by sliding-window optimization. During marginalization, the state variables to be marginalized and those to be added to the sliding window are selected according to the magnitude of the camera motion between the current frame and the previous keyframe, which keeps sufficiently accurate prior information in the window and thus ensures good fusion performance. Experimental results show that, compared with existing visual odometry algorithms, the proposed algorithm achieves a cumulative orientation error of about 3° and a cumulative translation error of less than 0.4 m on the dataset.
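As a rough illustration of the pipeline described in the abstract, the Python sketch below shows one plausible form of its two key steps: composing the IMU preintegration with the last optimized pose to get the initial guess for direct (photometric) alignment, and choosing which state to marginalize from the magnitude of the camera motion between the current frame and the last keyframe. Every function name, signature, and threshold here is an illustrative assumption, not the authors' actual implementation.

```python
import numpy as np

def imu_predicted_pose(T_prev, delta_T_imu):
    """Compose the last optimized camera pose with the IMU
    preintegration result to obtain the initial pose guess
    for direct (photometric) image alignment."""
    return T_prev @ delta_T_imu  # both 4x4 homogeneous transforms

def motion_magnitude(T_kf, T_cur):
    """Translation norm and rotation angle of the relative motion
    between the last keyframe and the current frame."""
    T_rel = np.linalg.inv(T_kf) @ T_cur
    trans = np.linalg.norm(T_rel[:3, 3])
    # Rotation angle recovered from the trace of the rotation block.
    cos_theta = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return trans, np.arccos(cos_theta)

def select_marginalization(T_last_kf, T_cur,
                           trans_thresh=0.1, rot_thresh=np.deg2rad(5)):
    """If the camera has moved enough since the last keyframe, treat
    the current frame as a new keyframe and marginalize the oldest
    state (folding its information into the prior); otherwise discard
    the second-newest, non-key frame. Thresholds are hypothetical."""
    trans, rot = motion_magnitude(T_last_kf, T_cur)
    if trans > trans_thresh or rot > rot_thresh:
        return "marginalize_oldest_keyframe"
    return "discard_second_newest_frame"
```

Under this motion-gated scheme, large inter-frame motion promotes the current frame to a keyframe and pushes the oldest state into the prior, while small motion drops a redundant non-keyframe instead, which is one way the window could retain the accurate prior information the abstract calls for.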
Authors: LIU Yanjiao; ZHANG Yunzhou; RONG Lei; JIANG Hao; DENG Yi (College of Information Science and Engineering, Northeastern University, Shenyang 110819, China)
Source: Robot (《机器人》), indexed in EI and CSCD (Peking University Core Journal), 2019, Issue 5, pp. 683-689 (7 pages)
Funding: National Key R&D Program of China (2017YFC0805005, 2017YFB1301103); 13th Five-Year Plan Equipment Pre-research Common Technology and Domain Foundation (41412050202); Fundamental Research Funds for the Central Universities (N172608005); Natural Science Foundation of Liaoning Province (20180520040)
Keywords: visual odometry; monocular vision; IMU (inertial measurement unit) fusion; direct method; optimization model

