Visual Odometry Based on Attention and LSTM (Cited by: 1)
Abstract: In recent years, estimating the camera's pose from visual information to localize unmanned ground vehicles has become a research hotspot, and visual odometry is a key component of such systems. Traditional visual odometry requires a complex pipeline of feature extraction, feature matching, and back-end optimization, which makes it difficult to obtain an optimal solution. This paper therefore proposes a visual odometry method that combines an attention mechanism with a long short-term memory (LSTM) network: a convolutional network enhanced by attention extracts motion features from inter-frame changes, and an LSTM then performs temporal modeling. The model takes a sequence of RGB images as input and outputs poses end to end. Experiments on the public KITTI autonomous-driving dataset, with comparisons against other algorithms, show that the method's pose-estimation error is lower than that of other monocular algorithms, and qualitative analysis indicates good generalization ability.
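The abstract describes a three-stage pipeline: an attention-enhanced CNN computes motion features from consecutive frame pairs, an LSTM models the resulting feature sequence, and a regression head emits a 6-DoF pose. The PyTorch sketch below illustrates one plausible reading of that design; the layer widths, the squeeze-and-excitation-style channel attention, and the name AttnConvLSTMVO are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class AttnConvLSTMVO(nn.Module):
    """Sketch of the pipeline outlined in the abstract: an attention-
    enhanced CNN extracts motion features from stacked consecutive
    frames, an LSTM models the feature sequence, and a linear head
    regresses a 6-DoF pose per frame pair. All sizes are hypothetical."""

    def __init__(self, hidden=512):
        super().__init__()
        # Convolutional encoder over a stacked frame pair
        # (two RGB images -> 6 input channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Channel attention (squeeze-and-excitation style): global
        # pooling followed by a bottleneck MLP yields per-channel
        # weights in (0, 1) that re-scale the feature maps.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.Sigmoid(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Temporal modeling over the per-pair feature vectors.
        self.lstm = nn.LSTM(256, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 6)  # 3 translation + 3 rotation

    def forward(self, frames):
        # frames: (B, T, 3, H, W) RGB sequence.
        B, T = frames.shape[:2]
        # Pair frame t with frame t+1 along the channel axis.
        pairs = torch.cat([frames[:, :-1], frames[:, 1:]], dim=2)
        x = pairs.flatten(0, 1)                    # (B*(T-1), 6, H, W)
        f = self.encoder(x)                        # (B*(T-1), 256, h, w)
        w = self.attn(f).unsqueeze(-1).unsqueeze(-1)
        f = f * w                                  # channel re-weighting
        v = self.pool(f).flatten(1)                # (B*(T-1), 256)
        out, _ = self.lstm(v.view(B, T - 1, -1))   # temporal modeling
        return self.head(out)                      # (B, T-1, 6) poses
```

A hypothetical call such as `AttnConvLSTMVO()(torch.rand(1, 5, 3, 64, 64))` would return a (1, 4, 6) tensor of relative poses, one per consecutive frame pair. Stacking the two frames channel-wise before the encoder is one common way to let the convolutions see inter-frame change directly; the paper's exact input arrangement may differ.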
Authors: RUAN Xiaogang; YU Pengcheng; ZHU Xiaoqing (Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing 100124, China)
Source: Journal of Beijing University of Technology (indexed in CAS, CSCD, Peking University Core Journals), 2021, No. 8, pp. 815-823, 924 (10 pages)
Funding: National Natural Science Foundation of China (61773027); Beijing Natural Science Foundation (4202005)
Keywords: deep learning; attention mechanism; sequence modeling; visual odometry; pose estimation; symmetric network
