Abstract
Visual odometry is widely used in applications such as intelligent robots and self-driving cars. However, traditional visual odometry algorithms based on pinhole cameras with a limited field of view (FOV) are fragile to moving objects in the environment and to fast camera rotation, resulting in insufficient robustness and accuracy in practical use. To address this problem, this paper proposes a panoramic annular semantic visual odometry. By applying a panoramic annular imaging system with an ultra-wide FOV to visual odometry and coupling the semantic information provided by deep-learning-based panoramic annular semantic segmentation into each module of the algorithm, the influence of moving objects and fast rotation is reduced, and the performance of the algorithm in these two challenging scenarios is improved. Experimental results show that, compared with traditional visual odometry systems, the proposed algorithm achieves more accurate and robust pose estimation in realistic environments.
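The abstract describes coupling per-pixel semantic labels into the odometry pipeline to suppress moving objects. The following is only a minimal illustrative sketch of that general idea, not the authors' implementation: features detected on an unwrapped panoramic frame that fall on potentially dynamic classes (e.g., pedestrians, vehicles) are discarded before pose estimation. The class IDs, image sizes, and data stand-ins below are assumptions made for the sketch.

```python
import numpy as np
import cv2

# Label IDs assumed (hypothetically) to mark potentially dynamic classes,
# e.g., person, rider, car in a Cityscapes-like label map.
DYNAMIC_CLASS_IDS = frozenset({11, 12, 13})

def filter_dynamic_keypoints(keypoints, semantic_mask, dynamic_ids=DYNAMIC_CLASS_IDS):
    """Keep only keypoints whose pixels are not labeled as dynamic classes."""
    h, w = semantic_mask.shape
    kept = []
    for kp in keypoints:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if 0 <= u < w and 0 <= v < h and semantic_mask[v, u] not in dynamic_ids:
            kept.append(kp)
    return kept

# Stand-ins for an unwrapped panoramic frame and its per-pixel semantic labels;
# in a real system these would come from the panoramic annular camera and
# the semantic segmentation network.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(400, 1600), dtype=np.uint8)
semantic_mask = rng.integers(0, 20, size=frame.shape, dtype=np.int32)

# Detect ORB features, drop those on assumed-dynamic pixels, then compute
# descriptors only for the remaining points used in tracking / pose estimation.
orb = cv2.ORB_create(nfeatures=2000)
keypoints = orb.detect(frame, None)
static_keypoints = filter_dynamic_keypoints(keypoints, semantic_mask)
static_keypoints, descriptors = orb.compute(frame, static_keypoints)
print(f"kept {len(static_keypoints)} of {len(keypoints)} keypoints")
```

In this sketch the semantic mask acts as a simple geometric filter on the feature set; the paper goes further by feeding the semantic information into each module of the odometry algorithm, which this example does not attempt to reproduce.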
Authors
Chen Hao
Yang Kailun
Hu Weijian
Bai Jian
Wang Kaiwei
(National Engineering Research Center of Optical Instrumentation, Zhejiang University, Hangzhou, Zhejiang 310058, China; Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe 76131, Germany)
Source
Acta Optica Sinica (《光学学报》)
EI
CAS
CSCD
Peking University Core Journals
2021, Issue 22, pp. 142-152 (11 pages)
Funding
Zhejiang University Sunny Intelligent Optics Research Center Project (2020-03).
Keywords
machine vision
visual odometry
panoramic annular lens
semantic segmentation
pose estimation