Abstract: To meet the visual navigation and localization needs of mobile robots, an improved visual odometry scheme based on a stereo camera is proposed. To address feature-information redundancy, the ORB (oriented FAST and rotated BRIEF) algorithm is improved by introducing a multi-threshold FAST image-partitioning strategy; to reduce mismatches as far as possible, fast approximate nearest-neighbor matching and the random sample consensus (RANSAC) algorithm are applied. Whereas conventional stereo matching relies mainly on grayscale features, an improved binocular disparity algorithm that instead matches feature descriptors is adopted to recover the depth of feature points. To obtain more accurate pose coordinates, a least-squares problem is constructed to supply an initial estimate, and camera motion is then estimated from the three-dimensional coordinates of the matched feature points. Experimental results on public datasets show that the proposed stereo visual odometry achieves comparatively good accuracy and high real-time performance.
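The abstract itself gives no formulas, but the depth-recovery step it describes follows the standard pinhole stereo relation Z = f·B/d. Below is a minimal sketch of that step, assuming a rectified stereo pair with calibration parameters fx, fy, cx, cy and baseline B; all function names are illustrative, not from the paper:

```python
import numpy as np

def depth_from_disparity(disparity, fx, baseline):
    """Pinhole stereo depth: Z = fx * B / d for each matched feature.
    Non-positive disparities are mapped to infinite depth (invalid)."""
    d = np.asarray(disparity, dtype=float)
    z = np.full_like(d, np.inf)
    valid = d > 0
    z[valid] = fx * baseline / d[valid]
    return z

def backproject(u, v, z, fx, fy, cx, cy):
    """Recover 3-D camera-frame coordinates of pixels (u, v) at depths z.
    These 3-D points feed the least-squares pose estimation described above."""
    u, v, z = map(np.asarray, (u, v, z))
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])
```

For example, with fx = 700 px and a 0.12 m baseline, a 7 px disparity gives a depth of 12 m; the resulting 3-D points would then serve as input to the motion-estimation step.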
Funding: National Key R&D Program of China under Grant No. 2018YFB2100601; National Natural Science Foundation of China under Grant Nos. 61872024 and 61702482.
Abstract: Visual odometry, which aims to estimate relative camera motion between sequential video frames, has been widely used in the fields of augmented reality, virtual reality, and autonomous driving. However, it is still quite challenging for state-of-the-art approaches to handle low-texture scenes. In this paper, we propose a robust and efficient visual odometry algorithm that directly utilizes edge pixels to track camera pose. In contrast to direct methods, we choose reprojection error to construct the optimization energy, which can effectively cope with illumination changes. The distance transform map built upon edge detection for each frame is used to improve tracking efficiency. A novel weighted edge alignment method together with sliding window optimization is proposed to further improve the accuracy. Experiments on public datasets show that the method is comparable to state-of-the-art methods in terms of tracking accuracy, while being faster and more robust.
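The distance transform map mentioned above assigns each pixel its distance to the nearest edge pixel, so the alignment cost of a projected edge point is a simple lookup. The sketch below illustrates that idea only; it is not the paper's implementation (production code would use an O(N) transform such as OpenCV's cv2.distanceTransform and bilinear interpolation rather than this brute-force, nearest-pixel version):

```python
import numpy as np

def distance_transform(edge_mask):
    """Brute-force Euclidean distance transform over a binary edge mask.
    O(pixels * edges): suitable only for small illustrative images."""
    ys, xs = np.nonzero(edge_mask)
    edges = np.stack([ys, xs], axis=1).astype(float)          # (E, 2)
    h, w = edge_mask.shape
    grid = np.stack(
        np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1
    ).astype(float)                                           # (h, w, 2)
    diff = grid[:, :, None, :] - edges[None, None, :, :]      # (h, w, E, 2)
    return np.sqrt((diff ** 2).sum(-1)).min(-1)               # (h, w)

def edge_alignment_cost(dt_map, projected_uv):
    """Tracking cost: sum of distance-map values at projected edge points
    (nearest-pixel lookup here; a real tracker would interpolate)."""
    u = np.clip(np.round(projected_uv[:, 0]).astype(int), 0, dt_map.shape[1] - 1)
    v = np.clip(np.round(projected_uv[:, 1]).astype(int), 0, dt_map.shape[0] - 1)
    return float(dt_map[v, u].sum())
```

Minimizing this cost over the camera pose (with the per-point weighting and sliding-window optimization the abstract mentions) is what drives the tracking: points projected exactly onto edges contribute zero cost, and the cost grows with misalignment.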