Estimating the global position of a road vehicle without using GPS is a challenge that many scientists aim to solve in the near future. Normally, inertial and odometry sensors are used to complement GPS measurements in an attempt to maintain vehicle odometry during GPS outages. Nonetheless, recent experiments have demonstrated that computer vision can also serve as a valuable source of what can be termed visual odometry. For this purpose, vehicle motion can be estimated using a non-linear, photogrammetric approach based on RAndom SAmple Consensus (RANSAC). The results show that the detection and selection of relevant feature points is a crucial factor in the overall performance of the visual odometry algorithm. The key issues for further improvement are discussed in this letter.
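As an illustration of the RANSAC principle this letter relies on, the following is a minimal, self-contained sketch, not the authors' photogrammetric pipeline: a hypothesise-and-verify loop fitting a 2-D line to data contaminated with gross outliers. Function and parameter names are illustrative.

```python
import numpy as np

def ransac_fit(points, n_iters=200, threshold=0.05, rng=None):
    """Robustly fit a 2-D line y = a*x + b with RANSAC.

    Illustrates the hypothesise-and-verify loop that, in a far more
    elaborate form, underlies RANSAC-based motion estimation.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # 1. Draw a minimal sample: two points define a line hypothesis.
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-12:
            continue  # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # 2. Verify: count points consistent with the hypothesis.
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Synthetic data: 70 points on y = 2x + 0.5, 30 gross outliers.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 100)
y = 2.0 * x + 0.5
y[:30] += rng.uniform(-5.0, 5.0, 30)          # corrupt 30 points
(a, b), inliers = ransac_fit(np.column_stack([x, y]), rng=rng)
```

Because the consensus set is scored by inlier count rather than total residual, a minority of arbitrarily bad matches cannot drag the estimate, which is exactly why RANSAC suits feature-based motion estimation.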
A stereo visual odometry method based on robust feature points is proposed for high-precision autonomous robot localization. Several local invariant feature algorithms are compared in terms of repeatability, accuracy, and efficiency, and the robust AKAZE (Accelerated-KAZE) algorithm is adopted to extract feature points. A stable feature-matching framework and an improved Random Sample Consensus (RANSAC) algorithm are proposed to remove outliers, so that the visual odometry method can be applied in dynamic environments. A stepwise ego-motion estimation based on geometric constraints provides accurate information about the camera motion. The proposed method is evaluated on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) dataset and on a stereo dataset collected in a complex campus environment. Compared with classical stereo visual odometry methods, it better suppresses error accumulation, and its motion estimates meet the requirements of real-time, high-precision localization systems.
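The outlier-removal stage described above can be sketched as follows. This is a hedged illustration, not the paper's improved RANSAC: it assumes putative matches related by a planar rigid motion and solves each minimal sample with the standard Kabsch method; the names `rigid_from_pairs` and `ransac_rigid` are hypothetical. (In practice the features themselves would come from a detector such as OpenCV's `cv2.AKAZE_create()`.)

```python
import numpy as np

def rigid_from_pairs(P, Q):
    """Least-squares planar rigid transform (R, t) with Q ~ P @ R.T + t (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # enforce a proper rotation
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, n_iters=300, threshold=0.05, rng=None):
    """RANSAC over putative matches P[i] <-> Q[i]; returns (R, t, inlier mask)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(P), size=2, replace=False)   # minimal sample
        R, t = rigid_from_pairs(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = err < threshold
        if inliers.sum() > best.sum():
            best = inliers
    R, t = rigid_from_pairs(P[best], Q[best])   # refit on all inliers
    return R, t, best

# Synthetic matches: 10-degree rotation plus translation, 20 wrong matches.
rng = np.random.default_rng(2)
theta = np.deg2rad(10.0)
c, s = np.cos(theta), np.sin(theta)
R_true = np.array([[c, -s], [s, c]])
t_true = np.array([0.3, -0.1])
P = rng.uniform(-1.0, 1.0, (100, 2))
Q = P @ R_true.T + t_true
Q[:20] = rng.uniform(-2.0, 2.0, (20, 2))      # simulate mismatched features
R_est, t_est, inliers = ransac_rigid(P, Q, rng=rng)
```

Refitting on the full consensus set after the loop is what makes the final estimate accurate; the minimal samples only serve to identify which matches are trustworthy, which is how mismatches from moving objects in dynamic scenes are rejected.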
In this paper, we present a novel algorithm for odometry estimation based on ceiling vision. The main contribution of this algorithm is the introduction of principal direction detection, which can greatly reduce the error accumulation problem present in most visual odometry estimation approaches. The principal direction is defined based on the fact that our ceiling is filled with artificial vertical and horizontal lines, which can be used as a reference for the robot's current heading direction. The proposed approach operates in real time and performs well even under camera disturbance. A moving low-cost RGB-D camera (Kinect), mounted on a robot, is used to continuously acquire point clouds. Iterative closest point (ICP) is the common way to estimate the current camera position by registering the currently captured point cloud to the previous one. However, its performance suffers from the data association problem, or it requires pre-alignment information. The performance of the proposed principal direction detection approach does not rely on data association knowledge. Using this method, two point clouds are properly pre-aligned, so ICP can then fine-tune the transformation parameters and minimize the registration error. Experimental results demonstrate the performance and stability of the proposed system under disturbance in real time. Several indoor tests show that the proposed visual odometry estimation method can significantly improve the accuracy of simultaneous localization and mapping (SLAM).
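To illustrate the registration step that the principal-direction pre-alignment feeds into, here is a minimal sketch of point-to-point ICP. It is not the paper's implementation: it uses brute-force nearest-neighbour association and the Kabsch solution, and assumes the clouds are already roughly pre-aligned, as the principal-direction step would ensure; `best_rigid` and `icp` are illustrative names.

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rigid transform with Q ~ P @ R.T + t (Kabsch/Umeyama)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # proper rotation only
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(src, dst, n_iters=20):
    """Point-to-point ICP: associate each point with its nearest neighbour,
    refit the rigid transform, and repeat. Assumes a rough pre-alignment."""
    cur = src.copy()
    for _ in range(n_iters):
        # Data association: brute-force nearest neighbour (O(N^2), demo only).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return best_rigid(src, cur)   # net transform src -> dst estimate

# Two clouds differing by a small rotation about z plus a translation,
# i.e. already roughly pre-aligned (the role of principal direction detection).
rng = np.random.default_rng(3)
dst = rng.uniform(-1.0, 1.0, (60, 3))
theta = np.deg2rad(4.0)
c, s = np.cos(theta), np.sin(theta)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.04, -0.03, 0.02])
src = (dst - t_true) @ R_true                 # so dst = src @ R_true.T + t_true
R_est, t_est = icp(src, dst)
```

The sketch makes the abstract's point concrete: ICP's nearest-neighbour association is only reliable when the initial misalignment is small relative to the point spacing, which is precisely the gap a principal-direction pre-rotation closes before ICP fine-tunes the result.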