Journal articles
53 articles found
KLT-VIO: Real-time Monocular Visual-Inertial Odometry
1
Authors: Yuhao Jin, Hang Li, Shoulin Yin 《IJLAI Transactions on Science and Engineering》, 2024, Issue 1, pp. 8-16 (9 pages)
This paper proposes a Visual-Inertial Odometry (VIO) algorithm that relies solely on monocular cameras and Inertial Measurement Units (IMU), capable of real-time self-position estimation for robots during movement. By integrating the optical flow method, the algorithm tracks both point and line features in images simultaneously, significantly reducing computational complexity and the matching time for line feature descriptors. Additionally, this paper advances the triangulation method for line features, using depth information from line segment endpoints to determine their Plücker coordinates in three-dimensional space. Tests on the EuRoC datasets show that the proposed algorithm outperforms PL-VIO in terms of processing speed per frame, with an approximate 5% to 10% improvement in both relative pose error (RPE) and absolute trajectory error (ATE). These results demonstrate that the proposed VIO algorithm is an efficient solution suitable for low-computing platforms requiring real-time localization and navigation.
Keywords: visual-inertial odometry, optical flow, point features, line features, bundle adjustment
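The line-feature triangulation described in the abstract above can be illustrated with a minimal sketch: given the depths of a line segment's two endpoints, the 3D endpoints are recovered and their Plücker coordinates follow directly. The helper below is hypothetical and only shows the geometric construction, not the paper's full pipeline.

```python
import numpy as np

def plucker_from_endpoints(p1_norm, d1, p2_norm, d2):
    """Hypothetical helper: Plücker coordinates of the 3D line through two
    endpoints, each given as a normalized image point [x, y, 1] with a known
    depth along the optical axis."""
    P1 = d1 * np.asarray(p1_norm, dtype=float)   # 3D endpoint 1 in the camera frame
    P2 = d2 * np.asarray(p2_norm, dtype=float)   # 3D endpoint 2 in the camera frame
    v = P2 - P1                                  # line direction
    n = np.cross(P1, P2)                         # moment vector of the line
    return np.hstack([n, v])                     # 6-vector (n, v) Plücker coordinates

# Example: endpoints observed at depths of 2.0 m and 2.5 m
L = plucker_from_endpoints([0.10, -0.05, 1.0], 2.0, [0.30, 0.02, 1.0], 2.5)
```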
Survey and evaluation of monocular visual-inertial SLAM algorithms for augmented reality (Cited by 5)
2
Authors: Jinyu LI, Bangbang YANG, Danpeng CHEN, Nan WANG, Guofeng ZHANG, Hujun BAO 《Virtual Reality & Intelligent Hardware》, 2019, Issue 4, pp. 386-410 (25 pages)
Although VSLAM/VISLAM has achieved great success, it is still difficult to quantitatively evaluate the localization results of different kinds of SLAM systems from the perspective of augmented reality due to the lack of an appropriate benchmark. For AR applications in practice, a variety of challenging situations (e.g., fast motion, strong rotation, serious motion blur, dynamic interference) may easily be encountered, since a home user may not move the AR device carefully and the real environment may be quite complex. In addition, the frequency of tracking loss should be minimized, and recovery from failure should be fast and accurate for a good AR experience. Existing SLAM datasets/benchmarks generally only evaluate pose accuracy, and their camera motions are somewhat simple and do not fit the common cases in mobile AR applications well. With the above motivation, we build a new visual-inertial dataset as well as a series of evaluation criteria for AR. We also review the existing monocular VSLAM/VISLAM approaches with detailed analyses and comparisons. In particular, we select 8 representative monocular VSLAM/VISLAM approaches/systems and quantitatively evaluate them on our benchmark. Our dataset, sample code, and corresponding evaluation tools are available at the benchmark website http://www.zjucvg.net/eval-vislam/.
Keywords: visual-inertial SLAM, odometry, tracking, localization, mapping, augmented reality
PC-VINS-Mono: A Robust Mono Visual-Inertial Odometry with Photometric Calibration
3
Authors: Yao Xiao, Xiaogang Ruan, Xiaoqing Zhu 《Journal of Autonomous Intelligence》, 2018, Issue 2, pp. 29-35 (7 pages)
Feature detection and tracking, which heavily rely on the gray value information of images, is a very important procedure for Visual-Inertial Odometry (VIO), and the tracking results significantly affect the accuracy of the estimation and the robustness of the VIO. In environments with high-contrast lighting, images captured by an auto-exposure camera change frequently with the exposure time. As a result, the gray value of the same feature varies from frame to frame, which poses a large challenge to the feature detection and tracking procedure. Moreover, this problem is further aggravated by the nonlinear camera response function and lens attenuation. However, very few VIO methods take full advantage of photometric camera calibration or discuss its influence on VIO. In this paper, we propose a robust monocular visual-inertial odometry, PC-VINS-Mono, which can be understood as an extension of the open-source VIO pipeline VINS-Mono with the capability of photometric calibration. We evaluate the proposed algorithm on a public dataset. Experimental results show that, with photometric calibration, our algorithm achieves better performance compared to VINS-Mono.
Keywords: photometric calibration, visual-inertial odometry, simultaneous localization and mapping, robot navigation
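As a rough illustration of the photometric calibration idea discussed above, the sketch below undoes the nonlinear camera response, the lens attenuation (vignetting), and the exposure time before features are tracked. The lookup table and vignette map are assumed to come from an offline calibration; all names are hypothetical.

```python
import numpy as np

def photometric_correction(img, t_exposure, inv_response_lut, inv_vignette):
    """Minimal sketch: convert raw 8-bit intensities to exposure-normalized
    irradiance so that a feature keeps a similar value across frames.
    inv_response_lut: 256-entry table approximating G^{-1} (assumed calibrated)
    inv_vignette:     per-pixel map of 1/V(x)              (assumed calibrated)"""
    irradiance = inv_response_lut[img].astype(np.float64)  # undo nonlinear response
    irradiance *= inv_vignette                             # undo lens attenuation
    return irradiance / t_exposure                         # normalize by exposure time
```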
A monocular depth recovery method for aiding visual odometry initialization in corridor scenes
4
Authors: 徐晓苏, 刘烨豪, 姚逸卿, 夏若炎, 王子健, 范明泽 《中国惯性技术学报》 (EI, CSCD, PKU Core), 2024, Issue 8, pp. 753-761 (9 pages)
Monocular cameras suffer from limited performance in applications such as visual odometry because they lack scale information. Most existing studies address this problem with deep-learning-based methods, but their inference is slow and hard to run in real time. To address this, an explicit method for fast monocular depth recovery in corridor scenes based on nonlinear optimization is proposed. A virtual camera assumption is adopted to simplify the solution of the camera attitude angles; by minimizing geometric residuals, the depth estimation problem is converted into an optimization problem; and a depth-plane construction method is designed to classify the depths of spatial points, enabling fast depth estimation in enclosed structured scenes such as corridors. Finally, the proposed method is applied to the initialization of a monocular visual odometry, so that the odometry obtains true scale information and its localization accuracy improves. Experimental results show that the relative error of depth estimation within a 3 m range in corridor scenes is less than 8.4%, and the method runs in real time at 20 FPS on an Intel Core i5-7300HQ CPU.
Keywords: visual odometry, monocular depth estimation, depth recovery, nonlinear optimization
An online-updated monocular visual odometry
5
Authors: 王铭敏, 佃松宜, 钟羽中 《计算机应用研究》 (CSCD, PKU Core), 2024, Issue 7, pp. 2209-2214 (6 pages)
Existing deep-learning-based visual odometry (VO) methods generally struggle to adapt to new environments when the training samples differ from the application scenes. To address this, an online-updated monocular visual odometry algorithm, OUMVO, is proposed. Its key feature is that, during deployment, the pose estimation network is optimized online with the image sequence collected in real time, improving the network's generalization and its adaptability to new environments. The method uses self-supervised learning, requiring no additional ground-truth annotation, and employs a Transformer to model the image stream as a sequence so as to fully exploit the visual information within a local window and improve pose estimation accuracy. This avoids the limitation of traditional methods that estimate pose from only two adjacent frames, and it also compensates for the inability of RNN-based sequence modeling to be computed in parallel. In addition, a geometric consistency constraint in image space is adopted to resolve the scale drift problem of traditional monocular visual odometry. Quantitative and qualitative experiments on the KITTI dataset show that OUMVO outperforms existing state-of-the-art monocular visual odometry methods in both pose estimation accuracy and adaptability to new environments.
Keywords: visual odometry, monocular vision, online updating, self-supervised learning, Transformer neural network
Applied research on dual-odometry SLAM based on LiDAR and vision
6
Authors: 马延征, 曹一帆, 郭恒敏, 丁一凡, 成思怡 《移动信息》, 2024, Issue 9, pp. 328-330, 337 (4 pages)
This paper studies dual-odometry fusion SLAM based on LiDAR and vision, to compensate for the shortcomings of LiDAR in weakly textured scenes, to let a monocular visual odometry supplement the LiDAR point cloud, and to improve mapping accuracy. The implementation mainly uses the point cloud data collected by a 2D LiDAR, with a monocular vision sensor supplementing the blind areas of the LiDAR point cloud. A dual-odometry SLAM experimental platform is built, and the robot constructs a map in real time and obtains its current position, reducing cost.
Keywords: robot, LiDAR, dual odometry, monocular vision, point cloud map
M2C-GVIO: motion manifold constraint aided GNSS-visual-inertial odometry for ground vehicles
7
Authors: Tong Hua, Ling Pei, Tao Li, Jie Yin, Guoqing Liu, Wenxian Yu 《Satellite Navigation》 (EI, CSCD), 2023, Issue 1, pp. 77-91, I0003 (16 pages)
Visual-Inertial Odometry (VIO) has been developed from Simultaneous Localization and Mapping (SLAM) as a low-cost and versatile sensor fusion approach and has attracted increasing attention in ground vehicle positioning. However, VIOs usually suffer degraded performance in challenging environments and degenerate motion scenarios. In this paper, we propose a ground vehicle-based VIO algorithm built on the Multi-State Constraint Kalman Filter (MSCKF) framework. Based on a unified motion manifold assumption, we derive the measurement model of the manifold constraints, including velocity, rotation, and translation constraints. Then we present a robust filter-based algorithm dedicated to ground vehicles, whose key is real-time manifold noise estimation and adaptive measurement update. Besides, GNSS position measurements are loosely coupled into our approach, where the transformation between the GNSS and VIO frames is optimized online. Finally, we theoretically analyze the system observability matrix and observability measures. Our algorithm is tested in both simulation and on public datasets, including the Brno Urban dataset and the KAIST Urban dataset. We compare the performance of our algorithm with classical VIO algorithms (MSCKF, VINS-Mono, R-VIO, ORB_SLAM3) and GVIO algorithms (GNSS-MSCKF, VINS-Fusion). The results demonstrate that our algorithm is more robust than the other compared algorithms, showing competitive position accuracy and computational efficiency.
Keywords: sensor fusion, visual-inertial odometry, motion manifold constraint
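One way to picture the motion-manifold constraints described above is as pseudo-measurements stating that, for a ground vehicle on a locally planar surface, the lateral and vertical body-frame velocities are close to zero. The sketch below is only that simplified velocity constraint under an assumed body-frame convention, not the paper's full measurement model or its adaptive noise estimation.

```python
import numpy as np

def manifold_velocity_residual(R_wb, v_w):
    """Minimal sketch of a ground-vehicle velocity pseudo-measurement.
    Assumes body axes x-forward, y-left, z-up (convention assumed).
    R_wb: rotation from body to world frame, v_w: velocity in the world frame."""
    v_b = R_wb.T @ v_w                 # velocity expressed in the body frame
    return np.array([v_b[1], v_b[2]])  # residual: lateral and vertical components ~ 0
```

In an MSCKF-style filter this 2D residual would be fused like any other measurement, with a noise level reflecting how well the planar assumption currently holds.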
Monocular VO Scale Ambiguity Resolution Using an Ultra Low-Cost Spike Rangefinder
8
Authors: Ahmed El Amin, Ahmed El-Rabbany 《Positioning》, 2020, Issue 4, pp. 45-60 (16 pages)
Monocular visual odometry (VO) is the process of determining a user's trajectory through a series of consecutive images taken by a single camera. A major problem that affects the accuracy of monocular visual odometry, however, is the scale ambiguity. This research proposes an innovative augmentation technique which resolves the scale ambiguity problem of monocular visual odometry. The proposed technique augments the camera images with range measurements taken by an ultra-low-cost laser device known as the Spike. The Spike laser rangefinder is small and can be mounted on a smartphone. Two datasets were collected along precisely surveyed tracks, both outdoor and indoor, to assess the effectiveness of the proposed technique. The coordinates of both tracks were determined using a total station to serve as ground truth. In order to calibrate the smartphone's camera, seven images of a checkerboard were taken from different positions and angles and then processed using a MATLAB-based camera calibration toolbox. Subsequently, the speeded-up robust features (SURF) method was used for image feature detection and matching. The random sample consensus (RANSAC) algorithm was then used to remove outliers in the matched points between sequential images. The relative orientation and translation between the frames were computed and then scaled using the Spike measurements in order to obtain the scaled trajectory. Subsequently, the obtained scaled trajectory was used to construct the surrounding scene using the structure from motion (SfM) technique. Finally, both the computed camera trajectory and the constructed scene were compared with ground truth. It is shown that the proposed technique allows for achieving centimeter-level accuracy in monocular VO scale recovery, which in turn leads to enhanced mapping accuracy.
Keywords: Spike, visual odometry, monocular, scale
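The scale recovery step described above can be sketched very simply: a feature triangulated from the unscaled relative pose has a depth in arbitrary VO units, while the Spike range measurement to (approximately) the same point gives its metric depth, so their ratio rescales the translation. Correspondence between the laser spot and an image feature is assumed here, and all names are illustrative.

```python
import numpy as np

def recover_scale(t_unscaled, depth_unscaled, range_metric):
    """Minimal sketch of rangefinder-aided monocular scale recovery.
    t_unscaled:     relative translation from the essential matrix (arbitrary scale)
    depth_unscaled: triangulated depth of the ranged point, in VO units
    range_metric:   Spike range measurement to the same point, in metres"""
    s = range_metric / depth_unscaled   # metric scale factor
    return s * np.asarray(t_unscaled)   # metrically scaled translation
```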
Bags of tricks for learning depth and camera motion from monocular videos
9
Authors: Bowen DONG, Lu SHENG 《Virtual Reality & Intelligent Hardware》, 2019, Issue 5, pp. 500-510 (11 pages)
Background: Based on the seminal work proposed by Zhou et al., much of the recent progress in learning monocular visual odometry, i.e., depth and camera motion from monocular videos, can be attributed to tricks in the training procedure, such as data augmentation and learning objectives. Methods: Herein, we categorize a collection of such tricks through theoretical examination and empirical evaluation of their effects on the final accuracy of the visual odometry. Results/Conclusions: By combining the aforementioned tricks, we were able to significantly improve a baseline model adapted from SfMLearner without additional inference costs. Furthermore, we analyzed the principles of these tricks and the reasons for their success. Practical guidelines for future research are also presented.
Keywords: unsupervised learning, monocular visual odometry
Lightweight hybrid visual-inertial odometry with closed-form zero velocity update (Cited by 5)
10
Authors: QIU Xiaochen, ZHANG Hai, FU Wenxing 《Chinese Journal of Aeronautics》 (SCIE, EI, CAS, CSCD), 2020, Issue 12, pp. 3344-3359 (16 pages)
Visual-Inertial Odometry (VIO) fuses measurements from a camera and an Inertial Measurement Unit (IMU) to achieve performance better than using either sensor individually. Hybrid VIO is an extended Kalman filter-based solution which augments features with long tracking length into the state vector of the Multi-State Constraint Kalman Filter (MSCKF). In this paper, a novel hybrid VIO is proposed, which focuses on utilizing low-cost sensors while considering both computational efficiency and positioning precision. The proposed algorithm introduces several novel contributions. Firstly, by deducing an analytical error transition equation, one-dimensional inverse depth parametrization is utilized to parametrize the augmented feature state. This modification is shown to significantly improve the computational efficiency and numerical robustness, as a result achieving higher precision. Secondly, for better handling of static scenes, a novel closed-form Zero velocity UPdaTe (ZUPT) method is proposed. ZUPT is modeled as a measurement update for the filter rather than simply forbidding propagation, which has the advantage of correcting the overall state through correlation in the filter covariance matrix. Furthermore, online spatial and temporal calibration is also incorporated. Experiments are conducted on both a public dataset and real data. The results demonstrate the effectiveness of the proposed solution by showing that its performance is better than the baseline and state-of-the-art algorithms in terms of both efficiency and precision. Related software is open-sourced to benefit the community.
Keywords: inverse depth parametrization, Kalman filter, online calibration, visual-inertial odometry, zero velocity update
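To make the zero-velocity update concrete, the sketch below treats ZUPT as an ordinary Kalman measurement "velocity = 0", so the correction propagates to the other states through the covariance, exactly as the abstract argues. It is a generic EKF update under assumed state indexing and noise, not the paper's closed-form derivation.

```python
import numpy as np

def zupt_update(x, P, vel_slice, sigma_v=0.01):
    """Minimal sketch: zero-velocity pseudo-measurement update.
    x: state (or error-state) vector, P: covariance,
    vel_slice: slice selecting the 3 velocity states, sigma_v: assumed noise (m/s)."""
    H = np.zeros((3, x.size))
    H[:, vel_slice] = np.eye(3)                  # measurement model: the velocity states
    R = (sigma_v ** 2) * np.eye(3)
    r = np.zeros(3) - H @ x                      # residual against zero velocity
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x + K @ r
    P_new = (np.eye(x.size) - K @ H) @ P
    return x_new, P_new
```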
A survey of visual odometry technology (Cited by 27)
11
Authors: 李宇波, 朱效洲, 卢惠民, 张辉 《计算机应用研究》 (CSCD, PKU Core), 2012, Issue 8, pp. 2801-2805, 2810 (6 pages)
Visual odometry is a technique that estimates motion from visual information in an odometry-like manner. As a new means of navigation and localization, it has been successfully applied to autonomous mobile robots. This survey first introduces the two commonly used kinds of visual odometry, monocular and stereo, then discusses the state of research in detail from three aspects, robustness, real-time performance, and accuracy, and finally looks ahead to the development trends of visual odometry.
Keywords: visual odometry, autonomous mobile robot, monocular visual odometry, stereo visual odometry, robustness, real-time performance, accuracy
Monocular visual odometry/inertial integrated navigation algorithm (in English) (Cited by 15)
12
Authors: 冯国虎, 吴文启, 曹聚亮, 宋敏 《中国惯性技术学报》 (EI, CSCD, PKU Core), 2011, Issue 3, pp. 302-306 (5 pages)
A monocular visual odometry/strapdown inertial integrated navigation and positioning algorithm is proposed. Instead of having the visual odometry estimate the camera attitude, the inertial navigation system continuously provides the three-dimensional attitude corresponding to each camera exposure time, overcoming the large long-distance navigation error caused by the low accuracy of purely vision-based attitude estimation. After registration and time synchronization, the difference between the velocity solved by the inertial navigation system and the velocity computed by the visual odometry is used as the observation of the integrated navigation, and a Kalman filter corrects the errors of the integrated system while estimating the scale factor error of the visual odometry. A 22 m cart experiment and a 1412 m vehicle experiment were conducted in indoor and outdoor environments, with positioning errors of 3.2% and 4.0%, respectively. Compared with Clark's method, which periodically updates the camera attitude estimate with an attitude sensor, the monocular visual odometry/inertial integrated navigation achieves higher positioning accuracy with a lower error growth rate over distance, and is suitable for autonomous localization and navigation of legged or wheeled mobile robots when the wheels slip severely in complex terrain.
Keywords: monocular visual odometry, strapdown inertial integrated navigation system, integrated navigation, scale factor
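The observation used in the integration above, the difference between the INS-derived velocity and the scale-corrected VO velocity, reduces to a one-line residual; a hedged sketch with assumed variable names:

```python
import numpy as np

def velocity_difference_observation(v_ins, v_vo, k_vo):
    """Minimal sketch of the loosely coupled VO/SINS observation.
    v_ins: velocity solved by the inertial navigation system (navigation frame)
    v_vo:  velocity computed by the visual odometry (same frame, time-synchronized)
    k_vo:  current estimate of the VO scale factor (refined by the Kalman filter)"""
    return np.asarray(v_ins) - k_vo * np.asarray(v_vo)  # innovation fed to the filter
```

In the filter, the scale-factor error appears as an extra state, so the same residual also refines k_vo over time.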
Monocular visual odometry fusing optical flow and feature point matching (Cited by 16)
13
Authors: 郑驰, 项志宇, 刘济林 《浙江大学学报(工学版)》 (EI, CAS, CSCD, PKU Core), 2014, Issue 2, pp. 279-284 (6 pages)
To achieve accurate real-time localization on flat urban roads, a monocular visual odometry method is proposed that fuses optical flow tracking with feature point matching in a Kalman filter. Based on a planar assumption, optical flow tracking is used for small inter-frame displacements, while traditional speeded-up robust features (SURF) are matched over large inter-frame displacements to correct the optical flow result. The robot's position and attitude are updated through Kalman filtering. The results show that the fused algorithm overcomes the poor localization accuracy of the optical flow method and the slow processing speed of feature matching, while retaining the real-time performance of optical flow and the localization accuracy of feature matching. The method provides fairly accurate real-time localization output and has some robustness to illumination changes and low road texture.
Keywords: monocular visual odometry, optical flow, feature point matching, Kalman filtering
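As a toy illustration of the Kalman fusion idea described above, the sketch below fuses a frame-to-frame displacement predicted from optical flow with a SURF-matching correction, weighting them by assumed covariances (a static Kalman update); the real system also tracks attitude and exploits the planar-road model.

```python
import numpy as np

def fuse_displacement(d_flow, P_flow, d_surf, R_surf):
    """Minimal sketch: treat the optical-flow displacement as the prediction and
    the SURF-matching displacement as a measurement of the same quantity.
    d_flow, d_surf: 2D planar displacements; P_flow, R_surf: 2x2 covariances (assumed)."""
    K = P_flow @ np.linalg.inv(P_flow + R_surf)   # Kalman gain
    d_fused = d_flow + K @ (d_surf - d_flow)      # corrected displacement
    P_fused = (np.eye(2) - K) @ P_flow            # updated covariance
    return d_fused, P_fused
```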
Influence of camera attitude installation error on monocular visual localization accuracy (Cited by 12)
14
Authors: 曹毓, 冯莹, 赵立双, 雷兵 《传感器与微系统》 (CSCD, PKU Core), 2012, Issue 12, pp. 23-26, 30 (5 pages)
Existing monocular visual localization methods suffer from limited accuracy due to camera attitude errors, yet few publications analyze this quantitatively. To address this, the influence of the installation error of the camera attitude angles on monocular visual localization accuracy is studied through qualitative analysis and quantitative simulation. Three groups of experiments were conducted on a flat, straight road 211.377 m long; the experimental results agree well with the simulation. After correcting the camera attitude installation error, the bending of the driving trajectory observed in the three experiments was eliminated, and a ranging accuracy with a maximum error of 0.45% was obtained. The conclusions provide guidance for improving monocular visual localization accuracy.
Keywords: machine vision, monocular visual localization, camera attitude measurement, attitude installation error
Monocular visual odometry based on an improved SURF algorithm (Cited by 8)
15
Authors: 冉峰, 李天, 季渊, 刘万林 《电子测量技术》, 2017, Issue 5, pp. 185-188, 200 (5 pages)
Traditional monocular visual odometry suffers from too many mismatched points, low matching accuracy, and high computational cost during feature extraction. To address this, a monocular visual odometry model based on an improved SURF algorithm is proposed. First, the SURF algorithm detects and matches feature points between two adjacent frames captured by a monocular camera; then the RANSAC algorithm removes mismatched points to improve matching accuracy and reduce computation; finally, the rotation matrix R and translation vector T between the two adjacent frames are solved from the matched feature points to complete motion estimation. Experimental results show that the model's computation speed improves by 11.2% and 10.38% when estimating curved and straight-line motion, respectively.
Keywords: SURF algorithm, RANSAC algorithm, monocular visual odometry, rotation matrix, translation vector
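The frame-to-frame pipeline described above (SURF detection and matching, RANSAC outlier rejection, then recovery of R and T) maps almost directly onto OpenCV; the sketch below assumes the contrib build with SURF is available and that the intrinsics K are known, and is only an outline of that pipeline.

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """Minimal sketch: SURF + RANSAC motion estimation between two frames."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img_prev, None)
    kp2, des2 = surf.detectAndCompute(img_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects mismatched points while fitting the essential matrix
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t  # rotation matrix and unit-scale translation vector
```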
SLAM for micro aerial robots combined with visual odometry (Cited by 8)
16
Authors: 任沁源, 李平 《仪器仪表学报》 (EI, CAS, CSCD, PKU Core), 2013, Issue 2, pp. 475-480 (6 pages)
Classical monocular visual simultaneous localization and mapping methods based on the "smooth camera motion model" are not suitable for micro aerial robots with complex flight patterns. To address this, a monocular visual SLAM method combined with visual odometry is proposed. The method directly estimates the relative pose changes of the robot's onboard camera through visual odometry and embeds this pose information into an EKF-based monocular visual SLAM algorithm. In addition, when visual odometry is used for pose estimation, a feature classification strategy is adopted to handle possible degenerate cases and improve the robustness of the estimation. The method was applied to a real miniature intelligent unmanned helicopter system, and experimental data verify its good applicability and practicality.
Keywords: micro aerial robot, monocular vision, visual odometry, simultaneous localization and mapping
A scene-robust monocular visual odometry algorithm (Cited by 2)
17
Authors: 乌萌, 郝金明, 高扬, 刘婧, 邹璐 《测绘科学技术学报》 (PKU Core), 2019, Issue 4, pp. 364-370 (7 pages)
For real-time pose estimation of a vehicle-mounted platform from image sequences captured by a monocular camera, the principles and experimental results of typical monocular semi-direct algorithms are compared, together with experimental results of corresponding-point tracking, long-duration scene-robust high-precision pose estimation, and pose optimization under different scenes, motion states, illumination, and coupled factors. By comparing multiple stages of the computation and multiple aspects of the results with two typical semi-direct MVO algorithms, the better method for each stage and for the overall result is identified; finally, a scene-robust monocular semi-direct visual odometry algorithm is proposed and verified with real sequence data. The results show that the algorithm's scene robustness and accuracy for long-duration pose estimation are significantly better than those of current typical semi-direct MVO algorithms, with pose estimation accuracy improved by more than 10% over the ERL algorithm and computational efficiency comparable to it, meeting the needs of various monocular visual odometry application scenarios.
Keywords: monocular visual odometry, scene robustness, reprojection error, semi-direct, pose estimation
Optimization of SURF algorithm parameters in monocular visual localization (Cited by 2)
18
Authors: 赵立双, 冯莹, 曹毓 《计算机技术与发展》, 2012, Issue 6, pp. 6-9 (4 pages)
To improve the efficiency of monocular visual localization, the selection of SURF algorithm parameters is optimized in a SURF-based monocular visual localization system. First, the characteristics of road-surface images and of the SURF feature points in them are analyzed, and on this basis the two important SURF parameters, the number of octaves and the number of layers, are chosen. Second, the relationship between the number of feature points in road-surface image sequences and the Hessian determinant threshold is analyzed, and a method for dynamically setting the Hessian determinant threshold is proposed. Optimizing the SURF parameters effectively reduces the program's computational load. Experimental results show that the method satisfies the requirements of localization in road environments and greatly improves program efficiency while preserving the algorithm's accuracy and stability.
Keywords: machine vision, monocular visual localization, SURF parameters, Hessian determinant threshold
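The dynamic threshold idea above can be approximated by a simple feedback rule: raise the Hessian determinant threshold when a frame yields too many features and lower it when it yields too few, so the detector's workload stays roughly constant. The controller below is a guess at one such rule (gain, target, and bounds are made up), not the paper's actual formula.

```python
def adapt_hessian_threshold(threshold, n_features, target=300, gain=0.2,
                            lo=100.0, hi=5000.0):
    """Hypothetical proportional adjustment of the SURF Hessian threshold."""
    threshold *= 1.0 + gain * (n_features - target) / float(target)
    return min(max(threshold, lo), hi)   # keep the threshold in a sane range
```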
A SLAM algorithm based on monocular/IMU/odometer fusion (Cited by 4)
19
Authors: 张福斌, 张炳烁, 杨玉帅 《兵工学报》 (EI, CAS, CSCD, PKU Core), 2022, Issue 11, pp. 2810-2818 (9 pages)
When common monocular visual-inertial SLAM algorithms are applied to wheeled robots whose motion is mostly planar, additional unobservable degrees of freedom often degrade navigation and localization accuracy. To solve this problem, a tightly coupled visual/IMU/odometer SLAM algorithm that improves localization accuracy is proposed. In the visual front end, the original image-pyramid LK optical flow method is improved: the rotation information from the gyroscope and the translation information from the odometer are used as priors, and the computation of the optical flow initial values is optimized to reduce the computational load. Wheel odometer information is introduced and the IMU/odometer preintegration is derived; odometer constraints are added to the initialization and to the back-end nonlinear optimization, fully fusing the visual, IMU, and odometer information. Tests on open-source datasets and cart experiments show that the new algorithm reduces the number of optical flow iterations by about 32.5% and reduces the mean localization error by about 40% compared with VINS-Mono.
Keywords: wheeled robot, monocular camera, inertial measurement unit, optical flow, odometer, navigation
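The front-end improvement described above, seeding the pyramidal LK search with an IMU rotation prior so that fewer iterations are needed, can be sketched as a pure-rotation prediction of each feature's pixel location (the odometer translation prior and the unknown depths are ignored here; frame conventions are assumed):

```python
import numpy as np

def predict_flow_init(pts_norm_prev, R_curr_prev, K):
    """Minimal sketch: predicted pixel positions used as optical-flow initial values.
    pts_norm_prev: Nx3 normalized bearings [x, y, 1] in the previous frame
    R_curr_prev:   camera rotation from previous to current frame (gyro-integrated)
    K:             3x3 intrinsic matrix"""
    bearings = (R_curr_prev @ pts_norm_prev.T).T   # bearings rotated into the new frame
    proj = (K @ bearings.T).T
    return proj[:, :2] / proj[:, 2:3]              # pixel predictions for LK seeding
```

Such predictions would typically be handed to cv2.calcOpticalFlowPyrLK through its nextPts argument together with the OPTFLOW_USE_INITIAL_FLOW flag.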
A monocular visual-inertial odometry algorithm based on a closed-form solution of IMU preintegration (Cited by 9)
20
Authors: 徐晓苏, 吴贤 《中国惯性技术学报》 (EI, CSCD, PKU Core), 2020, Issue 4, pp. 440-447 (8 pages)
Visual-inertial odometry algorithms that use an extended Kalman filter as the back end are widely used in practice because they maintain high accuracy while running in real time. To process the IMU data between two image frames quickly and accurately, an algorithm based on a closed-form solution of IMU preintegration is proposed. Whereas traditional optimization-based visual-inertial odometry simplifies the required preintegration terms with discrete quaternion integration under a piecewise-constant acceleration approximation, the closed-form preintegration algorithm solves an analytical solution over each IMU period and is applied within the Multi-State Constraint Kalman Filter (MSCKF) visual-inertial odometry framework to improve localization accuracy. To address the numerical stability problem of the measurement-equation parametrization in the MSCKF, an inverse depth parametrization is proposed, which overcomes the singularity that appears in the system observations when the z-axis depth of a spatial point approaches zero, effectively increasing the system's robustness. Experiments on six flight sequences of the public EuRoC dataset show that, compared with the traditional MSCKF visual-inertial odometry, the proposed algorithm drifts less, reducing the root mean square error by about 36.5% and effectively improving localization accuracy.
Keywords: multi-state constraint Kalman filter, monocular visual-inertial odometry, closed-form IMU preintegration, inverse depth parametrization
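The inverse depth parametrization mentioned above can be illustrated by the way a feature stored as (α, β, ρ) = (x/z, y/z, 1/z) in its anchor camera is projected into another camera without ever dividing by the raw depth; the sketch below shows that standard formulation under assumed frame conventions, not the paper's exact measurement equation.

```python
import numpy as np

def project_inverse_depth(alpha, beta, rho, R_ji, t_ji):
    """Minimal sketch: project an inverse-depth feature from anchor camera i
    into camera j. Scaling the transform by rho keeps the expression
    well-conditioned without forming the raw depth z = 1/rho.
    R_ji, t_ji: pose of camera i expressed in camera j (convention assumed)."""
    p = R_ji @ np.array([alpha, beta, 1.0]) + rho * np.asarray(t_ji)
    return p[:2] / p[2]   # normalized image coordinates in camera j
```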