Journal articles: 2 articles found
1. RB-SLAM: visual SLAM based on rotated BEBLID feature point description
Authors: Fan Xinyue, Wu Kai, Chen Shuai 《The Journal of China Universities of Posts and Telecommunications》 EI CSCD, 2023, No. 3, pp. 1-13 (13 pages)
The extraction and description of image features are very important for visual simultaneous localization and mapping (V-SLAM). A rotated boosted efficient binary local image descriptor (BEBLID) SLAM (RB-SLAM) algorithm based on improved oriented FAST and rotated BRIEF (ORB) feature description is proposed in this paper, which addresses the low localization accuracy and time efficiency of the current ORB-SLAM3 algorithm. Firstly, the BEBLID replaces the feature point descriptor of the original ORB to enhance the expressiveness and description efficiency of the image features. Secondly, rotational invariance is added to the BEBLID using the orientation information of the feature points, and the rotationally stable bits of the BEBLID are selected to further strengthen this invariance. Finally, the binary visual dictionary is retrained on BEBLID descriptors to reduce the cumulative error of V-SLAM and improve the loading speed of the visual dictionary. Experiments show that dictionary loading efficiency is improved by more than 10 times, and that RB-SLAM improves trajectory accuracy by 24.75% on the TUM dataset and 26.25% on the EuRoC dataset compared with ORB-SLAM3.
Keywords: visual simultaneous localization and mapping (V-SLAM); oriented FAST and rotated BRIEF (ORB); feature extraction; boosted efficient binary local image descriptor (BEBLID); rotational invariance
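The descriptor substitution described in the abstract can be illustrated with OpenCV, which ships a BEBLID implementation in its contrib modules. The following is a minimal sketch under that assumption, not the paper's code: it detects oriented ORB keypoints and describes them with BEBLID, while the paper's rotation-aware bit selection and retrained visual dictionary are contributions not present in stock OpenCV. The file name `frame.png` is a placeholder.

```python
# Minimal sketch: detect ORB keypoints (which carry orientation and scale) and
# describe them with BEBLID instead of ORB's BRIEF-based descriptor.
# Requires opencv-contrib-python.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input frame

# ORB detector supplies oriented, multi-scale keypoints.
orb = cv2.ORB_create(nfeatures=1000)
keypoints = orb.detect(img, None)

# BEBLID descriptor from opencv-contrib; 0.75 is the scale factor the OpenCV
# documentation suggests when pairing BEBLID with ORB keypoints.
beblid = cv2.xfeatures2d.BEBLID_create(0.75)
keypoints, descriptors = beblid.compute(img, keypoints)

# Binary descriptors are matched with Hamming distance, as in ORB-SLAM.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
```

Because BEBLID, like BRIEF, produces binary strings, the rest of an ORB-SLAM-style pipeline (Hamming-distance matching, bag-of-words vocabulary) can stay structurally unchanged once the vocabulary is retrained on the new descriptors.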
2. Research on visual SLAM for indoor dynamic scenes based on deep learning
Authors: 郑晓华, 耿鑫雷, 邓浩坤 《测绘地理信息》 CSCD, 2024, No. 2, pp. 51-55 (5 pages)
Visual simultaneous localization and mapping (VSLAM) has been one of the key research directions in robotics and computer vision in recent years, but current mainstream algorithms are designed mainly for static environments; when moving objects are present in the scene, localization accuracy and stability degrade significantly. To address this problem, a VSLAM front-end dynamic feature point rejection algorithm combining inertial measurement unit (IMU) integration with YOLOv4 semantic segmentation is proposed. The YOLOv4 network performs semantic segmentation on the image to identify objects that may be moving; IMU integration is then combined with the segmentation result to compute the reprojection error of feature points inside the detection boxes of potentially moving objects, so that moving feature points in the environment are identified and removed. The algorithm is validated on the TUM Visual-Inertial Dataset, and the results show that in indoor scenes containing moving objects it effectively rejects the moving objects and significantly improves the localization accuracy and stability of the SLAM system.
Keywords: visual simultaneous localization and mapping (VSLAM); feature points; dynamic objects; deep learning
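The dynamic-point rejection step described in the abstract can be sketched as follows, assuming detection boxes for potentially moving classes from YOLOv4 and a camera pose predicted by IMU integration. All names (K, R_cw, t_cw, boxes, thresh) are illustrative assumptions rather than the paper's interface; the threshold value is a placeholder.

```python
# Minimal sketch of the dynamic feature point rejection idea: project tracked
# 3D points into the current frame with the IMU-predicted pose and discard
# points whose reprojection error is large AND that fall inside a detection
# box of a potentially moving class.
import numpy as np

def reprojection_error(p_world, uv_observed, K, R_cw, t_cw):
    """Pixel distance between the observed keypoint and the point projected
    with the IMU-predicted world-to-camera pose (R_cw, t_cw)."""
    p_cam = R_cw @ p_world + t_cw
    uv_pred = (K @ p_cam)[:2] / p_cam[2]
    return np.linalg.norm(uv_pred - uv_observed)

def in_box(uv, box):
    """True if pixel uv lies inside an axis-aligned box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return x1 <= uv[0] <= x2 and y1 <= uv[1] <= y2

def filter_dynamic(points_3d, keypoints_2d, boxes, K, R_cw, t_cw, thresh=3.0):
    """Keep indices of points that are either outside all 'movable' detection
    boxes or consistent with the IMU-predicted pose (small reprojection error)."""
    keep = []
    for i, (p, uv) in enumerate(zip(points_3d, keypoints_2d)):
        inside = any(in_box(uv, b) for b in boxes)
        if not inside or reprojection_error(p, uv, K, R_cw, t_cw) < thresh:
            keep.append(i)
    return keep
```

Points outside the detection boxes are kept unconditionally; only points that both lie on a potentially moving object and disagree with the IMU-predicted motion are treated as dynamic and removed from tracking.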