Abstract: ORB-SLAM3 (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping 3) is among the best-performing visual SLAM algorithms, but its static-environment assumption degrades its accuracy in highly dynamic environments and can even cause localization failure. To address this problem, a dynamic feature-point removal method combining optical flow and instance segmentation is proposed to improve the localization accuracy of ORB-SLAM3 in highly dynamic environments. Experiments in both RGB-D and monocular camera modes on the TUM dataset demonstrate the effectiveness of the method.
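The rejection step this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the instance mask, the dense flow field, and the threshold `flow_thresh` are assumed inputs that a real system would obtain from a segmentation network and an optical-flow estimator.

```python
import numpy as np

def filter_dynamic_points(points, flow, dyn_mask, flow_thresh=2.0):
    """Drop feature points that fall inside a dynamic-object instance mask
    or whose optical-flow magnitude deviates strongly from the scene median
    (a proxy for motion inconsistent with the camera's ego-motion).

    points   : (N, 2) integer pixel coordinates (x, y)
    flow     : (H, W, 2) dense optical-flow field
    dyn_mask : (H, W) bool, True on pixels segmented as dynamic objects
    """
    xs, ys = points[:, 0], points[:, 1]
    on_dynamic = dyn_mask[ys, xs]                  # inside an instance mask?
    mags = np.linalg.norm(flow[ys, xs], axis=1)    # per-point flow magnitude
    residual = np.abs(mags - np.median(mags))      # deviation from dominant motion
    keep = ~on_dynamic & (residual < flow_thresh)
    return points[keep]
```

The surviving points would then be passed to ORB-SLAM3's tracking thread as usual.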
Abstract: To address the problem that dynamic objects (e.g., pedestrians, vehicles, and animals) in real-world scenes degrade the localization and mapping accuracy of visual SLAM (Simultaneous Localization and Mapping), the YOLOv3-ORB-SLAM3 algorithm is proposed on the basis of ORB-SLAM3 (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping 3). The algorithm adds a semantic thread to ORB-SLAM3, forming a dual-thread mechanism that separates dynamic- and static-scene feature extraction: the semantic thread uses YOLOv3 to detect and semantically identify dynamic objects in the scene and removes outliers among the feature points extracted from dynamic regions, while the tracking thread extracts ORB features from the scene and, combined with the semantic information, passes only static-scene features to the back end, thereby eliminating the interference of dynamic scenes and improving the localization accuracy of the visual SLAM algorithm. Validation on the TUM (Technical University of Munich) dataset shows that on dynamic sequences YOLOv3-ORB-SLAM3 reduces the ATE (Absolute Trajectory Error) metric by about 30% in monocular mode and by 10% in RGB-D (Red, Green and Blue-Depth) mode relative to ORB-SLAM3, with no obvious degradation on static sequences.
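The semantic thread's box-based feature filtering can be sketched as below. This is a simplified stand-in, assuming the YOLOv3 detections arrive as axis-aligned boxes for dynamic classes only; the paper additionally performs outlier rejection within those regions, which is omitted here.

```python
import numpy as np

def keep_static_keypoints(keypoints, dyn_boxes):
    """Remove ORB keypoints falling inside detected dynamic-object boxes.

    keypoints : (N, 2) pixel coordinates (x, y)
    dyn_boxes : iterable of (x1, y1, x2, y2) detections of dynamic classes
                (e.g. person, car) as a detector such as YOLOv3 would emit
    """
    keep = np.ones(len(keypoints), dtype=bool)
    for x1, y1, x2, y2 in dyn_boxes:
        inside = ((keypoints[:, 0] >= x1) & (keypoints[:, 0] <= x2) &
                  (keypoints[:, 1] >= y1) & (keypoints[:, 1] <= y2))
        keep &= ~inside                      # drop points inside any dynamic box
    return keypoints[keep]
```

Only the returned static keypoints would be handed to the back end for pose optimization.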
Abstract: To address the strong impact of illumination intensity on the information content, timeliness, and robustness of robotic visual SLAM (Simultaneous Localization and Mapping), a LiDAR (Light Detection and Ranging)-enhanced visual SLAM method for multi-robot collaborative map building is proposed. During map construction, LiDAR depth measurements are integrated into the existing ORB-SLAM3 (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping 3) pipeline for feature-point detection and feature description. An improved extended Kalman filter fuses the high-precision LiDAR data with the temporal information of the visual sensor to obtain each robot's pose, and a dense point-cloud map of each robot is built with the help of the depth map. A keyframe tracking model and the ICP (Iterative Closest Point) algorithm yield the coordinate transformations between robots with shared observations, from which each robot's world coordinate frame is obtained, and the multi-robot collaborative map is fused and constructed in that world frame. Experiments on the Gazebo simulation platform verify the timeliness and robustness of the method.
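The inter-robot alignment step relies on ICP, whose inner computation is the closed-form rigid transform between two matched point sets. A minimal sketch of that inner step (the Kabsch/SVD solution) is shown below; the paper's full pipeline additionally handles correspondence search and the keyframe tracking model, which are not reproduced here.

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form rigid transform (R, t) minimizing ||R @ src_i + t - dst_i||,
    i.e. the inner step of ICP used to relate two robots' coordinate frames.

    src, dst : (N, 3) corresponding 3-D points in each robot's frame
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Applying the recovered (R, t) to one robot's map expresses it in the other robot's frame, after which the dense point clouds can be merged.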
Funding: National Natural Science Foundation of China, Grant/Award Number: 61702320.
Abstract: Precise localisation and navigation are the two most important tasks for mobile robots. Visual simultaneous localisation and mapping (VSLAM) is useful in the localisation systems of mobile robots. A wide-angle camera has a broad field of view and captures richer image information, so it is widely used on mobile robots, including legged robots. However, wide-angle cameras complicate the design of visual localisation systems more than ordinary cameras do, placing higher requirements and challenges on VSLAM technologies based on them. To resolve the distortion in wide-angle images and improve localisation accuracy, a sampling VSLAM based on a wide-angle camera model is proposed for legged mobile robots. Exploiting the predictability of a legged robot's periodic motion, the method samples images periodically, selects image blocks with clear texture, and enhances image details; feature points are first extracted within the blocks, and the block-level feature points are then used to obtain the feature points of the whole image. Finally, points on the incident rays through the normalised plane are selected as template points; the relationship between the template points and the images is established through the wide-angle camera model, and the pixel coordinates of the template points in the images and their descriptors are computed. Extensive experiments are conducted on the TUM datasets with a quadruped robot. The results show that the trajectory error and translation error of the proposed method are lower than those of the VINS-MONO, ORB-SLAM3, and Periodic SLAM systems.
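To make the role of the wide-angle camera model concrete, here is a sketch of the equidistant fisheye projection, one common wide-angle model that maps the angle of an incident ray to an image radius. The abstract does not specify which model the paper uses, so this is an illustrative assumption; the focal length `f` and principal point `(cx, cy)` are hypothetical calibration values.

```python
import numpy as np

def project_equidistant(P, f, cx, cy):
    """Equidistant fisheye model: image radius r = f * theta, where theta is
    the angle between the incident ray and the optical axis.

    P : 3-D point (x, y, z) in the camera frame, z > 0
    """
    x, y, z = P
    theta = np.arctan2(np.hypot(x, y), z)   # ray angle to the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # radius grows linearly with angle
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```

Under such a model, pixel coordinates for the template points on each incident ray follow directly from the ray's angles, which is the kind of template-to-image relationship the abstract describes.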