Abstract: The design and implementation of an indoor security robot integrates the two fields of indoor navigation and object detection; building a more capable robot system in this way gives the project both theoretical research significance and practical application value. The project is developed on ROS (Robot Operating System). The main tools and frameworks used include the AMCL (Adaptive Monte Carlo Localization) package, SLAM (Simultaneous Localization and Mapping) algorithms, the Darknet deep learning framework, and the YOLOv3 (You Only Look Once) algorithm. The main development tasks include odometry information fusion, coordinate transformation, localization and mapping, path planning, YOLOv3 model training, and package configuration and deployment. The indoor security robot has two main functions: first, it performs real-time localization, mapping, and navigation of the indoor environment using sensors such as lidar and a camera; second, it performs object detection through a USB camera. Through detailed analysis and design of these two functional modules, the expected functionality is realized and meets the needs of daily use.
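The Monte Carlo localization cycle that the AMCL package implements — a motion update from odometry, a measurement weighting against the lidar scan, and resampling — can be sketched as below. This is a minimal illustrative reconstruction, not the actual AMCL API; the function names, noise parameters, and sensor model are all assumptions.

```python
import math
import random

def motion_update(particles, delta, noise=0.05):
    """Propagate each particle (x, y, theta) by the odometry increment,
    rotated into the particle's own frame, with additive Gaussian noise."""
    dx, dy, dth = delta
    out = []
    for x, y, th in particles:
        nx = x + dx * math.cos(th) - dy * math.sin(th) + random.gauss(0, noise)
        ny = y + dx * math.sin(th) + dy * math.cos(th) + random.gauss(0, noise)
        out.append((nx, ny, th + dth + random.gauss(0, noise)))
    return out

def measurement_update(particles, ranges, expected_fn, sigma=0.2):
    """Weight each particle by how well the ranges predicted from its pose
    (via expected_fn, a map ray-casting stand-in) match the actual scan."""
    weights = []
    for p in particles:
        err = sum((r - e) ** 2 for r, e in zip(ranges, expected_fn(p)))
        weights.append(math.exp(-err / (2 * sigma ** 2)))
    total = sum(weights) or 1.0
    return [w / total for w in weights]

def resample(particles, weights):
    """Draw a new particle set in proportion to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))
```

A localization step is then `resample(ps, measurement_update(motion_update(ps, odom), scan, expected_fn))`, repeated as odometry and scans arrive.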
Funding: Supported by the National Natural Science Foundation of China (Nos. U1813215 and 61773239) and the Taishan Scholars Program of Shandong Province (No. ts201511005).
Abstract: Indoor robots are expected to complete metric navigation tasks safely and efficiently in complex environments, which is an essential prerequisite for accomplishing other high-level operation tasks. 2D occupancy grid maps are sufficient to support the robots in avoiding all obstacles in the environment during navigation. However, maps built from ordinary laser scans reflect only a horizontal slice of the environment, so some obstacles may be missed or their exact boundaries misinterpreted, threatening the safety and efficiency of robot navigation. This paper presents a 2D mapping method based on virtual laser scans that provides a more comprehensive representation of obstacles for indoor robot navigation. The resulting maps accurately represent the top-down projected contours of all obstacles regardless of their vertical positions. The virtual laser scans are generated from the raw data of an RGB-D camera by filtering, projection, and polar-coordinate scanning, and are fed directly to laser-based simultaneous localization and mapping (SLAM) algorithms to update the current map and robot position. Two auxiliary strategies are proposed to further improve map quality by reducing the impact of the RGB-D camera's narrow field of view and blind zone on the observations. The improved virtual laser generation method makes the extracted 2D observations fit laser-based SLAM algorithms, and the two auxiliary strategies are novel ways to improve map quality. The generated maps reflect comprehensive obstacle information in indoor environments with good accuracy. Comparative experiments on four simulation scenarios and three real-world scenarios demonstrate the effectiveness of the 2D mapping method.
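The filtering, projection, and polar-coordinate scanning steps for building a virtual laser scan from RGB-D data can be sketched as follows. The height thresholds, beam count, and function name are illustrative assumptions, not the paper's actual implementation.

```python
import math

def virtual_scan(points, num_beams=360, max_range=8.0, z_min=0.02, z_max=1.8):
    """Collapse a 3D point cloud (robot frame, metres) into a 2D virtual
    laser scan: filter points by height, project them onto the ground
    plane, and keep the nearest return in each angular bin."""
    ranges = [max_range] * num_beams
    for x, y, z in points:
        if not (z_min <= z <= z_max):      # drop floor and ceiling points
            continue
        r = math.hypot(x, y)               # top-down projection
        if r >= max_range:
            continue
        bearing = math.atan2(y, x)
        beam = int((bearing + math.pi) / (2 * math.pi) * num_beams) % num_beams
        if r < ranges[beam]:               # nearest obstacle wins the bin
            ranges[beam] = r
    return ranges
```

Because obstacles at any height within the band collapse into the same scan, the output can stand in for a real lidar message when fed to a laser-based SLAM front end.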
Funding: The project is supported by the European Open Robot Control Software Foundation (No. IST-2000-31064), the National Natural Science Foundation of China (No. 60475031), and the Swedish Foundation for Strategic Research, Sweden.
Abstract: This paper discusses the design and implementation of an architecture for a mobile robot navigating dynamic and unknown indoor environments. The architecture is based on the framework of Open Robot Control Software at KTH (OROCOS@KTH), which is also discussed and evaluated. To navigate indoors efficiently, a new algorithm named door-like-exit detection is proposed, which exploits the 2D features of a door and extracts the key points of a pathway from the raw data of a laser scanner. As a hybrid architecture, it is decomposed into several basic components, each classified as either deliberative or reactive. The components execute concurrently and communicate with one another; the architecture is extensible and transferable, and its components are reusable.
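One way to detect a door-like exit in a laser sweep is to look for a run of far readings bracketed by two sharp range discontinuities (the door posts) whose separation is a plausible door width. The sketch below is an illustrative reconstruction of that idea, not the paper's algorithm; all thresholds are assumed.

```python
import math

def find_door_exits(ranges, angle_step, jump=0.8, min_width=0.7, max_width=1.5):
    """Return the midpoint (x, y) of each door-like opening in a 2D scan:
    a stretch of far readings between two near endpoints (the door posts)
    whose chord length is within plausible door-width bounds."""
    exits = []
    n = len(ranges)
    i = 0
    while i < n - 1:
        if ranges[i + 1] - ranges[i] > jump:            # left post -> opening
            j = i + 1
            while j < n - 1 and ranges[j] - ranges[j + 1] <= jump:
                j += 1
            if j < n - 1:                               # opening -> right post
                a1, r1 = i * angle_step, ranges[i]
                a2, r2 = (j + 1) * angle_step, ranges[j + 1]
                x1, y1 = r1 * math.cos(a1), r1 * math.sin(a1)
                x2, y2 = r2 * math.cos(a2), r2 * math.sin(a2)
                width = math.hypot(x2 - x1, y2 - y1)
                if min_width <= width <= max_width:     # plausible doorway
                    exits.append(((x1 + x2) / 2, (y1 + y2) / 2))
            i = j + 1
        else:
            i += 1
    return exits
```

The midpoints serve as the key pathway points a deliberative planner can steer toward.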
Funding: This work was supported in part by the National Natural Science Foundation of China under Grants U20A20225 and 61833013, and in part by the Shaanxi Provincial Key Research and Development Program under Grant 2022-GY111.
Abstract: This paper proposes an improved high-precision 3D semantic mapping method for indoor scenes using RGB-D images. Current semantic mapping algorithms suffer from low semantic annotation accuracy and insufficient real-time performance. To address these issues, we first adopt the Elastic Fusion algorithm to select key frames from indoor image sequences captured by a Kinect sensor and construct a spatial model of the indoor environment. Then, an indoor RGB-D image semantic segmentation network is proposed that uses multi-scale feature fusion to quickly and accurately obtain pixel-level object labels for the spatial point cloud model. Finally, Bayesian updating performs incremental semantic label fusion on the established point cloud model, and dense conditional random fields (CRFs) are employed to optimize the 3D semantic map, resulting in a high-precision spatial semantic map of the indoor scene. Experimental results show that the proposed semantic mapping system processes RGB-D image sequences in real time, outputs accurate semantic segmentation results of indoor scene images together with the current local spatial semantic map, and finally constructs a globally consistent, high-precision 3D semantic map of indoor scenes.
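The incremental Bayesian label fusion step can be sketched per map element as a multiply-and-renormalise update of its class distribution. The dictionary-based representation and fallback behaviour are illustrative simplifications, not the paper's implementation.

```python
def fuse_labels(prior, likelihood):
    """One Bayesian update of an element's per-class label distribution:
    multiply the prior by the new observation's class likelihoods, then
    renormalise so the distribution sums to one."""
    posterior = {c: prior.get(c, 0.0) * likelihood.get(c, 0.0) for c in prior}
    total = sum(posterior.values())
    if total == 0.0:          # contradictory observation: keep the prior
        return dict(prior)
    return {c: p / total for c, p in posterior.items()}
```

Starting from a uniform prior, repeated consistent observations sharpen the distribution toward the true class, which is what makes the per-frame segmentation noise wash out over the sequence.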
Abstract: To improve the positioning accuracy of ultra-wideband (UWB) technology in non-line-of-sight (NLOS) environments, a positioning method is proposed that fuses vision and UWB data with a particle filter. The vision module estimates an absolute pose by recognizing and detecting tags; the UWB module identifies ranging values corrupted by NLOS conditions and feeds the best ranging values to an adaptively weighted positioning algorithm, improving overall positioning accuracy across the covered area; a particle filter then fuses the real-time positioning information of the two modules. Experimental results show that the fused positioning method runs in real time, is robust, effectively suppresses errors caused by the NLOS environment, and achieves a positioning accuracy of 0.26 m under NLOS conditions.
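The particle-filter fusion of the two measurement sources can be sketched as weighting each particle by the product of independent Gaussian likelihoods from the vision pose and the UWB position. The noise values and function names below are assumptions for illustration; the paper's adaptive weighting is not reproduced here.

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    """Density of a 1D Gaussian with mean mu and std dev sigma at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fuse_step(particles, vision_pos, uwb_pos, sigma_vision=0.1, sigma_uwb=0.3):
    """One fusion step: weight each particle (x, y) by the product of the
    vision and UWB likelihoods, then return the weighted-mean estimate.
    The tighter sensor (smaller sigma) naturally dominates the result."""
    weights = []
    for x, y in particles:
        w = (gauss_pdf(x, vision_pos[0], sigma_vision) *
             gauss_pdf(y, vision_pos[1], sigma_vision) *
             gauss_pdf(x, uwb_pos[0], sigma_uwb) *
             gauss_pdf(y, uwb_pos[1], sigma_uwb))
        weights.append(w)
    total = sum(weights) or 1.0
    est_x = sum(w * x for w, (x, _) in zip(weights, particles)) / total
    est_y = sum(w * y for w, (_, y) in zip(weights, particles)) / total
    return (est_x, est_y)
```

When a UWB range is flagged as NLOS-corrupted, its sigma can be inflated (or the term dropped), which is one simple way to realise adaptive weighting in this scheme.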