Funding: This work was supported in part by the Foundation of Guangdong Educational Committee (2014KTSCX191) and the National Natural Science Foundation of China (61201087).
Funding: The MKE (Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program (NIPA-2013-H0301-13-2006) supervised by the NIPA (National IT Industry Promotion Agency); the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (2013-029812); and the MKE, Korea, under the Human Resources Development Program for Convergence Robot Specialists support program supervised by the NIPA (NIPA-2013-H1502-13-1001).
Abstract: Obstacle detection is essential for mobile robots to avoid collisions with obstacles. Mobile robots usually operate in indoor environments, where they encounter various kinds of obstacles; however, a 2D range sensor can sense obstacles only in a 2D plane. In contrast, a 3D range sensor makes it possible to detect ground and aerial obstacles that a 2D range sensor cannot sense. In this paper, we present a 3D obstacle detection method that helps overcome the limitations of 2D range sensors with regard to obstacle detection. An indoor environment typically has a flat floor. The position of the floor can be determined by estimating its plane using the least squares method. Once the floor has been determined, obstacle points can be identified by rejecting the points belonging to the floor. In the experimental section, we show the results of this approach using a Kinect sensor.
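The floor-plane estimation and floor-point rejection described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the plane model z = ax + by + c, the 3 cm rejection threshold, and the crude height-based pre-selection of floor candidates are all assumptions.

```python
import numpy as np

def fit_floor_plane(points):
    """Least-squares fit of the plane z = a*x + b*y + c to candidate floor points."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return a, b, c

def reject_floor_points(points, plane, threshold=0.03):
    """Keep only points farther than `threshold` metres from the fitted floor plane."""
    a, b, c = plane
    # Distance from each point to the plane a*x + b*y - z + c = 0
    dist = np.abs(a * points[:, 0] + b * points[:, 1] - points[:, 2] + c)
    dist /= np.sqrt(a * a + b * b + 1.0)
    return points[dist > threshold]

# Usage: `cloud` stands for an (N, 3) point cloud from a 3D range sensor such as the
# Kinect, expressed in a frame whose z axis is roughly vertical (an assumption here).
cloud = np.random.rand(1000, 3) * [4.0, 4.0, 2.0]   # synthetic stand-in data
floor_candidates = cloud[cloud[:, 2] < 0.1]          # crude height-based pre-selection
obstacles = reject_floor_points(cloud, fit_floor_plane(floor_candidates))
```

The remaining points are treated as obstacles; in practice the candidate selection and threshold would be tuned to the sensor mounting height and noise.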
Funding: This work was supported in part by the National Natural Science Foundation of China under Grants U20A20225 and 61833013, and in part by the Shaanxi Provincial Key Research and Development Program under Grant 2022-GY111.
Abstract: This paper proposes an improved high-precision 3D semantic mapping method for indoor scenes using RGB-D images. Current semantic mapping algorithms suffer from low semantic annotation accuracy and insufficient real-time performance. To address these issues, we first adopt the Elastic Fusion algorithm to select key frames from indoor image sequences captured by a Kinect sensor and construct a spatial model of the indoor environment. Then, an indoor RGB-D image semantic segmentation network is proposed, which uses multi-scale feature fusion to quickly and accurately obtain pixel-level object labels for the spatial point cloud model. Finally, Bayesian updating is used to fuse semantic labels incrementally into the established spatial point cloud model. We also employ dense conditional random fields (CRF) to optimize the 3D semantic map, resulting in a high-precision spatial semantic map of indoor scenes. Experimental results show that the proposed semantic mapping system can process image sequences collected by RGB-D sensors in real time, output accurate semantic segmentation results for indoor scene images together with the current local spatial semantic map, and finally construct a globally consistent, high-precision 3D semantic map of indoor scenes.
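The incremental Bayesian label fusion step can be illustrated with a short sketch. This is an assumed, simplified formulation rather than the paper's exact update rule: each map point keeps a categorical distribution over classes, and every new key-frame observation from the segmentation network is multiplied in and renormalised.

```python
import numpy as np

def bayesian_label_update(prior, observation_probs):
    """Fuse one new per-class observation into a map point's label distribution.

    prior: (C,) current class probabilities for one map point.
    observation_probs: (C,) softmax output of the segmentation network for the
    pixel that the point projects to in the current key frame.
    """
    posterior = prior * observation_probs   # Bayes rule, assuming independent observations
    return posterior / posterior.sum()      # renormalise so probabilities sum to 1

# Usage: start from a uniform prior over C classes and fold in each new key frame.
num_classes = 13                                         # e.g. an NYUv2-style label set (assumed)
label_dist = np.full(num_classes, 1.0 / num_classes)
frame_outputs = np.random.dirichlet(np.ones(num_classes), size=5)  # stand-in for per-frame softmax outputs
for obs in frame_outputs:
    label_dist = bayesian_label_update(label_dist, obs)
predicted_label = int(np.argmax(label_dist))             # fused semantic label for this map point
```

In the full system this fused distribution would additionally be smoothed across neighbouring points by the dense CRF optimisation mentioned above.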
Funding: National Natural Science Foundation of China, Grant/Award Number: 61702320.
Abstract: Precise localisation and navigation are the two most important tasks for mobile robots. Visual simultaneous localisation and mapping (VSLAM) is useful in localisation systems for mobile robots. A wide-angle camera has a broad field of view and captures richer image information, so it is widely used on mobile robots, including legged robots. However, wide-angle cameras complicate the design of visual localisation systems compared with ordinary cameras, which places higher requirements on, and poses greater challenges for, VSLAM technologies based on wide-angle cameras. To resolve the distortion in wide-angle images and improve localisation accuracy, a sampling VSLAM based on a wide-angle camera model for legged mobile robots is proposed. Because the periodic motion of a legged robot is predictable, the method samples images periodically, selects image blocks with clear texture and enhances image details before extracting feature points. Feature points are first extracted from the selected blocks and then used to obtain the feature points of the whole image. Finally, points on the incident rays through the normalised plane are selected as template points; the relationship between the template points and the images is established through the wide-angle camera model, and the pixel coordinates of the template points in the images, together with their descriptors, are computed. Extensive experiments are conducted on the TUM datasets and with a quadruped robot. The experimental results show that the trajectory error and translation error of the proposed method are lower than those of the VINS-MONO, ORB-SLAM3 and Periodic SLAM systems.
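To make the role of the wide-angle camera model concrete, the sketch below projects a template point on the normalised plane to pixel coordinates using the equidistant fisheye model r = f·θ. This is only one common wide-angle model chosen for illustration; the specific camera model and intrinsics used in the paper are not given here, so the parameter values are assumptions.

```python
import numpy as np

def project_equidistant(point_cam, fx, fy, cx, cy):
    """Project a 3D point in the camera frame to pixel coordinates with the
    equidistant fisheye model r = f * theta (an assumed wide-angle model)."""
    x, y, z = point_cam
    theta = np.arctan2(np.hypot(x, y), z)   # angle of the incident ray from the optical axis
    phi = np.arctan2(y, x)                  # azimuth of the ray around the axis
    u = cx + fx * theta * np.cos(phi)
    v = cy + fy * theta * np.sin(phi)
    return u, v

# Usage with assumed intrinsics: a template point on the normalised plane (Z = 1).
fx = fy = 350.0                              # assumed focal lengths in pixels
cx, cy = 320.0, 240.0                        # assumed principal point for a 640x480 image
template_point = np.array([0.2, -0.1, 1.0])  # point on the normalised plane
u, v = project_equidistant(template_point, fx, fy, cx, cy)
```

Given such a projection, the pixel coordinates and descriptors of the template points can be computed in each sampled image, which is the relationship the method exploits.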