Funding: This work was supported in part by the National Natural Science Foundation of China under Grants U20A20225 and 61833013, and in part by the Shaanxi Provincial Key Research and Development Program under Grant 2022-GY111.
Abstract: This paper proposes an improved high-precision 3D semantic mapping method for indoor scenes using RGB-D images. Current semantic mapping algorithms suffer from low semantic annotation accuracy and insufficient real-time performance. To address these issues, we first adopt the ElasticFusion algorithm to select key frames from indoor environment image sequences captured by a Kinect sensor and construct a spatial model of the indoor environment. Then, an indoor RGB-D image semantic segmentation network is proposed, which uses multi-scale feature fusion to quickly and accurately obtain pixel-level object label information for the spatial point cloud model. Finally, Bayesian updating is used to perform incremental semantic label fusion on the established spatial point cloud model, and a dense conditional random field (CRF) is employed to optimize the 3D semantic map, resulting in a high-precision spatial semantic map of indoor scenes. Experimental results show that the proposed semantic mapping system can process image sequences collected by RGB-D sensors in real time, output accurate semantic segmentation results of indoor scene images together with the current local spatial semantic map, and ultimately construct a globally consistent, high-precision 3D semantic map of indoor scenes.
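The incremental Bayesian label fusion step described in the abstract can be sketched as follows. This is a minimal per-point illustration, not the paper's implementation: each map point keeps a categorical distribution over semantic classes, and every new per-pixel prediction from the segmentation network multiplies into that distribution, followed by renormalization. The two-class example and the probability values are hypothetical.

```python
import numpy as np

def fuse_labels(prior, observation):
    """Recursive Bayesian label fusion for a single map point.

    prior       -- current per-class probability vector, P(l | z_1..t-1)
    observation -- per-class likelihood from the segmentation network
                   for the newest frame, proportional to P(z_t | l)
    Returns the normalized posterior P(l | z_1..t).
    """
    posterior = prior * observation
    return posterior / posterior.sum()

# Hypothetical example with two classes ("chair", "table"):
# start from a uniform prior and fuse two network observations
# that both favor the first class.
p = np.array([0.5, 0.5])
for obs in [np.array([0.7, 0.3]), np.array([0.8, 0.2])]:
    p = fuse_labels(p, obs)
# After fusion, the point's belief concentrates on the first class.
```

Because the update is a per-class product followed by normalization, it is order-independent and cheap enough to run per frame, which is what makes the fusion incremental; the dense CRF then smooths these per-point posteriors over the map.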
Abstract: Efficient perception of the real world is a long-standing goal of computer vision. Modern visual computing techniques have succeeded in attaching semantic labels to thousands of everyday objects and in reconstructing dense depth maps of complex scenes. However, simultaneous semantic and spatial joint perception, so-called dense 3D semantic mapping (estimating the 3D geometry of a scene and attaching semantic labels to that geometry), remains a challenging problem that, if solved, would make structured vision understanding and editing far more widely accessible. Concurrently, progress in computer vision and machine learning has motivated the pursuit of understanding and digitally reconstructing the surrounding world. Neural metric-semantic understanding is a new and rapidly emerging field that combines differentiable machine learning techniques with physical knowledge from computer vision, e.g., the integration of visual-inertial simultaneous localization and mapping (SLAM), mesh reconstruction, and semantic understanding. In this paper, we summarize the recent trends and applications of neural metric-semantic understanding. Starting with an overview of the underlying computer vision and machine learning concepts, we discuss critical aspects of such perception approaches, with emphasis on fully leveraging joint semantic and 3D information. We then present important applications of this perception capability, such as novel view synthesis and semantic augmented reality (AR) content manipulation. Finally, we conclude with a discussion of the technical implications of the technology under a 5G edge computing scenario.