Funding: partially supported by projects VIGEC and 3DCLOUDPRO.
Abstract: We present a novel approach to automatically recover, from a small set of partially overlapping spherical images, an indoor structure representation in terms of a 3D floor plan registered with a set of 3D environment maps. We introduce several improvements over previous approaches based on color and spatial reasoning exploiting Manhattan world priors. In particular, we introduce a new method for geometric context extraction based on a 3D facet representation, which combines color distribution analysis of individual images with sparse multi-view clues. We also introduce an efficient method to combine the facets from different viewpoints into a single consistent model, taking into account the reliability of the facet information. The resulting capture and reconstruction pipeline automatically generates 3D multi-room environments in cases where most previous approaches fail, e.g., in the presence of hidden corners and large clutter, without the need for additional dense 3D data or tools. We demonstrate the effectiveness and performance of our approach on different real-world indoor scenes. Our test data is available to allow further studies and comparisons.
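The abstract does not detail how the per-viewpoint facets are merged, so the following is only a minimal illustrative sketch of reliability-weighted fusion, under the assumption that each facet receives a label (e.g., wall, floor, clutter) and a reliability score from every viewpoint that observes it. The names fuse_facet_labels, facet_id, and reliability are hypothetical and introduced here for illustration; they are not the authors' implementation.

    # Minimal sketch (Python): accumulate reliability-weighted votes per facet
    # and keep, for each facet, the label with the largest total weight.
    from collections import defaultdict

    def fuse_facet_labels(observations):
        """observations: iterable of (facet_id, label, reliability) triples,
        one per (viewpoint, facet) pair; reliability is assumed in [0, 1]."""
        scores = defaultdict(lambda: defaultdict(float))
        for facet_id, label, reliability in observations:
            scores[facet_id][label] += reliability  # weighted vote
        # pick the highest-weighted label for each facet
        return {fid: max(votes, key=votes.get) for fid, votes in scores.items()}

    # Usage: three viewpoints observe facet 7; the more reliable views dominate.
    obs = [(7, "wall", 0.9), (7, "clutter", 0.3), (7, "wall", 0.6)]
    print(fuse_facet_labels(obs))  # {7: 'wall'}

The design intent mirrored here is simply that unreliable observations (e.g., facets seen at grazing angles or covered by clutter) should contribute less to the consistent multi-view model than well-supported ones.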