Abstract
3D model retrieval can benefit many downstream virtual reality applications. In this paper, we propose a new sketch-based 3D model retrieval framework by coupling local features and manifold ranking. On the technical front, we exploit spatial-pyramid-based local structures to facilitate the efficient construction of feature descriptors. Meanwhile, we propose an improved manifold ranking method, wherein the category relations between arbitrary model pairs are all taken into account. Since smooth and detail-preserving line drawings of a 3D model are important for sketch-based 3D model retrieval, the Difference of Gaussians (DoG) method is employed to extract line drawings from the projected depth images of the 3D model, and Bezier curves are then adopted to further optimize the extracted line drawings. On that basis, we develop a 3D model retrieval engine to verify our method. We have conducted extensive experiments on various public benchmarks and made comprehensive comparisons with state-of-the-art 3D retrieval methods. All the evaluation results based on widely-used indicators demonstrate the superiority of our method in accuracy, reliability, robustness, and versatility.
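As a rough illustration of the line-drawing extraction step mentioned in the abstract, the sketch below applies a Difference of Gaussians filter to a projected depth image and thresholds the response. This is a minimal, assumed implementation: the function name, the parameters sigma, k, and tau, and the simple binarization rule are illustrative choices, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_line_drawing(depth_image, sigma=1.0, k=1.6, tau=0.98):
    """Extract a binary line drawing from a projected depth image
    via a Difference of Gaussians (DoG) response.

    Note: parameter values and thresholding are illustrative
    assumptions, not the paper's exact settings.
    """
    img = depth_image.astype(np.float64)
    # Two Gaussian blurs at different scales
    g_fine = gaussian_filter(img, sigma)
    g_coarse = gaussian_filter(img, sigma * k)
    # DoG response; tau slightly attenuates the coarse scale
    dog = g_fine - tau * g_coarse
    # Negative responses roughly correspond to depth discontinuities (strokes)
    return (dog < 0.0).astype(np.uint8)
```

In the full pipeline described by the authors, the resulting strokes would then be smoothed with Bezier curves before feature extraction.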
Funding
The authors would like to thank Zhang Dongdong for his great help with the experiments. This work was supported by the National Natural Science Foundation of China (Grant No. 61602324), the Scientific Research Project of the Beijing Educational Committee (KM201710028018), the open funding project of the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (BUAA-VR-17KF-12), and the Beijing Advanced Innovation Center for Imaging Technology (BAICIT-2016004).