Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 61732015, 61932018, and 61472349, and the National Key Research and Development Program of China under Grant No. 2017YFB0202203.
Abstract: In multi-view stereo, unreliable matching in low-textured regions has a negative impact on the completeness of reconstructed models. Since the photometric consistency of low-textured regions is not discriminative under a local window, non-local information provided by the Markov Random Field (MRF) model can alleviate the matching ambiguity, but MRF optimization is limited in continuous label spaces and computationally expensive. Owing to their sampling and propagation strategy, PatchMatch multi-view stereo methods have advantages in optimizing this continuous labeling problem. In this paper, we propose a novel method to address this problem, namely Coarse-Hypotheses Guided Non-Local PAtchMatch Multi-View Stereo (CNLPA-MVS), which takes advantage of both MRF-based non-local methods and PatchMatch multi-view stereo and mutually compensates for their defects. First, we combine dynamic programming (DP) and sequential propagation along scanlines in parallel to perform CNLPA-MVS, thereby obtaining the optimal depth and normal hypotheses. Second, we introduce coarse inference within a universal window provided by winner-takes-all to eliminate the stripe artifacts caused by DP and improve completeness. Third, we add a local consistency strategy, based on the assumption that pixels of similar color share approximate hypothesis values, into CNLPA-MVS to further improve completeness. CNLPA-MVS was validated on public benchmarks and achieved state-of-the-art performance with high completeness.
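The sequential propagation along scanlines can be illustrated with a minimal 1D sketch (the cost function and pixel layout here are hypothetical toys, not the paper's photometric consistency measure or its parallel DP scheme): each pixel keeps whichever depth hypothesis scores lower, its own or the one propagated from its left neighbor.

```python
import numpy as np

def scanline_propagation(depths, cost):
    """One left-to-right PatchMatch propagation pass over a scanline.

    depths: current depth hypothesis per pixel
    cost(i, d): matching cost of assigning depth d to pixel i
    """
    depths = depths.copy()
    for i in range(1, len(depths)):
        # Propagate the left neighbor's hypothesis if it scores better.
        if cost(i, depths[i - 1]) < cost(i, depths[i]):
            depths[i] = depths[i - 1]
    return depths

# Toy example: the true depth is 2.0 everywhere; pixel 0 starts correct.
true_depth = 2.0
cost = lambda i, d: abs(d - true_depth)
init = np.array([2.0, 5.0, 7.0, 9.0])
result = scanline_propagation(init, cost)
print(result)  # the good hypothesis sweeps right across the scanline
```

A single good hypothesis spreads along the whole line in one pass, which is why PatchMatch-style propagation converges quickly in continuous label spaces.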
Abstract: PatchMatch-based multi-view stereo (MVS) methods estimate scene depth from multiple input images and have been applied to large-scale 3D scene reconstruction. However, because feature matching is unstable and photometric consistency alone is unreliable, existing methods show low accuracy and completeness of depth estimation in weakly textured regions. To address this problem, a quadtree-prior-assisted MVS method is proposed. First, local texture is obtained from image pixel values. Second, a coarse depth map is obtained with the adaptive checkerboard sampling based PatchMatch MVS method (ACMH); combined with structural information in weakly textured regions, quadtree segmentation is used to generate prior plane hypotheses. Third, the above information is fused into a newly designed multi-view matching cost function that guides weakly textured regions toward optimal depth hypotheses, improving the accuracy of stereo matching. Finally, comparative experiments against several existing traditional MVS methods are conducted on ETH3D, Tanks and Temples, and a Chinese Academy of Sciences ancient-architecture dataset. The results show that the proposed method performs better; in particular, on the ETH3D test set with an error threshold of 2 cm, its F1 score and completeness improve by 1.29 and 2.38 percentage points, respectively, over the state-of-the-art multi-scale planar prior assisted method (ACMMP).
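The quadtree segmentation step can be sketched as a recursive variance-based split (the variance threshold and minimum cell size below are illustrative, not the paper's actual texture criterion); large leaf cells then mark the weakly textured regions where a single prior plane hypothesis is plausible.

```python
import numpy as np

def quadtree_leaves(img, x, y, size, var_thresh, min_size=2):
    """Recursively split a square block until its intensity variance is low.

    Returns (x, y, size) leaf cells; large cells correspond to weakly
    textured regions that can host one prior plane hypothesis each.
    """
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_thresh:
        return [(x, y, size)]
    h = size // 2
    leaves = []
    for dx, dy in [(0, 0), (h, 0), (0, h), (h, h)]:
        leaves += quadtree_leaves(img, x + dx, y + dy, h, var_thresh, min_size)
    return leaves

# Toy image: flat (textureless) left half, noisy (textured) right half.
rng = np.random.default_rng(0)
img = np.zeros((8, 8))
img[:, 4:] = rng.uniform(0, 1, (8, 4))
leaves = quadtree_leaves(img, 0, 0, 8, var_thresh=1e-6)
print(len(leaves))  # few large cells on the left, many small ones on the right
```

The textureless half is covered by two 4x4 leaves while the textured half splits down to 2x2 cells, so the prior is only imposed where matching is ambiguous.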
Abstract: Existing deep multi-view stereo (MVS) methods introduce Transformers into cascade networks to achieve high-resolution depth estimation and thereby high-accuracy, high-completeness 3D reconstruction. However, Transformer-based methods are constrained by computational cost and cannot be extended to the finer stages. To this end, a novel cross-scale Transformer MVS network is proposed that processes feature representations at different stages without additional computation. An adaptive matching-aware Transformer (AMT) is introduced that applies different combinations of interactive attention at multiple scales. This combination strategy enables the network to capture intra-image context and strengthen inter-image feature relationships. In addition, a dual-feature guided aggregation (DFGA) is designed that embeds coarse global semantic information into the finer cost volume construction, further enhancing the perception of global and local features. Meanwhile, a feature metric loss is designed to measure feature deviation before and after the transformation, reducing the impact of feature mismatches on depth estimation. Experimental results show that on the DTU dataset the proposed network reaches 0.264 and 0.302 in the completeness and overall metrics, and its reconstruction averages on the two large scenes of Tanks and Temples reach 64.28 and 38.03.
Abstract: Learning-based multi-view stereo (MVS) algorithms have demonstrated great potential for depth estimation in recent years. However, they still struggle to estimate accurate depth in texture-less planar regions, which limits their reconstruction performance in man-made scenes. In this paper, we propose PlaneStereo, a new framework that utilizes planar priors to facilitate depth estimation. Our key intuition is that pixels inside a plane share the same set of plane parameters, which can be estimated collectively using information from the whole plane. Specifically, our method first segments planes in the reference image, and then fits 3D plane parameters for each segmented plane by solving a linear system using high-confidence depth predictions inside the plane. This allows us to recover the plane parameters accurately; they can then be converted to accurate depth values for each point in the plane, improving the depth prediction in low-textured local regions. This process is fully differentiable and can be integrated into existing learning-based MVS algorithms. Experiments show that our method consistently improves the performance of existing stereo matching and MVS algorithms on the DeMoN and ScanNet datasets, achieving state-of-the-art performance.
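The plane-fitting step has a simple closed form. Assuming a pinhole camera, a 3D plane makes inverse depth linear in normalized pixel coordinates, so the plane parameters can be recovered by least squares over high-confidence pixels only; the sketch below is a minimal stand-in for the paper's differentiable solver (function names and the confidence threshold are ours).

```python
import numpy as np

def fit_plane_depth(xs, ys, depths, conf, thresh=0.5):
    """Fit plane parameters from high-confidence depth predictions.

    Under a pinhole camera, a 3D plane satisfies 1/z = a*x + b*y + c in
    normalized pixel coordinates; we solve this linear system by least
    squares over confident pixels only.
    """
    m = conf > thresh
    A = np.stack([xs[m], ys[m], np.ones(m.sum())], axis=1)
    abc, *_ = np.linalg.lstsq(A, 1.0 / depths[m], rcond=None)
    return abc

def plane_depth(xs, ys, abc):
    """Convert fitted plane parameters back to a dense depth map."""
    return 1.0 / (abc[0] * xs + abc[1] * ys + abc[2])

# Toy plane 1/z = 0.1x + 0.2y + 1; corrupted low-confidence pixels are ignored.
xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
xs, ys = xs.ravel(), ys.ravel()
z = 1.0 / (0.1 * xs + 0.2 * ys + 1.0)
noisy = z + 0.5                            # wrong predictions...
conf = np.ones_like(z); conf[::2] = 0.0    # ...marked as low-confidence
depths = np.where(conf > 0.5, z, noisy)
refined = plane_depth(xs, ys, fit_plane_depth(xs, ys, depths, conf))
print(np.abs(refined - z).max())  # plane fit repairs the corrupted pixels
```

Because `lstsq` is differentiable, a fit like this can sit inside a network and let gradients flow from the refined depths back to the per-pixel predictions, which is the property the framework relies on.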
Funding: Partly supported by JSPS KAKENHI Grants JP15K16027, JP26700013, JP15H05918, and JP19H04138, JST CREST JP179423, and the Foundation for Nara Institute of Science and Technology.
Abstract: In this paper, we present a practical method for reconstructing the bidirectional reflectance distribution function (BRDF) from multiple images of a real object composed of a homogeneous material. The key idea is that the BRDF can be sampled after geometry estimation using multi-view stereo (MVS) techniques. Our contribution is the selection of reliable samples of lighting, surface normal, and viewing directions for robustness against MVS estimation errors. Our method is quantitatively evaluated using synthesized images, and its effectiveness is shown via real-world experiments.
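The reliable-sample selection can be approximated by a simple angular test (the threshold and the exact criterion are assumptions, not the paper's): configurations where the light or view direction grazes the estimated surface amplify MVS normal errors, so such samples are discarded.

```python
import numpy as np

def select_brdf_samples(normals, light_dirs, view_dirs, min_cos=0.2):
    """Keep only BRDF samples whose geometry is reliable.

    Grazing configurations (small n.l or n.v) amplify the effect of MVS
    normal-estimation errors, so we discard them. min_cos is a
    hypothetical threshold, not a value from the paper.
    """
    n_dot_l = np.einsum('ij,ij->i', normals, light_dirs)
    n_dot_v = np.einsum('ij,ij->i', normals, view_dirs)
    return (n_dot_l > min_cos) & (n_dot_v > min_cos)

# Three samples on an upward-facing surface patch; two are near-grazing.
n = np.array([[0.0, 0.0, 1.0]] * 3)
l = np.array([[0.0, 0.0, 1.0], [0.995, 0.0, 0.1], [0.0, 0.0, 1.0]])
v = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.995, 0.0, 0.1]])
keep = select_brdf_samples(n, l, v)
print(keep)  # only the well-conditioned first sample survives
```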
Funding: Supported by the National Natural Science Foundation of China (U21A20487), the Shenzhen Technology Project (JCYJ20180507182610734), and the CAS Key Technology Talent Program.
Abstract: Visual SLAM methods usually presuppose that the scene is static, so SLAM algorithms for mobile robots in dynamic scenes often suffer a significant decrease in accuracy due to the influence of dynamic objects. In this paper, feature points are divided into dynamic and static using semantic information and multi-view geometry information; static-region feature points are then added to the pose optimization, and static scene maps are established for dynamic scenes. Finally, experiments are conducted in dynamic scenes using the KITTI dataset, and the results show that the proposed algorithm achieves higher accuracy in highly dynamic scenes compared to the visual SLAM baseline.
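The static/dynamic split can be sketched as combining the two cues (the reprojection-error threshold and the function name are illustrative, not from the paper): a feature point is excluded from pose optimization if either the semantic label or the multi-view geometry check flags it as dynamic.

```python
import numpy as np

def split_static_features(points, semantic_dynamic, reproj_err, err_thresh=2.0):
    """Classify feature points as static or dynamic.

    A point is treated as dynamic if its semantic label says so OR its
    multi-view reprojection error is large (geometry check). Only static
    points enter pose optimization. err_thresh (pixels) is illustrative.
    """
    dynamic = semantic_dynamic | (reproj_err > err_thresh)
    return points[~dynamic], points[dynamic]

pts = np.arange(5)                                    # feature point IDs
sem = np.array([False, True, False, False, False])    # e.g. a detected car
err = np.array([0.5, 0.4, 5.0, 1.0, 0.3])             # reprojection error (px)
static, dynamic = split_static_features(pts, sem, err)
print(static, dynamic)
```

Note the OR combination: the geometry check catches moving objects the semantic segmenter misses (point 2 here), while the semantic cue removes points on known movable classes even when they are momentarily still (point 1).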
Funding: Supported by the Zhengzhou Collaborative Innovation Major Project under Grant No. 20XTZX06013 and the Henan Provincial Key Scientific Research Project of China under Grant No. 22A520042.
Abstract: Traditional neural radiance fields for rendering novel views require dense input images and per-scene optimization, which limits their practical applications. We propose a generalization method, SG-NeRF (Sparse-Input Generalized Neural Radiance Fields), that infers scenes from input images and performs high-quality rendering without per-scene optimization. First, we construct an improved multi-view stereo structure based on convolutional attention and a multi-level fusion mechanism to obtain the geometric and appearance features of the scene from the sparse input images; these features are then aggregated by multi-head attention as the input of the neural radiance fields. This strategy of using neural radiance fields to decode scene features, instead of mapping positions and orientations, enables cross-scene training and inference, allowing neural radiance fields to generalize to novel view synthesis on unseen scenes. We tested the generalization ability on the DTU dataset, and our PSNR (peak signal-to-noise ratio) improved by 3.14 over the baseline method under the same input conditions. In addition, if dense input views are available for a scene, the average PSNR can be further improved by 1.04 through brief refinement training, yielding higher-quality renderings.
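Aggregating per-view features by attention before the radiance-field decoder can be sketched with a single attention head (the paper uses multi-head attention with learned projections; the single head and mean-feature query here are simplifications of ours):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_view_features(feats, query):
    """Aggregate per-view features for one 3D sample point with a single
    scaled dot-product attention head.

    feats: (n_views, dim) features extracted from the source views
    query: (dim,) query vector for this sample point
    """
    scores = feats @ query / np.sqrt(feats.shape[1])
    weights = softmax(scores)          # per-view attention weights
    return weights @ feats             # blended feature fed to the decoder

rng = np.random.default_rng(1)
feats = rng.normal(size=(4, 8))        # features from 4 source views
query = feats.mean(axis=0)             # toy query: the mean view feature
agg = aggregate_view_features(feats, query)
print(agg.shape)
```

The decoder then conditions on `agg` rather than on raw positions and directions, which is what lets the radiance field transfer across scenes.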
Abstract: Objective: To address the unsatisfactory overall quality of multi-view stereo (MVS) reconstruction, this paper studies the feature extraction and cost-volume regularization modules of MVS 3D reconstruction and proposes an end-to-end deep learning architecture based on attention mechanisms. Method: First, deep features are extracted from the input source and reference images, with an attention layer added at each level of the feature extraction module to capture the long-range dependencies of the depth inference task. Then, the feature volume of the reference frustum is constructed via differentiable homography, and a cost volume is built. Finally, the cost volume is regularized with a multi-level U-Net architecture, and the final refined depth map is generated by regression combined with edge information from the reference image. Results: Tested on the DTU (Technical University of Denmark) dataset against several existing methods: relative to Colmap, Gipuma, and Tola, the overall metric improves by 8.5%, 13.1%, and 31.9%, and completeness by 20.7%, 41.6%, and 73.3%; relative to Camp, Furu, and SurfaceNet, the overall metric improves by 24.8%, 33%, and 29.8%, accuracy by 39.8%, 17.6%, and 1.3%, and completeness by 9.7%, 48.4%, and 58.3%; relative to PruMvsnet, the overall metric improves by 1.7% and accuracy by 5.8%; relative to Mvsnet, the overall metric improves by 1.5% and completeness by 7%. Conclusion: Results on the DTU dataset show that the proposed network architecture achieves the best current overall metric, with large gains in completeness and accuracy, yielding better 3D reconstruction quality.
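The differentiable homography used to build the reference frustum's feature volume is the standard plane-sweep warp. A minimal sketch for a fronto-parallel plane at a given depth follows (the camera intrinsics and pose below are toy values; the sign convention of the relative pose may differ between implementations):

```python
import numpy as np

def plane_sweep_homography(K_src, K_ref, R, t, depth):
    """Homography warping reference-view pixels to a source view via the
    fronto-parallel plane at the given depth (normal n = [0, 0, 1] in the
    reference frame). This is the differentiable warp used to build
    feature/cost volumes in plane-sweep MVS."""
    n = np.array([0.0, 0.0, 1.0])
    return K_src @ (R - np.outer(t, n) / depth) @ np.linalg.inv(K_ref)

K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])    # pure sideways translation
H = plane_sweep_homography(K, K, R, t, depth=2.0)
# The principal point shifts horizontally by f * tx / depth = 5 px
# (to x = 45 under this sign convention), as expected for a plane sweep.
p = H @ np.array([50.0, 50.0, 1.0])
print(p / p[2])
```

Sweeping `depth` over a set of hypotheses and stacking the warped source features per hypothesis yields exactly the cost volume that the U-Net then regularizes.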
Funding: Supported by the Theme-based Research Scheme, Research Grants Council of Hong Kong (No. T45-205/21-N).
Abstract: Novel view synthesis has recently attracted tremendous research attention for its applications in virtual reality and immersive telepresence. Rendering a locally immersive light field (LF) from arbitrary large-baseline RGB references is a challenging problem that lacks efficient solutions among existing novel view synthesis techniques. In this work, we aim at faithfully rendering local immersive novel views/LF images based on large-baseline LF captures and a single RGB image in the target view. To fully exploit the precious information in the source LF captures, we propose a novel occlusion-aware source sampler (OSS) module which efficiently transfers the pixels of source views to the target view's frustum in an occlusion-aware manner. An attention-based deep visual fusion module is proposed to fuse the revealed occluded background content with a preliminary LF into a final refined LF. The proposed source sampling and fusion mechanism not only provides information for occluded regions from varying observation angles, but also proves able to effectively enhance the visual rendering quality. Experimental results show that our proposed method renders high-quality LF images/novel views with sparse RGB references and outperforms state-of-the-art LF rendering and novel view synthesis methods.
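One simple way to realize occlusion-aware transfer of source pixels into the target frustum is a z-buffer test: when several source pixels land on the same target pixel, keep the one nearest the camera. The paper's OSS module is learned; this sketch (names and the 1D layout are ours) only illustrates the underlying geometric idea.

```python
import numpy as np

def zbuffer_transfer(target_ix, depths, colors, width):
    """Occlusion-aware pixel transfer along a 1D scanline.

    target_ix: target pixel index each source pixel projects to
    depths:    depth of each source pixel in the target camera frame
    colors:    color carried by each source pixel
    Pixels landing on an already-filled target location survive only if
    they are closer to the camera (classic z-buffer rule).
    """
    out_color = np.full(width, np.nan)   # NaN marks still-occluded pixels
    out_depth = np.full(width, np.inf)
    for ix, d, c in zip(target_ix, depths, colors):
        if d < out_depth[ix]:
            out_depth[ix] = d
            out_color[ix] = c
    return out_color

# Two source pixels project onto target pixel 1; the nearer one (d=2) wins.
cols = zbuffer_transfer([0, 1, 1], [1.0, 5.0, 2.0], [10.0, 20.0, 30.0], 3)
print(cols)  # target pixel 2 receives nothing and stays NaN
```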