Journal Articles
2 articles found
1. Recent advances in 3D Gaussian splatting
Authors: Tong Wu, Yu-Jie Yuan, Ling-Xiao Zhang, Jie Yang, Yan-Pei Cao, Ling-Qi Yan, Lin Gao. Computational Visual Media (SCIE, EI, CSCD), 2024, No. 4, pp. 613-642 (30 pages).
The emergence of 3D Gaussian splatting (3DGS) has greatly accelerated rendering in novel view synthesis. Unlike neural implicit representations such as neural radiance fields (NeRFs), which represent a 3D scene with position- and viewpoint-conditioned neural networks, 3D Gaussian splatting utilizes a set of Gaussian ellipsoids to model the scene, so that efficient rendering can be accomplished by rasterizing the Gaussian ellipsoids into images. Apart from fast rendering, the explicit representation of 3D Gaussian splatting also facilitates downstream tasks like dynamic reconstruction, geometry editing, and physical simulation. Considering the rapid changes and growing number of works in this field, we present a literature review of recent 3D Gaussian splatting methods, which can be roughly classified by functionality into 3D reconstruction, 3D editing, and other downstream applications. Traditional point-based rendering methods and the rendering formulation of 3D Gaussian splatting are also covered to aid understanding of this technique. This survey aims to help beginners get started quickly in this field and to provide experienced researchers with a comprehensive overview, stimulating future development of the 3D Gaussian splatting representation.
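For context, the rendering formulation the abstract refers to is, in its standard form from the 3DGS literature (not quoted from this paper's text), the front-to-back alpha blending of the N depth-sorted Gaussians covering a pixel:

C = \sum_{i=1}^{N} c_i \,\alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j)

where c_i is the view-dependent color of the i-th Gaussian and \alpha_i is its learned opacity weighted by the projected 2D Gaussian's density evaluated at the pixel.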
Keywords: 3D Gaussian splatting (3DGS); radiance field; novel view synthesis; 3D editing; scene generation
2. HDR-Net-Fusion: Real-time 3D dynamic scene reconstruction with a hierarchical deep reinforcement network (cited by 1)
Authors: Hao-Xuan Song, Jiahui Huang, Yan-Pei Cao, Tai-Jiang Mu. Computational Visual Media (EI, CSCD), 2021, No. 4, pp. 419-435 (17 pages).
Reconstructing dynamic scenes with commodity depth cameras has many applications in computer graphics, computer vision, and robotics. However, due to noise and erroneous observations from the capture devices and the inherently ill-posed nature of non-rigid registration with insufficient information, traditional approaches often produce low-quality geometry with holes, bumps, and misalignments. We propose a novel 3D dynamic reconstruction system, named HDR-Net-Fusion, which learns to simultaneously reconstruct and refine the geometry on the fly with a sparse embedded deformation graph of surfels, using a hierarchical deep reinforcement (HDR) network. The latter comprises two parts: a global HDR-Net, which rapidly detects local regions with large geometric errors, and a local HDR-Net, which serves as a local patch refinement operator to promptly complete and enhance such regions. Training the global HDR-Net is formulated as a novel reinforcement learning problem, implicitly learning the region selection strategy with the goal of improving the overall reconstruction quality. The applicability and efficiency of our approach are demonstrated on a large-scale dynamic reconstruction dataset; our method reconstructs geometry of higher quality than traditional methods.
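As a rough illustration of the two-stage detect-then-refine loop the abstract describes, here is a minimal, hypothetical Python sketch; the class and function names are invented for exposition and do not reflect the authors' actual implementation:

```python
# A minimal, hypothetical sketch of the detect-then-refine loop described in
# the abstract. Names are invented for illustration, not the authors' API.
import numpy as np


class GlobalHDRNet:
    """Stand-in for the global network that scores regions by geometric error."""

    def score_regions(self, surfels: np.ndarray) -> np.ndarray:
        # Placeholder heuristic: distance from the centroid as an "error" score.
        # The real network would regress per-region reconstruction error.
        return np.linalg.norm(surfels - surfels.mean(axis=0), axis=1)


class LocalHDRNet:
    """Stand-in for the local network that completes and smooths a patch."""

    def refine(self, patch: np.ndarray) -> np.ndarray:
        # Placeholder: pull points slightly toward the patch centroid.
        # The real network would predict refined positions and fill holes.
        return patch - 0.1 * (patch - patch.mean(axis=0))


def reconstruct_frame(surfels: np.ndarray, k: int = 4, radius: int = 8) -> np.ndarray:
    """One fusion step: globally detect the k worst regions, refine them locally."""
    global_net, local_net = GlobalHDRNet(), LocalHDRNet()
    errors = global_net.score_regions(surfels)
    worst = np.argsort(errors)[-k:]  # regions an RL-trained policy would select
    for idx in worst:
        lo, hi = max(0, int(idx) - radius), min(len(surfels), int(idx) + radius)
        surfels[lo:hi] = local_net.refine(surfels[lo:hi])
    return surfels


# Example usage on a random surfel cloud (positions only, for brevity):
refined = reconstruct_frame(np.random.rand(1024, 3).astype(np.float32))
```

In the paper's actual system, region selection is learned with reinforcement learning rather than the fixed top-k heuristic used here, and refinement operates on an embedded deformation graph rather than raw point slices.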
Keywords: dynamic 3D scene reconstruction; deep reinforcement learning; point cloud completion; deep neural networks