Abstract
We present SinGRAV, an attempt to learn a generative radiance volume from multi-view observations of a single natural scene, in stark contrast to existing category-level 3D generative models that learn from images of many object-centric scenes. Inspired by SinGAN, we also learn the internal distribution of the input scene, which necessitates our key designs with respect to the scene representation and network architecture. Unlike popular multi-layer perceptron (MLP) based architectures, we employ convolutional generators and discriminators, which inherently possess a spatial locality bias, to operate over voxelized volumes for learning the internal distribution over a plethora of overlapping regions. On the other hand, localizing the adversarial generators and discriminators over confined areas with limited receptive fields easily leads to highly implausible spatial arrangements of geometric structures. Our remedy is to use spatial inductive bias and joint discrimination on geometric clues in the form of 2D depth maps. This strategy is effective in improving the spatial arrangement while incurring negligible additional computational cost. Experimental results demonstrate the ability of SinGRAV to generate plausible and diverse variations from a single scene, the merits of SinGRAV over state-of-the-art generative neural scene models, and the versatility of SinGRAV through its use in a variety of applications. Code and data will be released to facilitate further research.
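The convolutional design described above can be sketched at a high level. The following is a minimal illustration, assuming PyTorch; the layer widths, kernel sizes, noise-volume shape, and the simplified orthographic expected-depth rendering are illustrative assumptions, not the paper's exact configuration. The key points it demonstrates are that fully convolutional 3D networks have limited receptive fields (so they model local, overlapping regions of the voxel grid) and that a 2D depth map can be composited from the density channel for joint discrimination.

```python
# Illustrative sketch only: architecture details are assumptions, not SinGRAV's exact setup.
import torch
import torch.nn as nn

class VoxelGenerator(nn.Module):
    """Fully convolutional 3D generator: noise volume -> radiance volume.

    Being fully convolutional gives it a limited receptive field, so it
    captures the distribution of local, overlapping regions of the scene.
    """
    def __init__(self, noise_ch=8, feat_ch=32, out_ch=4):  # out: RGB + density
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(noise_ch, feat_ch, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(feat_ch, out_ch, 3, padding=1),
        )

    def forward(self, z):
        return self.net(z)

class PatchDiscriminator3D(nn.Module):
    """3D patch discriminator: outputs one real/fake logit per local region."""
    def __init__(self, in_ch=4, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, feat_ch, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(feat_ch, 1, 3, padding=1),  # patch-wise logits
        )

    def forward(self, v):
        return self.net(v)

def expected_depth(density, z_vals):
    """Composite a 2D depth map from a density grid along the depth axis
    (a simplified orthographic version of volume rendering).

    density: (B, D, H, W) raw densities; z_vals: (D,) sample depths per ray.
    """
    alpha = 1.0 - torch.exp(-torch.relu(density))              # per-sample opacity
    trans = torch.cumprod(                                     # transmittance
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1),
        dim=1)[:, :-1]
    weights = alpha * trans                                    # (B, D, H, W)
    return (weights * z_vals.view(1, -1, 1, 1)).sum(dim=1)     # (B, H, W)

G = VoxelGenerator()
D = PatchDiscriminator3D()
z = torch.randn(1, 8, 16, 16, 16)                  # spatial noise volume
vol = G(z)                                         # (1, 4, 16, 16, 16)
scores = D(vol)                                    # per-region logits
depth = expected_depth(vol[:, 3], torch.linspace(0.0, 1.0, 16))
```

The depth map produced this way depends only on the density channel already being generated, which is why discriminating on it adds negligible computational cost.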
Authors
Yu-Jie Wang; Xue-Lin Chen; Bao-Quan Chen (School of Computer Science and Technology, Shandong University, Qingdao 266237, China; State Key Laboratory of General Artificial Intelligence, Beijing 100871, China; School of Intelligence Science and Technology, Peking University, Beijing 100871, China; Tencent AI Lab, Tencent Holdings Limited, Shenzhen 518057, China)
Funding
Supported by the International (Regional) Cooperation and Exchange Program of the National Natural Science Foundation of China under Grant No. 62161146002, and the Shenzhen Collaborative Innovation Program under Grant No. CJGJZD2021048092601003.