Abstract: Monte Carlo-based methods such as path tracing are widely used in movie production. To achieve low noise, they require many samples per pixel, resulting in long rendering times. To reduce this cost, one solution is Monte Carlo denoising, which renders the image with fewer samples per pixel (as few as 128) and then denoises the resulting image. Many Monte Carlo denoising methods rely on deep learning: they use convolutional neural networks to learn the relationship between noisy images and reference images, using auxiliary features such as position and normal together with image color as inputs. The network predicts kernels which are then applied to the noisy input. These methods show powerful denoising ability, but tend to lose geometric or lighting details and to blur sharp features during denoising. In this paper, we address this issue by proposing a novel network structure, a new input feature (light transport covariance from path space), and an improved loss function. Our network separates feature buffers from the color buffer to enhance detail effects. The features are extracted separately and then integrated into a shallow kernel predictor. Our loss function incorporates a perceptual loss, which further improves detail preservation. In addition, we use a light transport covariance feature in path space as one of the features, which helps to preserve illumination details. Our method denoises Monte Carlo path-traced images while preserving details much better than previous methods.
Funding: National Natural Science Foundation of China (No. 61602416), Shaoxing Science and Technology Bureau Key Project (No. 2020B41006), and the Opening Fund (No. 2020WLB10) of the Key Laboratory of Silk Culture Heritage and Product Design Digital Technology.
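The kernel-predicting step described in the abstract above can be sketched as follows: the network emits a k x k weight vector per pixel, which is normalised (here with a softmax) and used to average that pixel's neighbourhood in the noisy image. This is a minimal NumPy sketch, not the paper's implementation; the function name, the 3 x 3 kernel size, and the softmax normalisation are illustrative assumptions.

```python
import numpy as np

def apply_predicted_kernels(noisy, kernels):
    """Filter a noisy image with per-pixel predicted kernels.

    noisy   : (H, W, 3) noisy radiance image
    kernels : (H, W, k*k) raw kernel logits predicted per pixel
    """
    H, W, _ = noisy.shape
    k = int(np.sqrt(kernels.shape[-1]))
    r = k // 2
    # Softmax-normalise each pixel's kernel so its weights sum to 1.
    w = np.exp(kernels - kernels.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    # Edge-pad so border pixels still see a full k x k neighbourhood.
    padded = np.pad(noisy, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.zeros_like(noisy)
    # Accumulate each shifted copy of the image, weighted per pixel.
    for i, (dy, dx) in enumerate(np.ndindex(k, k)):
        out += w[..., i : i + 1] * padded[dy : dy + H, dx : dx + W]
    return out
```

With all-zero logits the kernel is a uniform box filter, so a constant image passes through unchanged, which is a quick sanity check on the normalisation.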
Abstract: Ambient occlusion (AO) is a widely-used real-time rendering technique which estimates light intensity on visible scene surfaces. Recently, a number of learning-based AO approaches have been proposed, which bring a new angle to solving screen-space shading via a unified learning framework with competitive quality and speed. However, most such methods have high error for complex scenes or tend to ignore details. We propose an end-to-end generative adversarial network for the production of realistic AO, and explore the importance of perceptual loss in the generative model to AO accuracy. An attention mechanism is also described to improve the accuracy of details, whose effectiveness is demonstrated on a wide variety of scenes.
Funding: National Key R&D Program of China (2018YFB1600600), the Natural Science Foundation of Liaoning Province (2019MS045), the Open Fund of the Key Laboratory of Electronic Equipment Structure Design (Ministry of Education) at Xidian University (EESD1901), the Fundamental Research Funds for the Central Universities (DUT19JC44), and the Project of the Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education at Jilin University (93K172019K10).
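The perceptual loss explored in the AO paper above compares images in a feature space rather than pixel by pixel. A minimal sketch, assuming arbitrary stand-in feature extractors in place of the pretrained CNN layers (e.g. VGG activations) used in practice; the two toy extractors below are illustrative only.

```python
import numpy as np

def perceptual_loss(pred, ref, feature_extractors):
    """Sum of mean squared distances between feature maps of pred and ref.

    feature_extractors is a list of callables mapping an image to a
    feature map; in practice these are intermediate layers of a
    pretrained CNN, but any callables work for this sketch.
    """
    total = 0.0
    for extract in feature_extractors:
        fp, fr = extract(pred), extract(ref)
        total += np.mean((fp - fr) ** 2)
    return total

# Toy stand-ins for CNN features: a 2x2 box blur as a "low-level" map
# and per-channel means as a "high-level" summary.
def low_level(img):
    return (img[:-1, :-1] + img[1:, :-1] + img[:-1, 1:] + img[1:, 1:]) / 4.0

def high_level(img):
    return img.mean(axis=(0, 1))
```

Matching feature statistics at several depths is what lets such a loss penalise missing detail and structure that a plain per-pixel loss averages away.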
Abstract: We consider image transformation problems, where the objective is to translate images from a source domain to a target one. The problem is challenging since it is difficult to preserve the key properties of the source images and to make the details of the target as distinguishable as possible. To solve this problem, we propose informative coupled generative adversarial networks (ICoGAN). For each domain, an adversarial generator-and-discriminator network is constructed. We make an approximately-shared latent space assumption via a mutual information mechanism, which enables the algorithm to learn representations of both domains in an unsupervised setting and to transfer the key properties of images from source to target. Moreover, to further enhance performance, we combine a weight-sharing constraint between the two subnetworks with perceptual losses extracted at different levels from the intermediate layers of the networks. With quantitative and visual results on edge-to-photo transformation, face attribute transfer, and image inpainting, we demonstrate ICoGAN's effectiveness compared with other state-of-the-art algorithms.
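The weight-sharing constraint between the two domain subnetworks can be illustrated with a toy pair of generators: the high-level layer is literally the same matrix for both domains, so one latent code maps to one shared representation, while separate output layers decode it per domain. All names, sizes, and the two-layer MLP form are illustrative assumptions; the actual ICoGAN generators are convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-level layer shared by both generators: both domains decode the
# same latent code through this one matrix (the shared-latent assumption).
W_shared = rng.normal(size=(64, 32)) * 0.1

# Domain-specific output layers decode the shared representation differently.
W_a = rng.normal(size=(32, 16)) * 0.1  # decodes into domain A
W_b = rng.normal(size=(32, 16)) * 0.1  # decodes into domain B

def generate(z, W_out):
    h = np.tanh(z @ W_shared)   # shared high-level representation
    return np.tanh(h @ W_out)   # domain-specific rendering

z = rng.normal(size=(4, 64))    # one batch of latent codes
sample_a = generate(z, W_a)     # samples in domain A
sample_b = generate(z, W_b)     # corresponding samples in domain B
```

Because both samples come from the same shared representation, training either domain's output updates the common layer, which is what couples the two adversarial subnetworks.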