Journal Articles
3 articles found
A detail preserving neural network model for Monte Carlo denoising (Cited by: 7)
Authors: Weiheng Lin, Beibei Wang, Lu Wang, Nicolas Holzschuch. Computational Visual Media, CSCD, 2020, No. 2, pp. 157-168 (12 pages)
Abstract: Monte Carlo based methods such as path tracing are widely used in movie production. To achieve low noise, they require many samples per pixel, resulting in long rendering time. To reduce the cost, one solution is Monte Carlo denoising, which renders the image with fewer samples per pixel (as few as 128) and then denoises the resulting image. Many Monte Carlo denoising methods rely on deep learning: they use convolutional neural networks to learn the relationship between noisy images and reference images, using auxiliary features such as position and normal together with image color as inputs. The network predicts kernels which are then applied to the noisy input. These methods show powerful denoising ability, but tend to lose geometric or lighting details and to blur sharp features during denoising. In this paper, we solve this issue by proposing a novel network structure, a new input feature (light transport covariance from path space), and an improved loss function. Our network separates feature buffers from the color buffer to enhance detail effects. The features are extracted separately and then integrated into a shallow kernel predictor. Our loss function considers perceptual loss, which also improves detail preservation. In addition, we use a light transport covariance feature in path space as one of the features, which helps to preserve illumination details. Our method denoises Monte Carlo path traced images while preserving details much better than previous methods.
Keywords: deep learning; light transport covariance; perceptual loss; Monte Carlo denoising
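The kernel-prediction step this abstract describes (a network outputs per-pixel filter kernels that are then applied to the noisy input) can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' code; the array shapes and the `apply_predicted_kernels` name are assumptions.

```python
import numpy as np

def apply_predicted_kernels(noisy, kernels):
    """Filter a noisy image with per-pixel kernels, as in
    kernel-predicting Monte Carlo denoisers.

    noisy:   (H, W, 3) noisy radiance image
    kernels: (H, W, k, k) per-pixel weights from the network,
             assumed already normalized (e.g. via a softmax)
    """
    H, W, _ = noisy.shape
    k = kernels.shape[-1]
    r = k // 2
    # Edge-replicate padding so border pixels have full neighborhoods.
    padded = np.pad(noisy, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.empty_like(noisy)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]   # (k, k, 3) neighborhood
            w = kernels[y, x][..., None]       # (k, k, 1) weights
            out[y, x] = (patch * w).sum(axis=(0, 1))
    return out
```

With normalized kernels this is a weighted average over each pixel's neighborhood, so a constant image passes through unchanged; a real denoiser would predict the weights from the color buffer plus auxiliary features (position, normal, and here, light transport covariance).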
AOGAN: A generative adversarial network for screen space ambient occlusion (Cited by: 2)
Authors: Lei Ren, Ying Song. Computational Visual Media, SCIE/EI/CSCD, 2022, No. 3, pp. 483-494 (12 pages)
Abstract: Ambient occlusion (AO) is a widely-used real-time rendering technique which estimates light intensity on visible scene surfaces. Recently, a number of learning-based AO approaches have been proposed, which bring a new angle to solving screen space shading via a unified learning framework with competitive quality and speed. However, most such methods have high error for complex scenes or tend to ignore details. We propose an end-to-end generative adversarial network for the production of realistic AO, and explore the importance of perceptual loss in the generative model to AO accuracy. An attention mechanism is also described to improve the accuracy of details, whose effectiveness is demonstrated on a wide variety of scenes.
Keywords: ambient occlusion (AO); attention mechanism; generative adversarial network (GAN); perceptual loss
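The attention mechanism mentioned in this abstract can be illustrated, very roughly, as a spatial gate that reweights a screen-space feature map by a per-pixel weight in (0, 1). This is a generic sketch, not the paper's architecture: real spatial attention would pass the pooled maps through a learned convolution, which is replaced here by a plain sum to keep the example dependency-free.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(features):
    """Gate an (H, W, C) feature map by per-pixel attention weights.

    Channel-wise average and max pooling produce two (H, W) maps;
    their (here unlearned) combination is squashed to (0, 1) and
    broadcast back over the channels.
    """
    avg_pool = features.mean(axis=-1)    # (H, W) channel-average
    max_pool = features.max(axis=-1)     # (H, W) channel-max
    gate = sigmoid(avg_pool + max_pool)  # per-pixel weight in (0, 1)
    return features * gate[..., None]    # reweight every channel
```

The gate lets the network emphasize image regions where fine AO detail matters (contact shadows, creases) and suppress flat regions.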
Unpaired image to image transformation via informative coupled generative adversarial networks
Authors: Hongwei GE, Yuxuan HAN, Wenjing KANG, Liang SUN. Frontiers of Computer Science, SCIE/EI/CSCD, 2021, No. 4, pp. 83-92 (10 pages)
Abstract: We consider image transformation problems, where the objective is to translate images from a source domain to a target one. The problem is challenging since it is difficult to preserve the key properties of the source images and to make the details of the target as distinguishable as possible. To solve this problem, we propose informative coupled generative adversarial networks (ICoGAN). For each domain, an adversarial generator-and-discriminator network is constructed. Basically, we make an approximately-shared latent space assumption via a mutual information mechanism, which enables the algorithm to learn representations of both domains in an unsupervised setting, and to transform the key properties of images from source to target. Moreover, to further enhance performance, a weight-sharing constraint between the two subnetworks and perceptual losses extracted at different levels from the intermediate layers of the networks are combined. With quantitative and visual results presented on the tasks of edge-to-photo transformation, face attribute transfer, and image inpainting, we demonstrate ICoGAN's effectiveness compared with other state-of-the-art algorithms.
Keywords: generative adversarial networks; image transformation; mutual information; perceptual loss
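The multi-level perceptual loss this abstract describes (feature distances taken at several intermediate layers and combined) can be sketched as a weighted sum of per-layer errors. The function name, the use of MSE as the per-layer distance, and the weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def multilevel_perceptual_loss(feats_a, feats_b, weights):
    """Weighted sum of per-layer MSE between two lists of feature
    maps, e.g. activations taken at several intermediate layers of
    a fixed feature extractor for a generated and a real image.
    """
    return sum(
        w * np.mean((fa - fb) ** 2)
        for w, fa, fb in zip(weights, feats_a, feats_b)
    )
```

Deeper layers capture semantics while shallow layers capture texture, so weighting several levels lets the loss penalize both kinds of mismatch; the loss is zero exactly when all compared feature maps agree.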