
An attention-embedded GAN for SVBRDF recovery from a single image

Abstract: Learning-based approaches have made substantial progress in capturing spatially-varying bidirectional reflectance distribution functions (SVBRDFs) from a single image with unknown lighting and geometry. However, most existing networks only consider per-pixel losses, which limits their ability to recover local features such as smooth glossy regions. A few generative adversarial networks use multiple discriminators for different parameter maps, increasing network complexity. We present a novel end-to-end generative adversarial network (GAN) to recover appearance from a single picture of a nearly-flat surface lit by flash. We use a single unified adversarial framework for all parameter maps. An attention module guides the network to focus on details of the maps. Furthermore, an SVBRDF map loss is combined with the adversarial loss to prevent paying excess attention to specular highlights. We demonstrate and evaluate our method on both public datasets and real data. Quantitative analysis and visual comparisons indicate that our method achieves better results than the state-of-the-art in most cases.
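The combined objective described in the abstract, an adversarial term over the predicted parameter maps plus a per-pixel SVBRDF map loss that keeps the generator from over-attending to specular highlights, can be illustrated with a minimal PyTorch sketch. The function name, the loss weight, and the choice of L1 for the map term are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of a generator objective combining
# a single unified adversarial term with a per-pixel SVBRDF map loss.
import torch
import torch.nn.functional as F

def generator_loss(pred_maps, gt_maps, fake_logits, lambda_map=10.0):
    """pred_maps / gt_maps: dicts of tensors for the normal, diffuse,
    roughness, and specular maps; fake_logits: output of one unified
    discriminator applied to the predicted maps."""
    # Adversarial term: push the discriminator to rate predicted maps as real.
    adv = F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))
    # Per-pixel SVBRDF map loss summed over all parameter maps; this term
    # balances the adversarial signal so specular highlights do not dominate.
    map_loss = sum(F.l1_loss(pred_maps[k], gt_maps[k]) for k in gt_maps)
    # lambda_map is an assumed weighting hyperparameter.
    return adv + lambda_map * map_loss
```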
Source: Computational Visual Media (SCIE, EI, CSCD), 2023, No. 3, pp. 551-561 (11 pages).
Funding: Supported by the National Natural Science Foundation of China (No. 61602416) and the Shaoxing Science and Technology Plan Project (No. 2020B41006).