Journal Articles
2 articles found
Deep unfolding multi-scale regularizer network for image denoising (Cited by: 1)
Authors: Jingzhao Xu, Mengke Yuan, Dong-Ming Yan, Tieru Wu. Computational Visual Media (SCIE, EI, CSCD), 2023, No. 2, pp. 335-350 (16 pages).
Existing deep unfolding methods unroll an optimization algorithm with a fixed number of steps, and utilize convolutional neural networks (CNNs) to learn data-driven priors. However, their performance is limited for two main reasons. Firstly, priors learned in deep feature space need to be converted to the image space at each iteration step, which limits the depth of CNNs and prevents CNNs from exploiting contextual information. Secondly, existing methods only learn deep priors at the single full-resolution scale, and so ignore the benefits of multi-scale context in dealing with high-level noise. To address these issues, we explicitly consider the image denoising process in the deep feature space and propose the deep unfolding multi-scale regularizer network (DUMRN) for image denoising. The core of DUMRN is the feature-based denoising module (FDM) that directly removes noise in the deep feature space. In each FDM, we construct a multi-scale regularizer block to learn deep prior information from multi-resolution features. We build the DUMRN by stacking a sequence of FDMs and train it in an end-to-end manner. Experimental results on synthetic and real-world benchmarks demonstrate that DUMRN performs favorably compared to state-of-the-art methods.
Keywords: image denoising, deep unfolding network, multi-scale regularizer, deep learning
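The unrolled structure the abstract describes (a fixed number of iterations, each combining a data-fidelity step with a multi-scale prior) can be sketched in plain numpy. This is a hypothetical illustration, not the authors' DUMRN: the learned feature-space regularizer blocks (FDMs) are replaced here by a fixed hand-crafted multi-scale smoothing prior, and the names `downsample`, `upsample`, `multi_scale_regularizer`, and `unfold_denoise` are chosen for illustration only.

```python
import numpy as np

def downsample(x):
    # 2x average pooling (assumes even height and width)
    return 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])

def upsample(x):
    # nearest-neighbour 2x upsampling
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def multi_scale_regularizer(f, scales=2, weight=0.5):
    # Stand-in for a learned multi-scale prior: blend the input with
    # coarser-scale versions of itself (a crude smoothness prior).
    # Blending weights sum to 1, so this is a convex combination.
    out = (1.0 - weight) * f
    pyr = f
    for s in range(1, scales + 1):
        pyr = downsample(pyr)          # go one scale coarser
        up = pyr
        for _ in range(s):             # bring it back to full resolution
            up = upsample(up)
        out += (weight / scales) * up
    return out

def unfold_denoise(y, steps=5, tau=0.5):
    # Unrolled iteration: gradient step on the data term 0.5*||x - y||^2,
    # followed by the (here fixed, in the paper learned) regularizer step.
    x = y.copy()
    for _ in range(steps):
        x = x - tau * (x - y)          # data-fidelity gradient step
        x = multi_scale_regularizer(x) # prior / denoising step
    return x
```

A learned version would replace `multi_scale_regularizer` with a trainable network and operate on deep feature maps rather than the image itself, as the abstract emphasizes.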
Towards harmonized regional style transfer and manipulation for facial images
Authors: Cong Wang, Fan Tang, Yong Zhang, Tieru Wu, Weiming Dong. Computational Visual Media (SCIE, EI, CSCD), 2023, No. 2, pp. 351-366 (16 pages).
Regional facial image synthesis conditioned on a semantic mask has attracted great attention in the field of computational visual media. However, the appearances of different regions may be inconsistent with each other after performing regional editing. In this paper, we focus on harmonized regional style transfer for facial images. A multi-scale encoder is proposed for accurate style code extraction. The key part of our work is a multi-region style attention module. It adapts multiple regional style embeddings from a reference image to a target image, to generate a harmonious result. We also propose style mapping networks for multi-modal style synthesis. We further employ an invertible flow model which can serve as a mapping network to fine-tune the style code by inverting the code to latent space. Experiments on three widely used face datasets were used to evaluate our model by transferring regional facial appearance between datasets. The results show that our model can reliably perform style transfer and multi-modal manipulation, generating output comparable to the state of the art.
Keywords: face manipulation, style transfer, generative models, facial harmonization
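The multi-region style attention module described in the abstract (target-region style codes attending over reference-region style codes) can be illustrated with ordinary scaled dot-product attention over per-region vectors. This is a sketch under assumptions: `multi_region_style_attention`, the code dimensions, and the use of plain dot-product attention are invented here for illustration; the paper's module operates on learned regional embeddings inside a generator.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_region_style_attention(target, reference):
    # target:    (n_regions, d) style codes extracted from the target image
    # reference: (n_regions, d) style codes extracted from the reference image
    # Each target-region code attends over all reference-region codes; the
    # output is an attention-weighted blend of reference styles per region,
    # which is what lets neighbouring regions stay mutually consistent.
    d = target.shape[-1]
    scores = target @ reference.T / np.sqrt(d)   # (n_regions, n_regions)
    weights = softmax(scores, axis=-1)           # rows sum to 1
    return weights @ reference                   # (n_regions, d)
```

Because each output row is a convex combination of reference rows, every adapted style code stays inside the range spanned by the reference styles, which is one plausible reading of "harmonized" transfer.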