
Towards harmonized regional style transfer and manipulation for facial images (Cited by: 1)

Abstract: Regional facial image synthesis conditioned on a semantic mask has attracted considerable attention in the field of computational visual media. However, the appearances of different regions may become inconsistent with one another after regional editing. In this paper, we focus on harmonized regional style transfer for facial images. A multi-scale encoder is proposed for accurate style code extraction. The key part of our work is a multi-region style attention module, which adapts multiple regional style embeddings from a reference image to a target image to generate a harmonious result. We also propose style mapping networks for multi-modal style synthesis, and further employ an invertible flow model that can serve as a mapping network to fine-tune the style code by inverting it to the latent space. We evaluated our model by transferring regional facial appearance between three widely used face datasets. The results show that our model reliably performs style transfer and multi-modal manipulation, generating output comparable to the state of the art.
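The abstract's central component is the multi-region style attention module, which lets each target region attend over all regional style codes of the reference so that transferred styles remain mutually consistent. The PyTorch sketch below is only a minimal illustration of that idea under assumed shapes (one style code per semantic region); the class, layer names, and dimensions are hypothetical and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of a multi-region style attention block: regional
# style codes of the target attend over those of the reference, and the
# blended result is added back for cross-region consistency.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiRegionStyleAttention(nn.Module):
    def __init__(self, style_dim: int = 512, num_regions: int = 19):
        super().__init__()
        # Illustrative projections: target codes -> queries,
        # reference codes -> keys and values.
        self.to_q = nn.Linear(style_dim, style_dim)
        self.to_k = nn.Linear(style_dim, style_dim)
        self.to_v = nn.Linear(style_dim, style_dim)
        self.scale = style_dim ** -0.5

    def forward(self, target_codes: torch.Tensor,
                reference_codes: torch.Tensor) -> torch.Tensor:
        # Both inputs: (B, num_regions, style_dim), one code per
        # semantic region (skin, hair, eyes, ...).
        q = self.to_q(target_codes)
        k = self.to_k(reference_codes)
        v = self.to_v(reference_codes)
        # Each target region attends over all reference regions.
        attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        harmonized = attn @ v                   # (B, num_regions, style_dim)
        # Residual connection keeps the target's own regional identity.
        return target_codes + harmonized

if __name__ == "__main__":
    block = MultiRegionStyleAttention()
    tgt = torch.randn(1, 19, 512)   # target regional style codes
    ref = torch.randn(1, 19, 512)   # reference regional style codes
    print(block(tgt, ref).shape)    # torch.Size([1, 19, 512])
```

The harmonized regional codes would then condition the generator; the residual connection reflects the paper's goal of adapting reference styles to the target rather than replacing the target's regional appearance outright.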
Source: Computational Visual Media (SCIE, EI, CSCD), 2023, No. 2, pp. 351-366 (16 pages).
Funding: Partly supported by the National Key R&D Program of China (No. 2020YFA0714100) and the National Natural Science Foundation of China (Nos. 61872162, 62102162, 61832016, U20B2070).