Abstract
To address the problem of image translation between face photos and face sketches, this paper builds a new network model by adding two loss functions to the objective function of the DualGAN. Parameter-optimization experiments were conducted to tune the proposed model and find its optimal parameters. Qualitative and quantitative comparison experiments show that, on face data, the proposed model achieves the best image translation performance among current GAN-based image translation models in terms of both sharpness and preservation of facial features; the stability of the related GAN models was also compared. Finally, an effect-analysis experiment clarifies the specific role of the two additional loss functions.
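The abstract does not specify the form of the two additional loss terms; purely as an illustrative sketch, the modified objective can be viewed as the DualGAN objective plus two weighted penalty terms, where the symbols below (L_1, L_2, lambda_1, lambda_2) are placeholders and not taken from the paper:

\[
\min_{G_A,G_B}\;\max_{D_A,D_B}\;
\mathcal{L}(G_A,G_B,D_A,D_B)
= \mathcal{L}_{\mathrm{DualGAN}}(G_A,G_B,D_A,D_B)
+ \lambda_1\,\mathcal{L}_{1}(G_A,G_B)
+ \lambda_2\,\mathcal{L}_{2}(G_A,G_B)
\]

Here \(G_A\) and \(G_B\) denote the photo-to-sketch and sketch-to-photo generators, \(D_A\) and \(D_B\) the corresponding discriminators, and \(\lambda_1\), \(\lambda_2\) weighting coefficients of the kind tuned in the parameter-optimization experiments.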
Authors
Wu Huaming; Liu Qianrui; Wang Yaohong (School of Mathematics, Tianjin University, Tianjin 300072, China)
Source
Journal of Tianjin University: Science and Technology (《天津大学学报(自然科学与工程技术版)》)
EI
CSCD
Peking University Core Journals (北大核心)
2019, No. 3, pp. 306-314 (9 pages)
Funding
Supported by the National Natural Science Foundation of China (No. 11601381)
Keywords
generative adversarial networks
face data
image translation
loss functions