Abstract
We consider image transformation problems, where the objective is to translate images from a source domain to a target domain. The problem is challenging because it is difficult to preserve the key properties of the source images while making the details of the target images as distinguishable as possible. To address this problem, we propose informative coupled generative adversarial networks (ICoGAN). For each domain, an adversarial generator-and-discriminator network is constructed. We make an approximately shared latent-space assumption via a mutual information mechanism, which enables the algorithm to learn representations of both domains in an unsupervised setting and to transfer the key properties of images from the source to the target. Moreover, to further enhance performance, we combine a weight-sharing constraint between the two subnetworks with perceptual losses at different levels, extracted from the intermediate layers of the networks. With quantitative and visual results presented on the tasks of edge-to-photo transformation, face attribute transfer, and image inpainting, we demonstrate ICoGAN's effectiveness compared with other state-of-the-art algorithms.
Funding
This work was supported by the National Key R&D Program of China (2018YFB1600600);
the Natural Science Foundation of Liaoning Province (2019MS045);
the Open Fund of the Key Laboratory of Electronic Equipment Structure Design (Ministry of Education), Xidian University (EESD1901);
the Fundamental Research Funds for the Central Universities (DUT19JC44);
and the Project of the Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, Jilin University (93K172019K10).