Abstract
In recent years, more and more generative adversarial networks have appeared across the various fields of deep learning. The conditional generative adversarial network (Conditional Generative Adversarial Network, cGAN) was the first to introduce supervised learning into the unsupervised GAN, making it possible for adversarial networks to generate labeled data. A traditional GAN generates images through repeated convolution operations that model the dependency among different regions; cGAN, however, improves only the objective function of GAN and does not change its network structure. Consequently, cGAN inherits the problem that features far apart in the generated image are only weakly correlated, which leaves the details of the generated images unclear. To solve this problem, this paper introduces a self-attention mechanism into cGAN and proposes a new model named SA-cGAN. The model relates features at distant positions in the image to generate consistent objects or scenes, thereby improving the conditional GAN's ability to generate fine details. SA-cGAN is evaluated on the CelebA and MNIST handwritten-digit datasets and compared with several commonly used generative models such as DCGAN and cGAN. The results show that the proposed model makes some progress over these models in the field of image generation.
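The core operation described above, letting every spatial position in a feature map attend to every other position so that long-range features become correlated, can be sketched as a single self-attention layer. The function below is a minimal NumPy illustration of that idea; the names, shapes, projection matrices, and the residual weight `gamma` are assumptions for exposition, not the paper's actual SA-cGAN code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv, gamma=0.1):
    """Illustrative self-attention over a feature map (hypothetical shapes).

    x        : (C, H, W) feature map from some generator layer.
    Wq, Wk   : (C', C) query/key projections (C' < C for efficiency).
    Wv       : (C, C) value projection.
    gamma    : learned scalar weighting the attention output (assumed here).
    Returns a (C, H, W) map where each position is a weighted sum over
    ALL positions, added back to the input residually.
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)           # N = H*W spatial positions
    q = Wq @ flat                        # (C', N) queries
    k = Wk @ flat                        # (C', N) keys
    v = Wv @ flat                        # (C,  N) values
    attn = softmax(q.T @ k, axis=-1)     # (N, N): position i attends to every j
    out = v @ attn.T                     # (C, N): attention-weighted values
    return x + gamma * out.reshape(C, H, W)
```

Because the `(N, N)` attention matrix couples every pair of positions, a feature in one corner of the image can directly influence a distant corner in a single layer, which is exactly the long-range dependency that stacks of small convolutions approximate only slowly.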
Authors
YU Wen-jia; DING Shi-fei (School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, Jiangsu 221116, China)
Source
Computer Science (《计算机科学》)
CSCD-indexed; PKU Core Journal (北大核心)
2021, Issue 1, pp. 241-246 (6 pages)
Funding
National Natural Science Foundation of China (61672522, 61976216).