Abstract
Generating garment images from sketches is widely used in the garment design process: it directly previews the design effect of a garment and saves considerable cost. Existing approaches typically render garment sketches graphically via conditional variables, or map images by learning attribute vectors shared between images and sketches; the garment patterns they generate are heavily constrained by the sketch, and the generated images lack depth information. To address these problems, this paper proposes a StyleGAN-based method for generating garment images from sketches. First, a VGG network extracts content features from several transformed versions of the sketch; this targeted extraction deepens the content representation and reduces the constraint that other feature-map information imposes on the generated image. Second, a feature-pyramid multi-scale feature-fusion method produces intermediate style vectors through a small mapping network, and a StyleGAN generator uses them to enrich the depth information of the image. Experimental results show that our method has clear advantages for sketch-to-garment image generation: it preserves the details of the sketch more completely and produces images of higher quality.
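The pipeline the abstract describes (multi-scale content features, pooled and fused, then mapped to intermediate style vectors for a StyleGAN generator) can be sketched minimally as follows. This is an illustrative NumPy sketch, not the paper's implementation: the channel counts, spatial sizes, single-layer mapping network, and the 18×512 W+ style layout are all assumptions borrowed from common StyleGAN conventions.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_avg_pool(fmap):
    # fmap has shape (C, H, W); pool over the spatial axes -> (C,)
    return fmap.mean(axis=(1, 2))

# Hypothetical multi-scale content features from a VGG-style encoder
# applied to a transformed sketch (shapes are illustrative only).
features = [
    rng.standard_normal((64, 64, 64)),   # shallow scale: fine sketch detail
    rng.standard_normal((128, 32, 32)),  # middle scale
    rng.standard_normal((256, 16, 16)),  # deep scale: semantic content
]

# Pyramid-style fusion: pool each scale and concatenate -> (448,)
fused = np.concatenate([global_avg_pool(f) for f in features])

# A small mapping network (reduced to one linear layer for brevity)
# producing 18 intermediate style vectors of width 512, the shape
# a StyleGAN generator consumes in its W+ space.
W = rng.standard_normal((18 * 512, fused.shape[0])) * 0.01
styles = (W @ fused).reshape(18, 512)

print(styles.shape)  # (18, 512)
```

In a real system the pooled features would pass through several nonlinear layers, and `styles` would condition each resolution block of the generator separately; the point here is only the data flow from multi-scale features to per-layer style vectors.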
Source
Computer Science and Application (《计算机科学与应用》), 2022, No. 10, pp. 2405-2415 (11 pages)