Journal Articles
2 articles found
Implementation of Art Pictures Style Conversion with GAN
1
Authors: Xinlong Wu, Desheng Zheng, Kexin Zhang, Yanling Lai, Zhifeng Liu, Zhihong Zhang. Journal of Quantum Computing, 2021, No. 4, pp. 127-136 (10 pages)
Image conversion refers to converting an image from one style to another while ensuring that the content of the image remains unchanged. Using Generative Adversarial Networks (GANs) for image conversion can achieve good results. However, given enough samples, any image in the target domain can be mapped to the same set of inputs. On this basis, the Cycle-Consistent Generative Adversarial Network (CycleGAN) was developed. This article verifies and discusses the advantages and disadvantages of the CycleGAN model in image style conversion. CycleGAN uses two generator networks and two discriminator networks, with the aim of learning both the mapping and the inverse mapping between the source domain and the target domain. This constrains the space of possible mappings and improves the quality of the generated images. Through the idea of the cycle, the loss of information in image style conversion is reduced. When evaluating the experimental results, the degree to which the content of the input image is retained is judged. The experiments show that CycleGAN can capture the artist's overall style and successfully convert real landscape images. Its advantage is that most of the content of the original picture is retained, with only the texture and lines of the picture changed to resemble the artist's style.
Keywords: generative adversarial network; deep learning; image style conversion; convolutional neural network; adversarial learning
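The cycle idea described in the abstract can be sketched in a few lines. The following is a toy illustration, not the paper's implementation: the generators `G` and `F` here are hypothetical stand-in functions, and the loss follows the standard CycleGAN cycle-consistency term, which penalizes the L1 distance between an image and its round-trip reconstruction.

```python
# Toy sketch of CycleGAN's cycle-consistency idea. G maps source -> target,
# F maps target -> source; the loss penalizes |F(G(x)) - x| and |G(F(y)) - y|.

def l1(a, b):
    """Mean absolute difference between two flat 'images'."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """lam * (|F(G(x)) - x| + |G(F(y)) - y|), the cycle term of the objective."""
    forward = l1(F(G(x)), x)   # x -> target domain -> back to source
    backward = l1(G(F(y)), y)  # y -> source domain -> back to target
    return lam * (forward + backward)

# Hypothetical 'generators': a perfect pair of inverse mappings.
G = lambda img: [p + 1.0 for p in img]   # shift intensities "up"
F = lambda img: [p - 1.0 for p in img]   # the inverse shift
x = [0.1, 0.5, 0.9]   # toy source-domain image
y = [1.1, 1.5, 1.9]   # toy target-domain image

loss = cycle_consistency_loss(G, F, x, y)   # near zero for exact inverses
```

A degenerate generator that discards content (e.g. mapping every image to a constant) would drive this loss up, which is precisely how the cycle term discourages mappings that lose the input's content.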
Reference-guided structure-aware deep sketch colorization for cartoons (Cited by: 1)
2
Authors: Xueting Liu, Wenliang Wu, Chengze Li, Yifan Li, Huisi Wu. Computational Visual Media (SCIE, EI, CSCD), 2022, No. 1, pp. 135-148 (14 pages)
Digital cartoon production requires extensive manual labor to colorize sketches with visually pleasing color composition and color shading. During colorization, the artist usually takes an existing cartoon image as color guidance, particularly when colorizing related characters or an animation sequence. Reference-guided colorization is more intuitive than colorization with other hints, such as color points, scribbles, or text. Unfortunately, reference-guided colorization is challenging, since the style of the colorized image should match the style of the reference image in terms of both global color composition and local color shading. In this paper, we propose a novel learning-based framework which colorizes a sketch based on a color style feature extracted from a reference color image. Our framework contains a color style extractor to extract the color feature from a color image, a colorization network to generate multi-scale output images by combining a sketch and a color feature, and a multi-scale discriminator to improve the realism of the output image. Extensive qualitative and quantitative evaluations show that our method outperforms existing methods, providing both superior visual quality and style-reference consistency in the task of reference-based colorization.
Keywords: sketch colorization; image style editing; deep feature understanding; reference-based image colorization
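The three-part pipeline named in the abstract (style extractor, colorization network, multi-scale discriminator) can be sketched structurally. Everything below is hypothetical: the function names, the "feature" (a per-channel mean), and the scoring are stand-ins chosen only to show how data flows between the three components, not the authors' networks.

```python
# Structural sketch of a reference-guided colorization pipeline:
# reference image -> style feature -> colorized sketch -> multi-scale score.

def extract_style(reference):
    """Toy 'color style extractor': summarize a reference image as a
    feature vector (here, just the mean intensity of each channel)."""
    return [sum(ch) / len(ch) for ch in reference]

def colorize(sketch, style):
    """Toy 'colorization network': tint each sketch pixel by each style
    component, producing one output channel per component."""
    return [[p * s for p in sketch] for s in style]

def discriminate(image, scales=(1, 2)):
    """Toy 'multi-scale discriminator': score the image at several
    subsampling strides and average the scores."""
    scores = []
    for s in scales:
        pixels = [p for ch in image for p in ch[::s]]  # stride-s subsample
        scores.append(sum(pixels) / len(pixels))
    return sum(scores) / len(scores)

reference = [[0.9, 0.7], [0.2, 0.4], [0.1, 0.1]]  # 3-channel reference image
sketch = [0.0, 0.5, 1.0]                           # grayscale line drawing

style = extract_style(reference)   # compact color style feature
output = colorize(sketch, style)   # sketch conditioned on the style
score = discriminate(output)       # adversarial realism score
```

In the real framework each of these stand-ins would be a trained network, and the discriminator's multi-scale scores would feed an adversarial loss that pushes the colorization network toward realistic shading at both coarse and fine scales.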