Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2023R1A2C1005950).
Abstract: The challenging task of handwriting style synthesis requires capturing the individuality and diversity of human handwriting. Most currently available methods use either a generative adversarial network (GAN) or a recurrent neural network (RNN) to generate new handwriting styles. As a result, these techniques frequently fall short of producing diverse and realistic text images, particularly for words that are not commonly used. To address this, this research proposes a novel deep learning model consisting of a style encoder and a text generator to synthesize different handwriting styles. The network excels at conditional text generation by extracting style vectors from a series of style images. The model performs well on a range of handwriting synthesis tasks, including the production of out-of-vocabulary text. It outperforms previous approaches, achieving lower values on key GAN evaluation metrics such as Geometric Score (GS) (3.21×10^(-5)) and Fréchet Inception Distance (FID) (8.75), as well as on text recognition metrics such as Character Error Rate (CER) and Word Error Rate (WER). A thorough component analysis revealed steady improvement in image generation quality, highlighting the importance of specific handwriting styles. Applicable fields include digital forensics, creative writing, and document security.
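For reference, the FID figure reported above is conventionally computed from the mean and covariance of Inception activations for real and generated images. Below is a minimal sketch of that computation; the feature arrays real_feats and fake_feats are hypothetical placeholders, not artifacts of the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """FID between two sets of Inception activations, each of shape (N, D)."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    # Matrix square root of the covariance product; discard tiny imaginary parts
    # introduced by numerical error.
    cov_sqrt = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(cov_sqrt):
        cov_sqrt = cov_sqrt.real
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * cov_sqrt))
```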
Funding: Supported by the Metaverse Lab Program funded by the Ministry of Science and ICT (MSIT), and the Korea Radio Promotion Association (RAPA).
Abstract: The objective of style transfer is to maintain the content of an image while transferring the style of another image. However, conventional methods face challenges in preserving facial features, especially in Korean portraits, where elements like the "Gat" (a traditional Korean hat) are prevalent. This paper proposes a deep learning network designed to perform style transfer that includes the "Gat" while preserving the identity of the face. Unlike traditional style transfer techniques, the proposed method preserves the texture, attire, and the "Gat" in the style image by employing image sharpening and facial landmarks together with a GAN. Color, texture, and intensity were extracted differently based on the characteristics of each block and layer of the pre-trained VGG-16, and only the elements necessary during training were preserved using a facial landmark mask. The head area was delineated using the eyebrow area so that the "Gat" could be transferred. Furthermore, the identity of the face was retained, and style correlation was considered based on the Gram matrix. To evaluate performance, we introduced a metric based on PSNR and SSIM, emphasizing median values through new weightings for style transfer in Korean portraits. Additionally, we conducted a survey evaluating the content, style, and naturalness of the transferred results; based on this assessment, we conclude that our method maintains the integrity of the content better than previous research. Our approach, enriched by landmark preservation and diverse loss functions, including those related to the "Gat", outperformed previous research in facial identity preservation.
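The Gram-matrix style correlation mentioned above is a standard construction in neural style transfer: channel-wise correlations of a VGG feature map. A minimal sketch follows; the normalization convention and tensor shapes are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-wise feature correlations for one VGG feature map.

    features: (batch, channels, height, width) activations.
    Returns a (batch, channels, channels) Gram matrix, normalized by the
    feature map size.
    """
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def style_loss(gen_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    """Mean squared error between Gram matrices of generated and style features."""
    return torch.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)
```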
Funding: Supported by the National Natural Science Foundation of China (62062048).
Abstract: In recent years, deep generative models have been successfully applied to artistic painting style transfer (APST). The difficulties lie in the loss of spatial details during reconstruction and the inefficient model convergence caused by the irreversible encoder-decoder methodology of existing models. To address this, this paper proposes a Flow-based architecture in which the encoder and decoder share a reversible network configuration. The proposed APST-Flow can efficiently reduce model uncertainty via a compact analysis-synthesis methodology, thereby improving generalization performance and convergence stability. For the generator, a Flow-based network using wavelet additive coupling (WAC) layers is implemented to extract multi-scale content features. In addition, a style checker is used to enhance global style consistency by minimizing the error between the reconstructed and input images. To enhance salient details in the generated images, an adaptive stroke-edge loss is applied in both global and local model training. Experimental results show that the proposed method improves PSNR by 5% and SSIM by 6.2%, and decreases Style Error by 29.4%, over existing models on the ChipPhi set. These competitive results verify that APST-Flow achieves high-quality generation with less content deviation and enhanced generalization, and can therefore be applied to further APST scenes.
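The reversible configuration described above rests on coupling layers, which are invertible by construction. The sketch below shows a generic additive coupling step; it omits the wavelet transform that the paper's WAC layers add, and the small transform network inside is a hypothetical stand-in.

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Generic additive coupling layer, invertible by construction.

    Splits the channels into two halves; one half is shifted by a function of
    the other, so the inverse is an exact subtraction. Assumes an even number
    of channels.
    """
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        # Hypothetical small transform network; any function of x1 works here.
        self.net = nn.Sequential(
            nn.Conv2d(half, half, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(half, half, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.net(x1)], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.net(y1)], dim=1)
```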
Abstract: Generative adversarial networks are often used for image-to-image translation tasks such as image colorization, semantic synthesis, and style transfer. However, training current image generation models typically relies on large paired datasets and can only achieve translation between two image domains. To address these problems, this paper proposes a fashion content and style transfer model based on a generative adversarial network (CS-GAN). The model uses a contrastive learning framework to maximize the mutual information between fashion items and generated images, ensuring that content transfer is achieved while the structure of the fashion item remains unchanged. A layer-consistent dynamic convolution method adaptively learns style features for images of different styles, enabling arbitrary style transfer of fashion items: content features (e.g., color, texture) and style features (e.g., Monet style, Cubism) of the input fashion item are fused, realizing translation across multiple image domains. Comparative experiments and analyses on public fashion datasets show that, compared with other mainstream methods, the proposed method improves image synthesis quality as measured by the Inception Score and FID metrics.
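Maximizing mutual information between fashion items and generated images is typically realized with an InfoNCE-style contrastive loss over patch features (as in CUT-like frameworks); whether CS-GAN uses this exact form is an assumption. A minimal sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query: torch.Tensor, positive: torch.Tensor,
                  negatives: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE over patch features.

    query:     (N, D) features from generated-image patches.
    positive:  (N, D) features from the corresponding input patches.
    negatives: (N, K, D) features from non-corresponding patches.
    """
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_logit = (query * positive).sum(-1, keepdim=True)                 # (N, 1)
    neg_logits = torch.bmm(negatives, query.unsqueeze(-1)).squeeze(-1)   # (N, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1) / temperature
    # The positive pair always sits at index 0 of the logits.
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)
```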
Funding: This work was funded by the China Postdoctoral Science Foundation (No. 2019M661319), the Heilongjiang Postdoctoral Scientific Research Developmental Foundation (No. LBH-Q17042), the Fundamental Research Funds for the Central Universities (3072020CFQ0602, 3072020CF0604, 3072020CFP0601), and the 2019 Industrial Internet Innovation and Development Engineering (KY1060020002, KY10600200008).
Abstract: The technology for image-to-image style transfer (a prevalent image processing task) has developed rapidly. The purpose of style transfer is to extract a texture from the source image domain and transfer it to the target image domain using a deep neural network. However, existing methods typically have a large computational cost. To achieve efficient style transfer, we introduce a novel Ghost module into the GANILLA architecture to produce more feature maps from cheap operations. We then utilize an attention mechanism to transform images with various styles. We optimize the original generative adversarial network (GAN) by using more efficient calculation methods for image-to-illustration translation. The experimental results show that the output of our proposed method is consistent with human visual perception while maintaining image quality. Moreover, our proposed method overcomes the high computational cost and resource consumption of style transfer. Comparisons on subjective and objective evaluation indicators show that our proposed method outperforms existing methods.
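The Ghost module referenced above (introduced in GhostNet) produces part of its output channels with a regular convolution and the rest with cheap depthwise operations. A minimal sketch, with layer sizes and the ratio chosen purely for illustration:

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Produce out_channels feature maps: a fraction from a regular 1x1
    convolution, the rest from a cheap depthwise convolution applied to
    that primary output."""
    def __init__(self, in_channels: int, out_channels: int, ratio: int = 2):
        super().__init__()
        primary = out_channels // ratio
        cheap = out_channels - primary
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, primary, kernel_size=1, bias=False),
            nn.BatchNorm2d(primary),
            nn.ReLU(inplace=True),
        )
        # Depthwise convolution generating the "ghost" feature maps.
        self.cheap_op = nn.Sequential(
            nn.Conv2d(primary, cheap, kernel_size=3, padding=1,
                      groups=primary, bias=False),
            nn.BatchNorm2d(cheap),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary_conv(x)
        return torch.cat([y, self.cheap_op(y)], dim=1)
```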