Journal Articles
446 articles found
Robust Information Hiding Based on Neural Style Transfer with Artificial Intelligence
1
Authors: Xiong Zhang, Minqing Zhang, Xu An Wang, Wen Jiang, Chao Jiang, Pan Yang. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 1925-1938 (14 pages)
This paper proposes an artificial intelligence-based robust information hiding algorithm to address the susceptibility of confidential information to noise attacks during transmission. The algorithm is designed to mitigate the impact of various noise attacks on the integrity of secret information in transit. The method encodes secret images into stylized encrypted images and applies adversarial transfer to both the style and content features of the original and embedded data. This effectively enhances the concealment and imperceptibility of confidential information, improving its security during transmission and reducing security risks. Furthermore, a specialized attack layer is designed to simulate real-world attacks and common noise scenarios encountered in practical environments. Through adversarial training, the algorithm's resilience against attacks and overall robustness are strengthened, ensuring better protection against potential threats. Experimental results demonstrate that the proposed algorithm enhances the concealment and unknowability of secret information while maintaining embedding capacity and preserving the quality and fidelity of the stego image. The method not only improves the security and robustness of information hiding technology but also has practical value in protecting sensitive data and keeping confidential information invisible.
Keywords: information hiding, neural style transfer, robustness
PP-GAN: Style Transfer from Korean Portraits to ID Photos Using Landmark Extractor with GAN
2
Authors: Jongwook Si, Sungyoung Kim. Computers, Materials & Continua (SCIE, EI), 2023, No. 12, pp. 3119-3138 (20 pages)
The objective of style transfer is to maintain the content of an image while transferring the style of another image. However, conventional methods struggle to preserve facial features, especially in Korean portraits, where elements like the "Gat" (a traditional Korean hat) are prevalent. This paper proposes a deep learning network that performs style transfer including the "Gat" while preserving the identity of the face. Unlike traditional style transfer techniques, the proposed method preserves the texture, attire, and the "Gat" in the style image by employing image sharpening and facial landmarks with a GAN. Color, texture, and intensity are extracted differently based on the characteristics of each block and layer of a pre-trained VGG-16, and only the elements needed during training are preserved using a facial landmark mask. The head area is represented via the eyebrow area to transfer the "Gat". Furthermore, the identity of the face is retained, and style correlation is measured using the Gram matrix. To evaluate performance, we introduce a metric based on PSNR and SSIM that emphasizes median values through new weightings for style transfer in Korean portraits. Additionally, a survey evaluating the content, style, and naturalness of the transferred results indicates that our method maintains the integrity of content better than previous work. Our approach, enriched by landmark preservation and diverse loss functions, including those related to the "Gat", outperforms previous research in facial identity preservation.
Keywords: style transfer, style synthesis, generative adversarial network (GAN), landmark extractor, ID photos, Korean portrait
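The abstract above measures style correlation with the Gram matrix, the standard device in neural style transfer. A minimal numpy sketch of that computation (illustrative only, not the paper's code; the feature maps are assumed to come from a network such as the pre-trained VGG-16 it mentions):

```python
import numpy as np

def gram_matrix(features):
    """Channel-wise correlation of a (C, H, W) feature map.

    Returns the (C, C) matrix of inner products between flattened
    channels, normalized by the number of spatial positions. Two
    images with similar Gram matrices share texture statistics.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(gen_feats, style_feats):
    """Mean squared difference between the two Gram matrices."""
    g1, g2 = gram_matrix(gen_feats), gram_matrix(style_feats)
    return float(np.mean((g1 - g2) ** 2))
```

The loss is zero exactly when the generated features already reproduce the style image's channel correlations, which is why it drives texture rather than layout.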
APST-Flow: A Reversible Network-Based Artistic Painting Style Transfer Method
3
Authors: Meng Wang, Yixuan Shao, Haipeng Liu. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5229-5254 (26 pages)
In recent years, deep generative models have been successfully applied to artistic painting style transfer (APST). The difficulties lie in the loss of spatial detail during reconstruction and the slow convergence caused by the irreversible encoder-decoder methodology of existing models. To address this, this paper proposes a Flow-based architecture in which the encoder and decoder share a reversible network configuration. The proposed APST-Flow efficiently reduces model uncertainty via a compact analysis-synthesis methodology, improving generalization performance and convergence stability. For the generator, a Flow-based network using wavelet additive coupling (WAC) layers extracts multi-scale content features. A style checker also enhances global style consistency by minimizing the error between the reconstructed and input images. To enhance salient details in the generated output, an adaptive stroke-edge loss is applied in both global and local model training. Experimental results show that the proposed method improves PSNR by 5% and SSIM by 6.2%, and decreases style error by 29.4%, over existing models on the ChipPhi set. These competitive results verify that APST-Flow achieves high-quality generation with less content deviation and enhanced generalization, so it can be applied to more APST scenes.
Keywords: artistic painting style transfer, reversible network, generative adversarial network, wavelet transform
Data Augmentation Technology Driven by Image Style Transfer in Self-Driving Cars Based on End-to-End Learning (Cited: 3)
4
Authors: Dongjie Liu, Jin Zhao, Axin Xi, Chao Wang, Xinnian Huang, Kuncheng Lai, Chang Liu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2020, No. 2, pp. 593-617 (25 pages)
With the advent of deep learning, self-driving schemes based on deep learning are becoming more and more popular. Robust perception-action models should learn from data covering different scenarios and real behaviors, while current end-to-end model learning is generally limited to training on massive data, innovation in deep network architectures, and learning an in-situ model in a simulation environment. We therefore introduce a new image style transfer method for data augmentation that improves the diversity of limited data by changing the texture, contrast ratio, and color of images, extending the data to scenarios the model has never observed. Inspired by fast style transfer and neural algorithms of artistic style, we propose an arbitrary style generation architecture comprising a style transfer network, a style learning network, a style loss network, and a multivariate Gaussian distribution. A style embedding vector is randomly sampled from the multivariate Gaussian distribution and linearly interpolated with the embedding vector predicted from the input image by the style learning network; this provides a set of normalization constants for the style transfer network and finally realizes diversity of image style. To verify the effectiveness of the method, image classification and simulation experiments were performed separately. Finally, we built a small smart-car experimental platform and, for the first time, applied data augmentation based on image style transfer to an automatic driving experiment. The results show that: (1) the proposed scheme improves the prediction accuracy of the end-to-end model and reduces the model's error accumulation; (2) the method based on image style transfer provides a new scheme for data augmentation and a solution to the high cost of deep models that rely heavily on large amounts of labeled data.
Keywords: deep learning, self-driving, end-to-end learning, style transfer, data augmentation
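The augmentation step this abstract describes, sampling a style embedding from a multivariate Gaussian and linearly interpolating it with the embedding predicted from the input image, can be sketched as follows. This is a hedged reconstruction from the abstract alone; the function and parameter names are illustrative, not the paper's:

```python
import numpy as np

def mixed_style_embedding(predicted, mean, cov, alpha, rng=None):
    """Blend a predicted style embedding with a randomly drawn one.

    predicted: (D,) embedding produced by the style learning network
    mean, cov: parameters of the multivariate Gaussian style prior
    alpha:     interpolation weight in [0, 1]; 0 keeps the predicted
               style, 1 uses a fully random style.
    The blended vector would then supply normalization constants to
    the style transfer network, as the abstract outlines.
    """
    rng = rng or np.random.default_rng()
    sampled = rng.multivariate_normal(mean, cov)
    return (1.0 - alpha) * predicted + alpha * sampled
```

Varying `alpha` over a batch yields a continuum of styles between the image's own style and random ones, which is how the diversity of augmented images is obtained.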
Mesh generation and optimization from digital rock fractures based on neural style transfer
5
Authors: Mengsu Hu, Jonny Rutqvist, Carl I. Steefel. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2021, No. 4, pp. 912-919 (8 pages)
The complex geometric features of subsurface fractures at different scales make mesh generation challenging and/or expensive. In this paper, we make use of neural style transfer (NST), a machine learning technique, to generate meshes from rock fracture images. In this new approach, we use digital rock fractures at multiple scales to represent 'content' and define uniformly shaped and sized triangles to represent 'style'. A 19-layer convolutional neural network (CNN) learns the content from the rock image, including lower-level features (such as edges and corners) and higher-level features (such as rock, fractures, or other mineral fillings), and learns the style from the triangular grids. By optimizing a cost function that approximates both the content and the style, numerical meshes can be generated and optimized. We use NST to generate meshes for rough fractures with asperities formed in rock, a network of fractures embedded in rock, and a sand aggregate with multiple grains. Based on these examples, we show that this NST technique can make mesh generation and optimization much more efficient by achieving a good balance between mesh density and the representation of geometric features. Finally, we discuss future applications of this approach and perspectives on applying machine learning to bridge the gaps between numerical modeling and experiments.
Keywords: convolutional neural network (CNN), neural style transfer (NST), digital rock, discrete fractures, discontinuum asperities, grain aggregates, mesh generation and optimization
Image-to-Image Style Transfer Based on the Ghost Module
6
Authors: Yan Jiang, Xinrui Jia, Liguo Zhang, Ye Yuan, Lei Chen, Guisheng Yin. Computers, Materials & Continua (SCIE, EI), 2021, No. 9, pp. 4051-4067 (17 pages)
The technology for image-to-image style transfer (a prevalent image processing task) has developed rapidly. The purpose of style transfer is to extract a texture from the source image domain and transfer it to the target image domain using a deep neural network. However, existing methods typically have a large computational cost. To achieve efficient style transfer, we introduce a novel Ghost module into the GANILLA architecture to produce more feature maps from cheap operations, and we utilize an attention mechanism to transform images with various styles. We optimize the original generative adversarial network (GAN) with more efficient calculation methods for image-to-illustration translation. The experimental results show that our proposed method agrees well with human vision while maintaining image quality, and it overcomes the high computational cost and resource consumption of style transfer. Comparisons on subjective and objective evaluation indicators show that our proposed method outperforms existing methods.
Keywords: style transfer, generative adversarial networks, ghost module, attention mechanism, human visual habits
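The Ghost module named above produces extra "ghost" feature maps from cheap operations on a smaller set of expensively computed primary maps. A deliberately simplified numpy sketch of the idea (the real module uses an ordinary convolution followed by cheap depthwise convolutions; here a 1x1 convolution and per-channel scaling stand in, so this is an assumption-laden illustration, not the published layer):

```python
import numpy as np

def ghost_module(x, w_primary, cheap_scale):
    """Simplified Ghost module on a (C_in, H, W) feature map.

    w_primary:   (C_prim, C_in) weights of a 1x1 convolution that
                 produces the expensive 'primary' maps.
    cheap_scale: (C_prim,) per-channel weights of the cheap linear
                 operation deriving one 'ghost' map from each
                 primary map (a stand-in for the depthwise conv in
                 the original Ghost module).
    Returns (2 * C_prim, H, W): primary maps plus ghost maps, i.e.
    twice the channels for roughly half the expensive compute.
    """
    primary = np.einsum('oc,chw->ohw', w_primary, x)  # expensive part
    ghost = cheap_scale[:, None, None] * primary      # cheap part
    return np.concatenate([primary, ghost], axis=0)
```

The efficiency claim in the abstract rests on this split: only `C_prim` output channels pay for a full convolution, while the rest are derived almost for free.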
Enhancing the Robustness of Visual Object Tracking via Style Transfer
7
Authors: Abdollah Amirkhani, Amir Hossein Barshooi, Amir Ebrahimi. Computers, Materials & Continua (SCIE, EI), 2022, No. 1, pp. 981-997 (17 pages)
The performance and accuracy of computer vision systems are affected by noise in different forms. Although numerous solutions and algorithms have been presented for dealing with every type of noise, a comprehensive technique that can cover all the diverse noises and mitigate their damaging effects on performance and precision is still missing. In this paper, we focus on the stability and robustness of one computer vision branch, visual object tracking. We demonstrate that, without imposing a heavy computational load on a model or changing its algorithms, the drop in performance and accuracy on an unseen noise-laden test dataset can be prevented by simply applying the style transfer technique to the training dataset and training the model on a combination of these stylized images and the original data. To verify the proposed approach, it is applied to a generic object tracker based on regression networks. Its validity is confirmed by testing on an exclusive benchmark comprising 50 image sequences, each containing 15 types of noise at five intensity levels. The OPE curves obtained show a 40% increase in the robustness of the proposed object tracker against noise, compared with the other trackers considered.
Keywords: style transfer, visual object tracking, robustness, corruption
Research on Image Generation and Style Transfer Algorithm Based on Deep Learning (Cited: 1)
8
Author: Ruikun Wang. Open Journal of Applied Sciences, 2019, No. 8, pp. 661-672 (12 pages)
In current artistic and animation creation, converting a sketch into a stylized image involves many repeated manual operations. This paper presents a deep learning-based solution for image generation and style transfer. The method first uses a conditional generative adversarial network, optimizing the loss function of the trained mapping, to generate a realistic image from the input sketch. Then, by defining and optimizing the perceptual loss function of the style transfer model, style features are extracted from an image, converting the generated realistic image into a stylized artistic image. Experiments show that this method greatly reduces the work of coloring and conversion across different artistic effects and achieves the goal of transforming simple stick figures into images of actual objects.
Keywords: deep learning, image generation, style transfer
Towards harmonized regional style transfer and manipulation for facial images (Cited: 1)
9
Authors: Cong Wang, Fan Tang, Yong Zhang, Tieru Wu, Weiming Dong. Computational Visual Media (SCIE, EI, CSCD), 2023, No. 2, pp. 351-366 (16 pages)
Regional facial image synthesis conditioned on a semantic mask has attracted great attention in the field of computational visual media. However, the appearances of different regions may be inconsistent with each other after regional editing. In this paper, we focus on harmonized regional style transfer for facial images. A multi-scale encoder is proposed for accurate style code extraction. The key part of our work is a multi-region style attention module, which adapts multiple regional style embeddings from a reference image to a target image to generate a harmonious result. We also propose style mapping networks for multi-modal style synthesis, and we further employ an invertible flow model that can serve as a mapping network to fine-tune the style code by inverting it to the latent space. Experiments on three widely used face datasets evaluate our model by transferring regional facial appearance between datasets. The results show that our model reliably performs style transfer and multi-modal manipulation, generating output comparable to the state of the art.
Keywords: face manipulation, style transfer, generative models, facial harmonization
Aesthetic style transferring method based on deep neural network between Chinese landscape painting and classical private garden's virtual scenario
10
Authors: Shuai Hong, Jie Shen, Guonian Lü, Xiaoyan Liu, Yirui Mao, Nina Sun, Long Tang. International Journal of Digital Earth (SCIE, EI), 2023, No. 1, pp. 1491-1509 (19 pages)
Most existing virtual scenarios built for the digital protection of Chinese classical private gardens are too modern in expression style to convey the aesthetics of their historical period. Considering the aesthetic commonality between traditional Chinese landscape paintings and classical private gardens, and drawing on image style transfer, a deep neural network is proposed here to transfer the aesthetic style from landscape paintings to the virtual scenario of a classical private garden. The network consists of two parts: style prediction and style transfer. The style prediction network obtains a style representation from style paintings, and the style transfer network transfers this representation to the content scenario. The pre-trained network is then embedded into the scenario rendering pipeline and combined with screen post-processing to realize stylized expression of the virtual scenario. To verify the feasibility of this methodology, a virtual scenario of the Humble Administrator's Garden was used as the content scenario, and five garden landscape paintings from different time periods and painting styles were selected for a case study. The results demonstrate that this methodology can effectively achieve aesthetic style transfer of a virtual scenario.
Keywords: Chinese classical private garden, virtual scenario, Chinese traditional landscape painting, deep neural network, aesthetic style transfer
Emotional Vietnamese Speech Synthesis Using Style-Transfer Learning
11
Authors: Thanh X. Le, An T. Le, Quang H. Nguyen. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 2, pp. 1263-1278 (16 pages)
In recent years, speech synthesis systems have allowed for the production of very high-quality voices, so research in this domain is now turning to the problem of integrating emotions into speech. However, building a separate speech synthesizer for each emotion has limitations. First, it requires an emotional-speech dataset with many sentences, which is very time- and labor-intensive to complete. Second, training each model requires computers with large computational capabilities and much effort and time for model tuning. In addition, a model built for one emotion cannot take advantage of the datasets of other emotions. In this paper, we propose a new method to synthesize emotional speech in which latent expressions of emotion are learned from a small dataset of professional actors through a Flowtron model. We also provide a new method to build a speech corpus that is scalable and whose quality is easy to control. To produce a high-quality speech synthesis model, we used this dataset to train a Tacotron 2 model, which then served as a pre-trained model for training the Flowtron model. We applied this method to synthesize Vietnamese speech expressing sadness and happiness. Mean opinion score (MOS) assessment yields 3.61 for sadness and 3.95 for happiness. In conclusion, the proposed method proves more effective for highly automated, fast emotional sentence generation from a small emotional-speech dataset.
Keywords: emotional speech synthesis, Flowtron, speech synthesis, style transfer, Vietnamese speech
A Comparative Study of CNN- and Transformer-Based Visual Style Transfer (Cited: 1)
12
Authors: Huapeng Wei, Yingying Deng, Fan Tang, Xingjia Pan, Weiming Dong. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2022, No. 3, pp. 601-614 (14 pages)
Vision Transformers have shown impressive performance on image classification tasks. Observing that most existing visual style transfer (VST) algorithms are based on texture-biased convolutional neural networks (CNNs), this raises the question of whether the shape-biased Vision Transformer can perform style transfer as CNNs do. In this work, we compare and analyze the shape bias of CNN- and transformer-based models from the viewpoint of VST tasks. For comprehensive comparisons, we propose three kinds of transformer-based visual style transfer (Tr-VST) methods: Tr-NST for optimization-based VST, Tr-WCT for reconstruction-based VST, and Tr-AdaIN for perceptual-based VST. By engaging three mainstream VST methods in the transformer pipeline, we show that transformer-based models pre-trained on ImageNet are not well suited to style transfer: due to their strong shape bias, these Tr-VST methods cannot render style patterns. We further analyze the shape bias by considering the influence of the learned parameters and the structural design. The results prove that, with proper style supervision, the transformer can learn texture-biased features similar to those of a CNN. With reduced shape bias in the transformer encoder, Tr-VST methods can generate higher-quality results than state-of-the-art VST methods.
Keywords: transformer, convolutional neural network, visual style transfer, comparative study
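The Tr-AdaIN variant above builds on adaptive instance normalization (AdaIN), which re-normalizes content features to match the channel-wise statistics of style features. A compact numpy sketch of the standard AdaIN operation (a generic illustration, not the paper's transformer variant):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization on (C, H, W) feature maps.

    Each content channel is normalized to zero mean and unit
    variance, then rescaled and shifted with the corresponding
    style channel's statistics, transferring style statistics
    while keeping the content's spatial layout.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / c_std + s_mean
```

Because only per-channel means and variances move, AdaIN is cheap and differentiable, which is why it anchors the perceptual-based family of VST methods compared in this study.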
ECGAN: Translate Real World to Cartoon Style Using Enhanced Cartoon Generative Adversarial Network
13
Author: Yixin Tang. Computers, Materials & Continua (SCIE, EI), 2023, No. 7, pp. 1195-1212 (18 pages)
Visual illustration transformation from real-world to cartoon images is a well-known and challenging task in computer vision. Image-to-image translation from real-world to cartoon domains poses issues such as a lack of paired training samples, poor image translation, weak feature extraction from source-domain images, and low-quality translation from traditional generator algorithms. Solving these issues calls for a paired independent model, a high-quality dataset, a Bayesian-based feature extractor, and an improved generator. In this study, we propose a high-quality dataset to reduce the effect of paired training samples on the model's performance. We use a Bayesian Very Deep Convolutional Network (VGG)-based feature extractor to improve on the standard feature extractor, because Bayesian inference regularizes weights well. The generator from the Cartoon Generative Adversarial Network (GAN) is modified by introducing a depthwise convolution layer and a channel attention mechanism to improve on the original generator. We use the Fréchet inception distance (FID) score and a user preference score to evaluate the model. The FID scores obtained for the generated cartoon and real-world images are 107 and 76 for the TCC style, and 137 and 57 for the Hayao style, respectively. The user preference score is also calculated, and our proposed model acquires a high preference score compared with other models. We achieve striking results in producing high-quality cartoon images, demonstrating the proposed model's effectiveness in transferring style between authentic and cartoon images.
Keywords: GAN, cartoon, style transfer, deep learning, Bayesian neural network
Emotional Rendering of 3D Indoor Scenes Incorporating Chinese-Style Elements
14
Authors: Sheng Jiachuan, Hu Guolin, Li Yuzhi. Journal of Frontiers of Computer Science and Technology (CSCD, Peking University Core), 2024, No. 2, pp. 465-476 (12 pages)
Emotion is subjective, and automatically generating a virtual indoor scene that is both realistic and matched to a target emotion is a challenging task. Techniques for recognizing and evaluating the emotional expression of indoor scenes are currently lacking, and, given emotional requirements, improving the realism of rendered scenes is another important consideration in indoor scene design. To address these problems, an emotional rendering algorithm for virtual indoor scenes incorporating Chinese-style elements is proposed. First, a deep learning algorithm extracts features of different emotions from a dataset of 25,000 home indoor scene images and trains an emotion classifier to recognize and evaluate the emotional expression of the virtual scene during rendering. Second, to guarantee realism, a texture-color realism metric for scene objects is designed. Then, an optimization algorithm that automatically renders the virtual indoor scene according to the target emotion, together with a style transfer algorithm incorporating Chinese-style elements, applies fine-grained Chinese stylization to scene objects, enhancing the spatial connotation, cultural depth, emotional expression, and visual appeal of the rendering. Finally, the algorithm is evaluated in four different indoor scenes; statistical analysis of the experimental results and user-study data verifies its correctness and effectiveness.
Keywords: virtual reality, emotion modeling, Chinese-style elements, style transfer
From Academic Concept to Literary Genre: The Evolutionary Logic of Ancient Chinese Fiction in the Context of Genre
15
Author: Zhang Yongwei. Journal of Xinjiang University (Philosophy and Social Sciences), Peking University Core, 2024, No. 3, pp. 123-128 (6 pages)
From the pre-Qin period to the Han dynasty, xiaoshuo (fiction) was an academic concept; from the late Han to the Tang, it was a broad genre-category concept; from the Song to the Qing, it was not only a genre-category concept but also a literary-form concept. The impetus for the evolution of these three conceptions was the transformation of the times and of literary fashion. In the genre context of ancient China, conceptual fiction migrated into the various prose forms and ultimately produced fiction's sub-forms: migration into the biji (notebook) form produced notebook-style fiction, while migration into miscellaneous histories and biographies produced biographical fiction. This is the distinctive evolutionary logic of ancient fiction within the ancient Chinese genre context, and the essential expression of the genre's self-sublation and innovation during its evolution; it may also shed light on other genre-migration phenomena in ancient literature.
Keywords: ancient fiction, biji, miscellaneous histories and biographies, literary form, migration
Research on a Data-Driven, Individualized Teaching Model for Electrical Engineering
16
Authors: Zhang Qin, Zhou Funa, Xiang Yang. Education and Teaching Forum, 2024, No. 7, pp. 145-148 (4 pages)
Big data is becoming a key force driving teaching innovation, and this paper proposes a data-driven, individualized teaching model for electrical engineering. First, training objectives for innovative electrical engineers under the "new engineering" initiative are formulated with the end in mind. Before class, the Kolb Learning Style Inventory is used to model each student's learning style, and five-stage blended teaching with grouped, tiered instruction promotes progress for all students. A relation-based transfer learning algorithm then transfers logical knowledge and competence relations mined by big data from strong domains to weak ones, teachers provide personalized guidance, and large-scale, data-driven individualized teaching is ultimately achieved.
Keywords: data-driven, Washington Accord, individualized teaching, Kolb learning styles, transfer learning
Research on a Stylization Method for Copper-Chiseled Paper-Cut
17
Authors: Zhou Leijing, Zhang Yuxin, Lei Rui, Shen Aoyi. Journal of Graphics (CSCD, Peking University Core), 2024, No. 1, pp. 126-138 (13 pages)
Copper-chiseled paper-cut is a traditional art form in which dots are chiseled into copper foil and colored with mineral pigments, producing dazzling works. The craft is complex and time-consuming and demands great skill from artisans. This paper proposes a stylization method for copper-chiseled paper-cut and designs and implements a computer-aided design tool that helps artisans create and produce works quickly by generating image line drafts, chisel-dot maps, and copper-chiseled paper-cut renderings. The input image is segmented into regions to extract its lines and generate a line draft; a color loss function is defined, and a combination of greedy search and gradient descent minimizes it to obtain the best color mapping; style transfer is applied to the image lines with a VGG-19 network to generate the chisel-dot map; the line style-transfer image and the color-transfer image are fused to produce the final rendering; and an interactive design tool is built on the PyQt5 framework. Experimental results show that the method achieves copper-chiseled paper-cut stylization close to real works, supports artisans in quickly generating the line drafts, chisel-dot maps, and renderings needed in the workflow, and improves production efficiency, giving it high application value.
Keywords: copper-chiseled paper-cut, stylization, computer-aided design tool, convolutional neural network, color transfer
GAN-Based Artistic Style Transfer for Scene Text
18
Author: Liu Bing. Computer & Digital Engineering, 2024, No. 5, pp. 1523-1528 (6 pages)
Image style transfer migrates a style pattern onto a target region of a source image to create artistic typography. This paper studies how to restyle the text regions of scene text images, so that the text in advertisements or posters can be restyled automatically, reducing the cost of artistic creation and increasing stylistic diversity. Because of the complex interactions among the factors in a scene text image, little prior work performs text style transfer while preserving the original text content and background. We propose a three-stage framework, the first network to perform controllably styled transfer directly on the original image, extending methods that stylize single binarized characters to the text in scene images and drawing on image inpainting. A style transfer network first restyles only the text in the scene image; a character erasure network then erases the original characters and reconstructs the background; finally, a fusion stage combines the generated foreground with the character-erased background to produce the final stylized image. Extensive experiments demonstrate the effectiveness of the method.
Keywords: deep learning, generative adversarial network (GAN), scene text images, image style transfer, font style transfer, character erasure
An Unsupervised Landscape Painting Style Transfer Network with Multi-Scale Semantic Information (Cited: 1)
19
Authors: Zhou Yuechuan, Zhang Jianxun, Dong Wenxin, Gao Linfeng, Ni Jinyuan. Computer Engineering and Applications (CSCD, Peking University Core), 2024, No. 4, pp. 258-269 (12 pages)
To address the cluttered textures and poor image quality that image-translation GANs produce on unsupervised style transfer tasks, a cycle-rectified multi-scale evaluation GAN based on cycle-consistency loss is proposed. In the network architecture, a multi-scale evaluation design built on three levels of image semantics strengthens transfer from the source to the target domain. In the loss functions, multi-scale adversarial losses and a cycle-rectification loss impose stricter objectives that guide iterative optimization toward visually better images. To prevent mode collapse, an attention mechanism in the style-encoding stage extracts important feature information, and the ACON activation function is introduced at each stage of the network to strengthen nonlinear expressiveness and avoid dead neurons. Experimental results show that, compared with CycleGAN and ACL-GAN, the proposed method lowers FID on a landscape-painting style transfer dataset by 21.80% and 34.33%, respectively; in generalization experiments on the public Vangogh2Photo and Monet2Photo datasets, FID drops by 7.58% and 18.14%, and by 4.65% and 6.99%, against the two baseline networks.
Keywords: unsupervised style transfer, generative adversarial network (GAN), multi-scale evaluation, CycleGAN
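The cycle-consistency loss underlying this family of networks penalizes the difference between an image and its round-trip translation through both generators. A minimal sketch with toy stand-in "generators" (the CycleGAN formulation uses L1 distance, as here; the real generators are deep networks):

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle-consistency loss, as in CycleGAN-style training.

    g_ab: maps domain A -> B (e.g. photo -> landscape painting)
    g_ba: maps domain B -> A
    A faithful pair of generators reconstructs x after the round
    trip, driving this loss toward zero.
    """
    reconstructed = g_ba(g_ab(x))
    return float(np.mean(np.abs(x - reconstructed)))
```

For example, with the mutually inverse toy mappings `lambda t: t * 2.0` and `lambda t: t / 2.0` the loss is exactly zero, while any pair that loses information leaves a positive residual.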
Cycle-Consistent Style Transfer Based on Style-Transition Attention
20
Authors: Zhang Rui'er, Bian Xiaohang, Liu Siyuan, Liu Bin, Li Jianwu, Luo Jun, Qi Mingyue. Journal of Hebei University of Science and Technology (CAS, Peking University Core), 2024, No. 3, pp. 328-340 (13 pages)
To overcome the difficulty existing artistic style transfer methods have in simultaneously preserving image content and transferring style patterns at high quality, a novel style-transition attention network (STANet) is introduced. It contains two key parts: an asymmetric attention module that determines the style features of a reference image, and a cyclic structure that preserves image content. First, a two-stream architecture encodes the style and content images separately; second, the attention module is integrated seamlessly into the encoder to generate style-attention representations; finally, the module is placed at different convolution stages, interleaving the encoder and promoting hierarchical information propagation from the style stream to the content stream. In addition, a cycle-consistency loss is proposed to force the network to preserve content structure and style patterns holistically. The results show that the encoder outperforms a conventional two-stream architecture, and that STANet can swap the style patterns of two images with arbitrary styles, synthesizing higher-quality stylized images while better preserving their respective content. The proposed cyclic style transfer network with style-transition attention renders more content detail in stylized images and generalizes well to arbitrary styles.
Keywords: image content, style transfer, style restoration, neural attention, recurrent network
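The attention module described above can be illustrated with ordinary scaled dot-product attention, where queries come from the content stream and keys/values from the style stream. This is a generic sketch of that information flow, not the paper's exact asymmetric module:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def style_attention(content_q, style_k, style_v):
    """Scaled dot-product attention from content to style features.

    content_q:        (N, D) queries from the content stream
    style_k, style_v: (M, D) keys and values from the style stream
    Each content position receives a convex combination of style
    values, which is the style-to-content information propagation
    the interleaved encoder above is designed to promote.
    """
    d = content_q.shape[-1]
    weights = softmax(content_q @ style_k.T / np.sqrt(d))
    return weights @ style_v
```

Placing such a module at several convolution stages, as the abstract describes, lets coarse and fine style statistics each find their own matching content locations.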