A Comparative Study of CNN- and Transformer-Based Visual Style Transfer (Cited by: 1)
Authors: Hua-Peng Wei, Ying-Ying Deng, Fan Tang, Xing-Jia Pan, Wei-Ming Dong. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2022, Issue 3, pp. 601-614 (14 pages).
Abstract: Vision Transformers have shown impressive performance on image classification tasks. Observing that most existing visual style transfer (VST) algorithms are based on texture-biased convolutional neural networks (CNNs), this raises the question of whether the shape-biased Vision Transformer can perform style transfer as CNNs do. In this work, we focus on comparing and analyzing the shape bias of CNN- and transformer-based models from the perspective of VST tasks. For comprehensive comparisons, we propose three kinds of transformer-based visual style transfer (Tr-VST) methods: Tr-NST for optimization-based VST, Tr-WCT for reconstruction-based VST, and Tr-AdaIN for perceptual-based VST. By engaging three mainstream VST methods in the transformer pipeline, we show that transformer-based models pre-trained on ImageNet are not well suited to style transfer: due to their strong shape bias, these Tr-VST methods cannot render style patterns. We further analyze the shape bias by considering the influence of the learned parameters and the structure design. Results prove that, with proper style supervision, the transformer can learn texture-biased features similar to those of a CNN. With reduced shape bias in the transformer encoder, Tr-VST methods can generate higher-quality results compared with state-of-the-art VST methods.
Keywords: transformer; convolutional neural network; visual style transfer; comparative study
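For reference, the perceptual branch (Tr-AdaIN) builds on Adaptive Instance Normalization (AdaIN, Huang & Belongie, 2017), which re-normalizes content features to match the channel-wise statistics of style features. Below is a minimal PyTorch sketch of the core AdaIN operation only; this listing does not specify the paper's exact Tr-AdaIN pipeline (encoder choice, decoder, or how transformer tokens are pooled), so treating transformer token features as channel-first (N, C, L) maps is an assumption for illustration.

```python
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalization: re-normalize content features to
    match the per-channel mean/std of the style features."""
    # Reduce over all axes after channel: spatial axes for (N, C, H, W)
    # CNN feature maps, or the token axis for (N, C, L) transformer tokens
    # (assumed layout; not specified in this listing).
    dims = tuple(range(2, content.dim()))
    c_mean = content.mean(dim=dims, keepdim=True)
    c_std = content.std(dim=dims, keepdim=True) + eps
    s_mean = style.mean(dim=dims, keepdim=True)
    s_std = style.std(dim=dims, keepdim=True) + eps
    # Whiten the content statistics, then re-color with the style statistics.
    return s_std * (content - c_mean) / c_std + s_mean

# Usage sketch with CNN-style feature maps (shapes are illustrative):
content_feat = torch.randn(1, 512, 32, 32)
style_feat = torch.randn(1, 512, 32, 32)
stylized_feat = adain(content_feat, style_feat)
```

Feeding the re-normalized features through a trained decoder yields the stylized image; the paper's comparison concerns how shape-biased transformer encoders affect the statistics this operation transfers.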