Funding: Supported in part by the National Natural Science Foundation of China under Grants 61502162, 61702175, and 61772184; in part by the Fund of the State Key Laboratory of Geo-information Engineering under Grant SKLGIE2016-M-4-2; in part by the Hunan Natural Science Foundation of China under Grant 2018JJ2059; in part by the Key R&D Project of Hunan Province of China under Grant 2018GK2014; in part by the Open Fund of the State Key Laboratory of Integrated Services Networks under Grant ISN17-14; and in part by the Chinese Scholarship Council (CSC) through the College of Computer Science and Electronic Engineering, Hunan University, Changsha 410082, under Grant CSC No. 2018GXZ020784.
Abstract: Transformer models have emerged as dominant networks for various computer vision tasks, in comparison with Convolutional Neural Networks (CNNs). Transformers demonstrate the ability to model long-range dependencies by utilizing a self-attention mechanism. This study aims to provide a comprehensive survey of recent transformer-based approaches in image and video applications, as well as diffusion models. We begin by discussing existing surveys of vision transformers and comparing them to this work. Then, we review the main components of a vanilla transformer network, including the self-attention mechanism, the feed-forward network, and position encoding. In the main part of this survey, we review recent transformer-based models in three categories: Transformer for downstream tasks, Vision Transformer for Generation, and Vision Transformer for Segmentation. We also provide a comprehensive overview of recent transformer models for video tasks and diffusion models. We compare the performance of various hierarchical transformer networks on multiple tasks across popular benchmark datasets. Finally, we explore some future research directions to further advance the field.
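For readers unfamiliar with the self-attention mechanism this survey reviews, the following is a minimal sketch of scaled dot-product self-attention in NumPy. The function names, dimensions, and random projection matrices are illustrative assumptions for a single attention head, not an implementation from any surveyed model.

```python
# Minimal single-head scaled dot-product self-attention (illustrative sketch).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (n_tokens, d_model); w_q/w_k/w_v: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # project tokens to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])       # pairwise token similarity, scaled by sqrt(d_k)
    return softmax(scores, axis=-1) @ v           # attention-weighted sum of values

rng = np.random.default_rng(0)
d_model, d_k, n_tokens = 16, 8, 10                # e.g. 10 image patches as tokens
x = rng.normal(size=(n_tokens, d_model))
out = self_attention(x, *(rng.normal(size=(d_model, d_k)) for _ in range(3)))
print(out.shape)  # (10, 8): each token now aggregates information from all tokens
```

Because every token attends to every other token, the output at each position mixes information from the whole sequence, which is the long-range dependency modeling the abstract refers to.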
Abstract: This paper proposes a quality assessment method for smartphone-captured images that combines a dual cross-attention fusion based Swin-AK Transformer (Swin Transformer based on alterable kernel convolution) with handcrafted features. First, handcrafted features that affect image quality are extracted; these features capture subtle visual variations in the image. Second, the Swin-AK Transformer is proposed, which strengthens the model's ability to extract and process local information. In addition, a dual cross-attention fusion module is designed that combines spatial attention and channel attention mechanisms to fuse the handcrafted features with deep features, enabling more accurate image quality prediction. Experimental results show that on the SPAQ and LIVE-C datasets, the Pearson linear correlation coefficients reach 0.932 and 0.885, and the Spearman rank-order correlation coefficients reach 0.929 and 0.858, respectively. These results demonstrate that the proposed method can effectively predict the quality of smartphone-captured images.
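The two correlation metrics reported above are standard in image quality assessment and are straightforward to compute. Below is a sketch of how they are typically evaluated with SciPy; the score arrays are made-up placeholders, not data from the paper.

```python
# Computing PLCC and SROCC between subjective and predicted quality scores.
from scipy.stats import pearsonr, spearmanr

mos       = [62.3, 48.1, 75.6, 55.0, 81.2]   # subjective mean opinion scores (toy values)
predicted = [60.8, 50.2, 73.9, 57.4, 79.5]   # model-predicted quality scores (toy values)

plcc, _  = pearsonr(mos, predicted)    # Pearson linear correlation coefficient
srocc, _ = spearmanr(mos, predicted)   # Spearman rank-order correlation coefficient
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```

PLCC measures how linearly the predictions track the subjective scores, while SROCC measures how well the predictions preserve their rank ordering; both approach 1.0 for a perfect predictor.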
Abstract: In recent years, Transformers have made remarkable progress on many supervised computer vision tasks, yet their performance in semi-supervised image segmentation remains limited by the scarcity of high-quality annotated medical images. To address this, a semi-supervised medical image segmentation framework based on multi-scale and multi-view Transformers, MSMVT (multi-scale and multi-view transformer), is proposed. Motivated by the success of contrastive learning in Transformer pre-training, a pseudo-label-guided multi-scale prototype contrastive learning module is designed. This module uses image-pyramid data augmentation to generate semantically rich multi-scale prototype representations for unlabeled images; contrastive learning then enforces consistency between prototypes at different scales, effectively alleviating the under-training of Transformers caused by label scarcity. In addition, to improve the stability of Transformer training, a multi-view consistency learning strategy is proposed: a weakly perturbed view is used to correct multiple strongly perturbed views, and by minimizing the output discrepancy between views, the model maintains multi-level consistency under different perturbations. Experimental results show that with only a 10% annotation ratio, the proposed MSMVT framework achieves DSC segmentation scores of 88.93%, 84.75%, and 85.38% on the ACDC, LIDC, and ISIC public datasets, respectively, outperforming existing semi-supervised medical image segmentation methods.
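The DSC (Dice similarity coefficient) figures above measure the overlap between predicted and ground-truth segmentation masks. The following is a common way to compute it; the toy masks below are illustrative and are not taken from ACDC, LIDC, or ISIC.

```python
# Dice similarity coefficient (DSC) between two binary segmentation masks.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """pred, target: binary masks of the same shape; eps avoids division by zero."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred   = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
target = np.array([[0, 1, 1], [1, 1, 0], [0, 0, 0]])
print(f"DSC = {dice_coefficient(pred, target):.3f}")  # 2*3/(3+4) ≈ 0.857
```

DSC ranges from 0 (no overlap) to 1 (perfect overlap), so scores near 89% with only 10% of labels indicate the semi-supervised framework recovers most of the fully supervised segmentation quality.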