Abstract
Image classification is a common visual recognition task with broad application scenarios. Traditional approaches to this problem usually rely on convolutional neural networks; however, the limited receptive field of convolutions makes it difficult to model the global relations of an image, which lowers classification accuracy and hampers the handling of complex and diverse image data. To model global relations, some researchers have applied the Transformer to image classification. However, to meet the Transformer's serialization and parallelization requirements, the image must be split into equal-sized, non-overlapping patches, which destroys the local information between adjacent patches. Moreover, because the Transformer carries little built-in prior knowledge, such models usually require pre-training on large-scale datasets, leading to high computational complexity. To model the local information between adjacent image patches while fully exploiting the global information of the image, this paper proposes a vision Transformer based on depth-wise convolution, the Efficient Pyramid Vision Transformer (EPVT). EPVT extracts both the local and global information between adjacent image patches at a low computational cost. The model comprises three key components: a local perception module (LPM), a spatial information fusion module (SIF), and a convolutional feed-forward network (CFFN). The LPM captures the local correlations of the image. The SIF module fuses local information between adjacent patches and exploits long-range dependencies between different patches to strengthen the model's feature representation, allowing it to learn the semantic information of the output features across different dimensions. The CFFN encodes positional information and reshapes tensors. On the ImageNet-1K classification dataset, the proposed model outperforms existing vision Transformer classifiers of comparable size, achieving 82.6% classification accuracy, which demonstrates its competitiveness on large-scale datasets.
Deep learning-based image classification models have been successfully applied in various scenarios. Current image classification models fall into two classes: CNN-based classifiers and Transformer-based classifiers. Due to their limited receptive fields, CNN-based classifiers cannot model the global relations of an image, which decreases classification accuracy. Transformer-based classifiers, in contrast, usually segment the image into non-overlapping patches of equal size, which harms the local information between each pair of adjacent image patches. Additionally, Transformer-based classification models often require pre-training on large datasets, resulting in high computational costs. To tackle these problems, an Efficient Pyramid Vision Transformer (EPVT) based on depth-wise convolution is proposed in this paper to extract both the local and global information between adjacent image patches at a low computational cost. The EPVT model consists of three key components: a local perception module (LPM), a spatial information fusion module (SIF), and a convolutional feed-forward network module (CFFN). The LPM captures the local correlations of image patches. The SIF module fuses local information between adjacent image patches and improves the feature expression ability of EPVT by exploiting the long-distance dependencies between different image patches. The CFFN module encodes positional information and reshapes tensors between feature patches. To validate the proposed EPVT model's performance, extensive experiments are conducted on benchmark datasets; experimental results show that EPVT achieves 82.6% classification accuracy on ImageNet-1K, outperforming most state-of-the-art models at a lower computational complexity.
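The efficiency claim in the abstract rests on depth-wise convolution, in which each channel is convolved with its own single-channel kernel, so a k×k depth-wise layer needs only C·k² weights instead of the C²·k² of a standard convolution. The following numpy sketch illustrates the operation itself; the function name, the zero "same" padding, and the loop-based implementation are illustrative assumptions, not the paper's actual LPM code, which would use an optimized framework kernel.

```python
import numpy as np

def depthwise_conv2d(x, kernels, pad=1):
    """Depth-wise 2D convolution (illustrative sketch, not the paper's code).

    Each channel is filtered independently with its own kernel, so no
    cross-channel mixing occurs -- the source of the low parameter count.

    x:       (C, H, W) input feature map
    kernels: (C, k, k) one k x k kernel per channel
    pad:     symmetric zero padding ("same" output size when k = 2*pad + 1)
    """
    C, H, W = x.shape
    k = kernels.shape[1]
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out_h = H + 2 * pad - k + 1
    out_w = W + 2 * pad - k + 1
    out = np.zeros((C, out_h, out_w))
    for c in range(C):                 # one independent kernel per channel
        for i in range(out_h):
            for j in range(out_w):
                out[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * kernels[c])
    return out

# A 3x3 depth-wise layer on C channels costs C*9 weights; the standard
# convolution producing the same number of channels would cost C*C*9.
x = np.ones((2, 4, 4))
kernels = np.ones((2, 3, 3))
y = depthwise_conv2d(x, kernels)       # shape (2, 4, 4) with pad=1
```

In frameworks such as PyTorch this operation corresponds to a grouped convolution with `groups` equal to the channel count, which is presumably how a model like EPVT would realize its local perception step in practice.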
Authors
ZHANG Feng
HUANG Shixin
HUA Qiang
DONG Chunru
ZHANG Feng;HUANG Shixin;HUA Qiang;DONG Chunru(Hebei Key Laboratory of Machine Learning and Computational Intelligence,College of Mathematics and Information Science,Hebei University,Baoding,Hebei 071002,China)
Source
Computer Science (《计算机科学》)
CSCD
Peking University Core Journal (北大核心)
2024, No. 2, pp. 196-204 (9 pages)
Computer Science
Funding
National Key R&D Program of the Ministry of Science and Technology (2022YFE0196100)
Natural Science Foundation of Hebei Province, General Program (F2018201115)
Key Science and Technology Research Project of the Hebei Provincial Department of Education (ZD2019021)
Research Start-up Fund for High-level Innovative Talents of Hebei University