Abstract
Objective In image classification, features are typically extracted with a deep network and classification is then performed on those features; few-shot image classification follows the same principle. However, when features are compressed into vectors, information loss is a common problem, which may cause the model to miss information that is critical to the category. To build a richer and more comprehensive feature representation, a rich representation feature extractor (RireFeat) based on the base classes is proposed.

Method RireFeat builds attention-based information-flow channels between different levels of the feature extraction network, so that category-relevant information that would otherwise be discarded reappears in the newly extracted feature representation; image information is thus exploited according to its importance to form a comprehensive representation. Meanwhile, to strengthen the model's discriminative ability, features are measured at multiple scales, and a loss function based on contrastive learning and deep Brownian distance covariance is constructed, which pulls strongly category-related feature vectors closer together while pushing feature vectors of different categories farther apart.

Result To verify the effectiveness of the proposed feature extractor, 1-shot and 5-shot classification experiments were conducted on the standard few-shot datasets MiniImageNet, TieredImageNet, and CUB (Caltech-UCSD Birds-200-2011). On MiniImageNet, RireFeat achieves accuracy 0.64% and 1.10% higher than the set-feature extractor (SetFeat) with a convolution-based backbone in the 1-shot and 5-shot settings, respectively, and 1.51% and 1.46% higher than SetFeat with a ResNet12 (residual network) backbone. On CUB, it provides gains of 0.03% and 0.61% over SetFeat with the convolution-based backbone in the 1-shot and 5-shot settings, and improvements of 0.66% and 0.75% with the ResNet12 backbone. On TieredImageNet, it improves on SetFeat by 0.21% and 0.38% with the convolution-based backbone in the 1-shot and 5-shot settings.

Conclusion The proposed RireFeat feature extractor effectively improves classification performance and generalizes well.
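The abstract describes attention-based channels that let information from one level of the extractor re-enter a later level, but no implementation is given. The PyTorch sketch below shows one plausible form such a cross-level "shaping" module could take; the class name ShapingModule, the squeeze-and-excitation-style channel attention, and all shapes are illustrative assumptions, not the authors' code.

```python
# Minimal PyTorch sketch (not the authors' implementation) of an
# attention-based cross-level "shaping" module: a shallow feature map is
# re-weighted by channel attention and fused into a deeper level, so
# category-relevant detail discarded by pooling can re-enter the
# representation.
import torch
import torch.nn as nn

class ShapingModule(nn.Module):
    def __init__(self, shallow_ch: int, deep_ch: int):
        super().__init__()
        # project the shallow feature map to the deeper level's width
        self.proj = nn.Conv2d(shallow_ch, deep_ch, kernel_size=1)
        # squeeze-and-excitation style channel attention (an assumption;
        # the abstract only states that the channels are attention-based)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(deep_ch, deep_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(deep_ch // 4, deep_ch, 1), nn.Sigmoid(),
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # resize the shallow map to the deep map's spatial size
        s = nn.functional.adaptive_avg_pool2d(self.proj(shallow), deep.shape[-2:])
        # importance-weighted fusion across levels
        return deep + self.attn(s) * s

# usage: fuse a 64-channel level-2 map into a 256-channel level-4 map
fuse = ShapingModule(shallow_ch=64, deep_ch=256)
out = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 256, 8, 8))
```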
Objective The task of image classification based on few-shot learning refers to training a machine learning model that can effectively classify target images when only limited target training samples are available. The main challenge in few-shot image classification lies in the lack of a sufficient dataset; that is, only a small amount of labeled data is available for model training. Numerous advanced models have been proposed to tackle this challenge. A common and efficient strategy is to use deep networks as feature extractors. Deep networks are models that can automatically extract valuable features from input images: through multilayer convolution and pooling operations they extract feature vectors that can be used to determine the category of an image and thus realize the goal of image classification. During model training, the feature extractor gradually learns to extract information relevant to the category of the image, which then serves as the feature vector. Even when trained on limited labeled data, such models can achieve high accuracy by leveraging the power of deep learning. However, in the process of compressing features into vectors, there is a risk of losing valuable information, including information strongly associated with the specific category, and crucial information that could substantially enhance classification accuracy may be disregarded. The extracted feature vectors should therefore encompass as much category-specific information as possible. This paper introduces a novel rich representation feature extractor (RireFeat) based on the base classes to achieve an extensive and comprehensive image representation.

Method This paper proposes a feature extractor called RireFeat to achieve highly comprehensive and class-specific feature extraction. RireFeat mainly aims to enhance the exchange and flow of information within the feature extractor, thereby facilitating the extraction of class-related features. Additionally, the method attends to the multilayer feature vectors before and after each stage of the extractor to ensure that information useful for classification is retained during feature extraction. RireFeat employs a pyramid-like design that divides the feature extractor into multiple levels. Each level receives the image encoding from the level above it, and after several convolution and pooling operations the result flows to the next level. This hierarchical structure facilitates the transfer and fusion of information between levels, maximizing the utilization of image information within the feature extractor; the category relevance of the feature vectors is thereby deepened, improving classification accuracy. Furthermore, RireFeat demonstrates strong generalization and readily adapts to novel image classification tasks. Specifically, this paper starts from the feature extraction process. As image information traverses the multilayered hierarchical structure, category-related local features are extracted while category-irrelevant information is discarded; however, this process may also remove some category-specific information. To address this issue, RireFeat integrates small shaping modules that connect levels across the hierarchy, so that image information can still flow and merge after crossing levels. This design enables the network to pay additional attention to changes in features before and after each level, facilitating the effective extraction of local features while disregarding category-irrelevant information, and thereby notably enhancing classification accuracy. Simultaneously, this paper introduces the idea of contrastive learning into few-shot image classification and combines it with deep Brownian distance covariance to build a contrastive loss function that measures image features at multiple scales. This loss brings embeddings from the same distribution closer together while pushing those from different distributions farther apart, improving classification accuracy. In the experiments, the SetFeat method is used to extract a feature set for each image. For training, as in other few-shot image learning methods, the entire network is first pre-trained and then fine-tuned in the meta-training stage, where classification is performed by computing the distance between the query (test) and support (training) sample sets.

Result 1-shot and 5-shot classification experiments are conducted on the standard few-shot datasets MiniImageNet, TieredImageNet, and CUB to verify the validity of the proposed feature extraction structure. Experimental results show that on MiniImageNet, RireFeat achieves 0.64% and 1.10% higher accuracy than SetFeat with a convolution-based backbone in the 1-shot and 5-shot settings, respectively, while the ResNet12-based structure is 1.51% and 1.46% higher than SetFeat in the 1-shot and 5-shot cases. On CUB, RireFeat provides gains of 0.03% and 0.61% over SetFeat at 1-shot and 5-shot with the convolution-based backbone, and improvements of 0.66% and 0.75% over SetFeat with the ResNet12-based structure. In the TieredImageNet evaluation, the convolution-based backbone achieves 0.21% and 0.38% improvements over SetFeat under the 1-shot and 5-shot conditions, respectively.

Conclusion This paper proposes a rich representation feature extractor (RireFeat) to obtain a rich, comprehensive, and accurate feature representation for few-shot image classification. Different from traditional feature extractors and feature extraction schemes, RireFeat increases the flow of information within the feature extraction network by attending to the changes in features before and after transmission through each stage, effectively reintegrating category information lost during feature extraction into the feature representation. In addition, the concept of contrastive learning combined with deep Brownian distance covariance is introduced into few-shot image classification to learn additional categorical representations for each image, so the extractor can capture highly nuanced differences between images from different categories, resulting in improved classification performance. A feature-vector set is also extracted from each image to provide strong support for the subsequent classification task. The proposed method achieves high classification accuracy on the MiniImageNet, TieredImageNet, and CUB datasets. Moreover, this paper verifies the universality of the proposed method with currently popular deep learning backbones, such as convolutional and residual backbones, highlighting its applicability to state-of-the-art models.
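The abstract states that the loss combines contrastive learning with deep Brownian distance covariance (BDC) but gives no formulas. As a rough illustration, the sketch below computes a BDC matrix in the standard way (pairwise Euclidean distances between channels, then double centering) and uses inner products of BDC matrices as logits in an InfoNCE-style contrastive loss. The function names, the tensor layout (batch, channels, positions), and the temperature tau are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a BDC-based contrastive loss (assumed form,
# not the paper's code).
import torch
import torch.nn.functional as F

def bdc_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Brownian distance covariance matrix of a feature map.

    feat: (B, C, N) tensor with N = H*W spatial positions; each channel
    is treated as an N-dimensional observation.
    """
    # pairwise Euclidean distances between channels: (B, C, C)
    a = torch.cdist(feat, feat, p=2)
    # double centering: subtract row and column means, add the grand mean
    row = a.mean(dim=2, keepdim=True)
    col = a.mean(dim=1, keepdim=True)
    grand = a.mean(dim=(1, 2), keepdim=True)
    return a - row - col + grand

def contrastive_bdc_loss(query_feats: torch.Tensor,
                         proto_feats: torch.Tensor,
                         labels: torch.Tensor,
                         tau: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss over BDC similarities.

    query_feats: (Q, C, N) query feature maps; proto_feats: (K, C, N)
    class prototypes; labels: (Q,) indices of each query's true class.
    """
    q = bdc_matrix(query_feats).flatten(1)   # (Q, C*C)
    p = bdc_matrix(proto_feats).flatten(1)   # (K, C*C)
    # inner products of BDC matrices act as similarity logits; the
    # cross-entropy pulls same-class pairs together and pushes
    # different-class pairs apart
    logits = (q @ p.t()) / tau               # (Q, K)
    return F.cross_entropy(logits, labels)
```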
Authors
Wang Xuesong; Lyu Lixiang; Cheng Yuhu; Wang Haoyu (School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China)
Source
Journal of Image and Graphics (《中国图象图形学报》)
Indexed in CSCD and the Peking University Core Journals list
2024, No. 11, pp. 3371-3382 (12 pages)
Funding
National Natural Science Foundation of China (62303468)
Natural Science Foundation of Jiangsu Province (BK20221116)
China Postdoctoral Science Foundation (2023M733757)
Jiangsu Funding Program for Excellent Postdoctoral Talent (2022ZB530)
Keywords
few-shot image classification
attention mechanism
multi-scale measurement
feature representation
contrastive learning
deep Brownian distance covariance