Abstract
Existing metric-based few-shot image classification models show some few-shot learning performance. However, these models often ignore the extraction of the key features by which the original data are classified: redundant information in the image data that is irrelevant to classification is absorbed into the network parameters of the metric method, which easily causes a performance bottleneck for metric-based few-shot image classification. To address this problem, a category-decoupled few-shot image classification model based on a graph neural network (VT-GNN) is proposed. It combines image self-attention with a variational autoencoder supervised by the classification task as its image embedding module, obtaining category-decoupled features of the original image that serve as a node in a graph structure. A multilayer perceptron constructs edge features carrying metric information between nodes, so that a set of few-shot training data is organized as graph-structured data, and few-shot learning is achieved through the message-passing mechanism of the graph neural network. On the public dataset Mini-Imagenet, VT-GNN achieves gains of 18.10 percentage points and 16.25 percentage points over the baseline graph neural network model in the 5-way 1-shot and 5-way 5-shot settings, respectively.
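The episode-to-graph pipeline described above (each image embedded as a graph node, an MLP producing metric edge features between node pairs, then one round of message passing) can be sketched as follows. The dimensions, random stand-in weights, and function names are illustrative assumptions, not the paper's actual VT-GNN implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_edge_features(nodes, w1, w2):
    """Edge MLP: pairwise |h_i - h_j| -> scalar similarity per node pair."""
    diff = np.abs(nodes[:, None, :] - nodes[None, :, :])   # (N, N, D)
    hidden = np.maximum(diff @ w1, 0.0)                    # ReLU hidden layer
    scores = (hidden @ w2).squeeze(-1)                     # (N, N) raw scores
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)               # row-softmax weights

def message_pass(nodes, adj, w):
    """One GNN round: weighted neighbor sum, concat with self, transform."""
    messages = adj @ nodes                                 # aggregate neighbors
    return np.maximum(np.concatenate([nodes, messages], axis=-1) @ w, 0.0)

# A 5-way 1-shot episode: 5 support images + 1 query = 6 graph nodes,
# each a 64-d "category-decoupled" embedding (random stand-ins here,
# where the paper would use the VAE + self-attention embedding module).
D = 64
nodes = rng.standard_normal((6, D))
adj = mlp_edge_features(nodes,
                        rng.standard_normal((D, 32)),
                        rng.standard_normal((32, 1)))
out = message_pass(nodes, adj, rng.standard_normal((2 * D, D)))
print(out.shape)   # (6, 64): updated node features after one message round
```

Stacking several such rounds and reading a class distribution off the query node's edges (or final features) is the usual way metric information on the edges drives the few-shot prediction.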
Authors
DENG Gelong; HUANG Guoheng; CHEN Ziyan (School of Computer, Guangdong University of Technology, Guangzhou 510006, China)
Source
Computer Engineering and Applications (《计算机工程与应用》), 2024, No. 2, pp. 129-136 (8 pages); indexed in CSCD and the Peking University Core Journal list (北大核心)
Funding
Guangdong Joint Fund of the National Natural Science Foundation of China (U1701262); National Natural Science Foundation of China (U20A6003)
Keywords
few-shot learning
graph neural network
variational autoencoder
image self-attention