Journal Articles
2 articles found
1. Filter Bank Networks for Few-Shot Class-Incremental Learning
Authors: Yanzhao Zhou, Binghao Liu, Yiran Liu, Jianbin Jiao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 10, pp. 647-668 (22 pages).
Deep Convolutional Neural Networks (DCNNs) can capture discriminative features from large datasets. However, how to incrementally learn new samples without forgetting old ones and recognize novel classes that arise in the dynamically changing world, e.g., classifying newly discovered fish species, remains an open problem. We address an even more challenging and realistic setting of this problem where new class samples are insufficient, i.e., Few-Shot Class-Incremental Learning (FSCIL). Current FSCIL methods augment the training data to alleviate the overfitting of novel classes. By contrast, we propose Filter Bank Networks (FBNs) that augment the learnable filters to capture fine-detailed features for adapting to future new classes. In the forward pass, FBNs augment each convolutional filter to a virtual filter bank containing the canonical one, i.e., itself, and multiple transformed versions. During back-propagation, FBNs explicitly stimulate fine-detailed features to emerge and collectively align all gradients of each filter bank to learn the canonical one. FBNs capture pattern variants that do not yet exist in the pretraining session, thus making it easy to incorporate new classes in the incremental learning phase. Moreover, FBNs introduce model-level prior knowledge to efficiently utilize the limited few-shot data. Extensive experiments on the MNIST, CIFAR100, CUB200, and Mini-ImageNet datasets show that FBNs consistently outperform the baseline by a significant margin, reporting new state-of-the-art FSCIL results. In addition, we contribute a challenging FSCIL benchmark, Fishshot1K, which contains 8261 underwater images covering 1000 ocean fish species. The code is included in the supplementary materials.
Keywords: Deep learning, incremental learning, few-shot learning, Filter Bank Networks
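
For illustration only: a minimal PyTorch sketch of the filter-bank idea as described in the abstract. The class name FilterBankConv2d, the specific transforms (a flip and two rotations), and the mean aggregation are assumptions made for this sketch, not the paper's confirmed design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FilterBankConv2d(nn.Module):
    """Convolution whose every filter is virtually expanded into a bank:
    the canonical filter plus transformed copies sharing its parameters."""

    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        # Only the canonical filters are learnable; the bank is derived from them.
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.02)
        self.padding = padding

    def bank(self):
        w = self.weight
        # Canonical filter plus flipped/rotated versions (assumed transforms).
        return [w,
                torch.flip(w, dims=[3]),         # horizontal flip
                torch.rot90(w, 1, dims=[2, 3]),  # 90-degree rotation
                torch.rot90(w, 3, dims=[2, 3])]  # 270-degree rotation

    def forward(self, x):
        # Every member of the virtual bank convolves the input; in the backward
        # pass all of their gradients accumulate on the single canonical weight
        # tensor, i.e., the bank is trained collectively through shared parameters.
        outs = [F.conv2d(x, w, padding=self.padding) for w in self.bank()]
        return torch.stack(outs, dim=0).mean(dim=0)

# Usage: a drop-in replacement for nn.Conv2d in a backbone.
layer = FilterBankConv2d(3, 16)
y = layer(torch.randn(2, 3, 32, 32))  # -> torch.Size([2, 16, 32, 32])

Because the transformed filters are differentiable views of the canonical weights, gradients from every bank member flow into the same tensor, which mirrors the gradient-alignment behavior the abstract describes.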
2. Vision-Language Model Guided Text Knowledge Embedding for Few-Shot Class-Incremental Learning
Authors: Yao Hantao, Yu Lu, Xu Changsheng. Journal of Software (软件学报) (EI, CSCD, Peking University Core), 2024, Issue 5, pp. 2101-2119 (19 pages).
Real-world scenarios often face data scarcity and dynamically changing data. Few-shot class-incremental learning aims to infer knowledge from a small amount of data while mitigating the model's catastrophic forgetting of old knowledge. Existing few-shot class-incremental learning algorithms (e.g., CEC and FACT) mainly use visual features to adjust the feature encoder or the classifier, so that the model transfers to new data while resisting forgetting of old data. However, the visual features of only a few samples can hardly model the complete feature distribution of a class, which leaves these algorithms with weak generalization ability. Compared with visual features, the text features of image class descriptions generalize better and are more resistant to forgetting. Therefore, building on vision-language models, this work studies few-shot class-incremental learning based on text knowledge embedding: by embedding forgetting-resistant text features into visual features, both new-class and old-class data can be learned effectively. Specifically, in the base session, a vision-language model extracts pretrained visual features of images and text descriptions of classes, and a text encoder maps the pretrained visual features into the text space. A visual encoder then fuses the learned text features with the pretrained visual features to abstract highly discriminative visual features. In the incremental sessions, class-space-guided anti-forgetting learning is proposed: the class-space encodings of old data and the features of new data are used to fine-tune the visual encoder and the text encoder, so that new knowledge is learned while old knowledge is rehearsed. The effectiveness of the algorithm is validated on four datasets (CIFAR-100, CUB-200, Car-196, and Mini-ImageNet), demonstrating that text knowledge embedding based on vision-language models can further improve the robustness of few-shot class-incremental learning beyond visual features alone.
Keywords: few-shot class-incremental learning, vision-language models, text knowledge embedding, class-space-guided anti-forgetting learning
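
For illustration only: a minimal PyTorch sketch of the text-knowledge-embedding idea from the abstract. The module names, the 512-d feature dimensions, and the simple concatenate-and-project fusion are assumptions for this sketch; plain linear layers stand in for a real vision-language model such as CLIP.

import torch
import torch.nn as nn

class TextKnowledgeEmbedding(nn.Module):
    def __init__(self, vis_dim=512, txt_dim=512):
        super().__init__()
        # Base session: maps pretrained visual features into the text space,
        # where class-description features are more forgetting-resistant.
        self.text_encoder = nn.Linear(vis_dim, txt_dim)
        # Fuses the mapped text features back with the visual features to
        # abstract more discriminative representations.
        self.visual_encoder = nn.Sequential(
            nn.Linear(vis_dim + txt_dim, vis_dim), nn.ReLU(),
            nn.Linear(vis_dim, vis_dim))

    def forward(self, pretrained_visual_feat):
        # Project visual features into the text space, then fuse both views.
        # In the incremental sessions, both encoders would be fine-tuned with
        # old-class space encodings plus new-data features (per the abstract).
        txt_feat = self.text_encoder(pretrained_visual_feat)
        fused = torch.cat([pretrained_visual_feat, txt_feat], dim=-1)
        return self.visual_encoder(fused)

# Usage with features from a frozen VLM image encoder (assumed 512-d, CLIP-like):
feats = torch.randn(4, 512)
model = TextKnowledgeEmbedding()
out = model(feats)  # -> torch.Size([4, 512])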