Abstract
Compared with batch learning, where all training data are available at once, few-shot class-incremental learning is of greater practical significance: it brings machine learning closer to human-level intelligence and reduces the dependence of deep learning models on large amounts of training data. To alleviate the forgetting of old classes in few-shot class-incremental learning while keeping the classification process free of confounding interference, a few-shot class-incremental learning strategy based on causality is proposed. First, interventional few-shot learning is used to remove the confounding effect introduced by pre-trained knowledge, so that sample features and classification labels share a genuine causal relationship. Second, a class-incremental learning method based on causal effects is adopted, which alleviates catastrophic forgetting by building a path between the old data and the final labels to achieve the causal effect of data replay. Finally, a random episode selection strategy is used to enhance the extensibility of the features so that they accommodate subsequent incremental learning. Experimental results on the miniImageNet and CIFAR100 datasets show that the proposed method achieves the best average accuracy over incremental learning rounds 1 to 8, while also exhibiting a degree of stability and interpretability.
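The abstract's first step relies on causal intervention to strip the confounding influence of pre-trained knowledge. A common way to realize such an intervention is backdoor adjustment: instead of classifying with P(y|x) directly, predictions are averaged over strata of the confounder so as to approximate P(y|do(x)). The sketch below is only a minimal illustration of that general idea, not the authors' implementation; the stratification of pre-trained knowledge into cluster-like centers, the linear classifier `W`, and the way a stratum is combined with the input are all hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

K, feat_dim, n_classes = 4, 8, 3
# Hypothetical stand-in for pre-trained knowledge partitioned into K strata
pretrained_strata = rng.normal(size=(K, feat_dim))
# Hypothetical linear classifier over features
W = rng.normal(size=(feat_dim, n_classes))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def interventional_predict(x):
    """Backdoor adjustment: P(y|do(x)) ~= (1/K) * sum_d P(y | x, d),
    averaging class probabilities over the K confounder strata."""
    probs = np.zeros(n_classes)
    for d in pretrained_strata:
        # One simple (assumed) way to condition on stratum d: add it to x
        probs += softmax((x + d) @ W)
    return probs / K

x = rng.normal(size=feat_dim)
p = interventional_predict(x)
```

Because each per-stratum prediction is a softmax, the averaged distribution still sums to one; the difference from plain P(y|x) is that no single stratum of the pre-trained knowledge can dominate the decision.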
Authors
LIU Bingyao; LIU Jinfeng (School of Information Engineering, Ningxia University, Yinchuan, Ningxia 750021, China)
Source
Journal of Computer Applications (《计算机应用》)
Indexed in: CSCD; Peking University Core Journals
2024, Issue S01, pp. 54-59 (6 pages)
Funding
Supported by the Ningxia Natural Science Foundation (2023AAC03126).
Keywords
few-shot class-incremental learning
interpretability
causal inference
incremental learning
neural network