Abstract
Deep learning models are usually trained on a fixed dataset and, once training is complete, cannot extend their behavior over time. Training an already-trained model on new data causes catastrophic forgetting. Continual learning is a machine learning approach that alleviates catastrophic forgetting in deep learning models: it aims to continuously expand a model's adaptability so that the model can learn the knowledge of different tasks at different times. Current continual learning algorithms fall into four main categories: regularization-based methods, memory-replay methods, parameter-isolation methods, and hybrid methods. This paper systematically summarizes and analyzes the research progress of these four families of methods, surveys the evaluation protocols used to measure the performance of continual learning algorithms, and discusses emerging research trends in continual learning.
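As a minimal illustration of the regularization-based family mentioned above, the sketch below shows an EWC-style quadratic penalty (Elastic Weight Consolidation), which is one representative technique of that family rather than a method proposed in this paper; the function name, the diagonal-Fisher approximation, and the toy values are illustrative assumptions:

```python
import numpy as np

def ewc_penalty(params, old_params, fisher_diag, lam=1.0):
    """EWC-style penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    Parameters that were important for the previous task (large Fisher
    value F_i) are pulled back toward their old values theta*_i, which
    mitigates catastrophic forgetting while still allowing unimportant
    parameters to adapt to the new task.
    """
    total = 0.0
    for name in params:
        diff = params[name] - old_params[name]
        total += np.sum(fisher_diag[name] * diff ** 2)
    return 0.5 * lam * total

# Toy usage: one parameter tensor of length 4; the first two entries
# are "important" for the old task, the last two are not.
theta_old = {"w": np.zeros(4)}
theta_new = {"w": np.ones(4)}
fisher = {"w": np.array([1.0, 1.0, 0.0, 0.0])}
loss = ewc_penalty(theta_new, theta_old, fisher, lam=2.0)  # 0.5*2*(1+1) = 2.0
```

In training, this penalty would be added to the new task's loss, so only the directions the Fisher term marks as important are constrained.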
Authors
LIN Yu-yao; DU Fei; YANG Yun (School of Software, Yunnan University, Kunming 650500, Yunnan, China)
Source
《云南大学学报(自然科学版)》 (Journal of Yunnan University: Natural Sciences Edition)
Indexed in: CAS; CSCD; Peking University Core Journals
2023, No. 2, pp. 284-297 (14 pages)
Funding
National Natural Science Foundation of China (61876166)
Major Science and Technology Project of the Yunnan Provincial Department of Science and Technology (202002AD80001)
Postgraduate Research and Innovation Project of Yunnan University (2021Z113)
Keywords
continual learning
deep learning
computer vision
catastrophic forgetting