Journal Articles
1 article found
Incremental Learning Based on Data Translation and Knowledge Distillation
Authors: Tan Cheng, Jielong Wang. International Journal of Intelligence Science, 2023, Issue 2, pp. 33-47 (15 pages)
Abstract: Recently, deep convolutional neural networks (DCNNs) have achieved remarkable results in image classification tasks. Despite convolutional networks' great successes, their training process relies on a large amount of data prepared in advance, which is often impractical in real-world applications such as streaming data and concept drift. For this reason, incremental learning (continual learning) has attracted increasing attention from scholars. However, incremental learning faces the challenge of catastrophic forgetting: performance on previous tasks degrades drastically after a new task is learned. In this paper, we propose a new strategy to alleviate catastrophic forgetting when neural networks are trained on continual domains. Specifically, two components are applied: data translation based on transfer learning, and knowledge distillation. The former translates a portion of the new data to reconstruct part of the data distribution of the old domain. The latter uses the old model as a teacher to guide the new model. Experimental results on three datasets show that combining these two methods effectively alleviates catastrophic forgetting.
Keywords: Incremental Domain Learning, Data Translation, Knowledge Distillation, Catastrophic Forgetting
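
Only the abstract is available here, but its second component, knowledge distillation with the old model acting as a teacher, can be illustrated with a minimal sketch. The loss below combines cross-entropy on the new-domain labels with a standard KL-divergence distillation term toward the teacher's softened predictions; the temperature T, weight alpha, function name, and training loop are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a knowledge-distillation objective of the kind the
# abstract describes: the frozen old-domain model is the teacher whose soft
# predictions constrain the new (student) model while it learns the new domain.
# T, alpha, and the surrounding loop are assumptions for illustration only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy on new-task labels plus KL divergence to the teacher."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale after temperature softening
    return alpha * ce + (1.0 - alpha) * kd

# Usage (hypothetical loop): old_model is trained on the old domain and frozen.
# for images, labels in new_domain_loader:
#     with torch.no_grad():
#         teacher_logits = old_model(images)
#     student_logits = new_model(images)
#     loss = distillation_loss(student_logits, teacher_logits, labels)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()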