Journal Articles: 2 articles found
Incremental Learning Based on Data Translation and Knowledge Distillation
1
Authors: Tan Cheng, Jielong Wang. International Journal of Intelligence Science, 2023, No. 2, pp. 33-47 (15 pages)
Recently, deep convolutional neural networks (DCNNs) have achieved remarkable results in image classification tasks. Despite convolutional networks' great successes, their training relies on a large amount of data prepared in advance, which is often infeasible in real-world settings such as streaming data and concept drift. For this reason, incremental learning (continual learning) has attracted increasing attention from scholars. However, incremental learning suffers from catastrophic forgetting: performance on previous tasks degrades drastically after a new task is learned. In this paper, we propose a new strategy to alleviate catastrophic forgetting when neural networks are trained on a sequence of domains. Specifically, two components are applied: data translation based on transfer learning and knowledge distillation. The former translates a portion of the new data to reconstruct part of the old domain's data distribution; the latter uses the old model as a teacher to guide the new model. Experimental results on three datasets show that combining these two methods effectively alleviates catastrophic forgetting.
Keywords: Incremental Domain Learning; Data Translation; Knowledge Distillation; Catastrophic Forgetting
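The abstract above describes the old model acting as a teacher that guides the new model. A minimal sketch of that idea, assuming PyTorch, is shown below; the function name, temperature, and alpha weighting are illustrative choices, not taken from the paper.

```python
# Hypothetical sketch of a teacher-student distillation loss (not the authors' code):
# cross-entropy on the new-task labels plus a soft-target KL term toward the frozen old model.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Standard supervised loss on the new data.
    ce = F.cross_entropy(student_logits, labels)
    # Soft targets from the old (teacher) model, softened by the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    # KL divergence between student and teacher distributions, rescaled by T^2.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd
```

In a continual setting, the teacher logits would typically be computed with gradients disabled on the model saved before the new task is learned.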
Combat data shift in few-shot learning with knowledge graph
2
Authors: Yongchun ZHU, Fuzhen ZHUANG, Xiangliang ZHANG, Zhiyuan QI, Zhiping SHI, Juan CAO, Qing HE. Frontiers of Computer Science (SCIE, EI, CSCD), 2023, No. 1, pp. 101-111 (11 pages)
Many few-shot learning approaches have been designed under the meta-learning framework, which learns from a variety of learning tasks and generalizes to new tasks. These approaches achieve the expected performance when all samples are drawn from the same distribution (i.i.d. observations). In real-world applications, however, few-shot learning often suffers from data shift: samples in different tasks, or even within the same task, may be drawn from different data distributions. Most existing few-shot learning approaches do not account for data shift and therefore degrade when the data distribution shifts. Addressing data shift in few-shot learning is non-trivial because each task contains only a limited number of labeled samples. To address this problem, we propose a novel metric-based meta-learning framework that extracts task-specific and task-shared representations with the help of a knowledge graph. Data shift within and between tasks can then be combated by combining the task-shared and task-specific representations. The proposed model is evaluated on popular benchmarks and on two newly constructed, challenging datasets, and the results demonstrate its strong performance.
Keywords: few-shot learning; data shift; knowledge graph
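The abstract above describes a metric-based meta-learner that combines task-shared representations with task-specific (knowledge-graph-informed) representations. Below is a minimal sketch of one possible instantiation, assuming PyTorch; the fusion-by-concatenation design and the prototypical-network-style distance classifier are assumptions for illustration, not the paper's actual architecture.

```python
# Hypothetical sketch (not the paper's implementation): fuse task-shared and
# task-specific embeddings, then classify queries by distance to class prototypes.
import torch
import torch.nn as nn

class FusedEmbedder(nn.Module):
    """Combines a task-shared feature vector with a task-specific one (e.g., derived from a knowledge graph)."""
    def __init__(self, shared_dim, specific_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(shared_dim + specific_dim, out_dim)

    def forward(self, shared_feat, specific_feat):
        # Assumed fusion: concatenate the two representations and project to a common space.
        return self.proj(torch.cat([shared_feat, specific_feat], dim=-1))

def prototype_logits(support_z, support_labels, query_z):
    """Score queries by negative Euclidean distance to per-class prototypes (metric-based classification)."""
    classes = support_labels.unique()
    protos = torch.stack([support_z[support_labels == c].mean(dim=0) for c in classes])
    return -torch.cdist(query_z, protos)  # higher score = closer prototype
```

In an episodic few-shot setup, the support and query embeddings would both come from the fused representation, so the metric operates on features already adjusted for the task at hand.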