Two-Stage Approach for Targeted Knowledge Transfer in Self-Knowledge Distillation
Authors: Zimo Yin, Jian Pu, Yijie Zhou, Xiangyang Xue. IEEE/CAA Journal of Automatica Sinica (SCIE, EI), 2024, Issue 11, pp. 2270-2283 (14 pages)
Knowledge distillation (KD) enhances student network generalization by transferring dark knowledge from a complex teacher network. To reduce computational expenditure and memory utilization, self-knowledge distillation (SKD) extracts dark knowledge from the model itself rather than from an external teacher network. However, previous SKD methods performed distillation indiscriminately on full datasets, overlooking the analysis of representative samples. In this work, we present a novel two-stage approach that provides targeted knowledge on specific samples, named two-stage approach self-knowledge distillation (TOAST). We first soften the hard targets using class medoids generated from the logit vectors of each class. Then, we iteratively distill the under-trained data with past predictions of half the batch size. The two-stage knowledge is linearly combined, efficiently enhancing model performance. Extensive experiments conducted on five backbone architectures show that our method is model-agnostic and achieves the best generalization performance. Besides, TOAST is strongly compatible with existing augmentation-based regularization methods. Our method also obtains a speedup of up to 2.95x compared with a recent state-of-the-art method.
Keywords: cluster-based regularization, iterative prediction refinement, model-agnostic framework, self-knowledge distillation (SKD), two-stage knowledge transfer
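The abstract describes the two knowledge sources only at a high level. The following is a minimal, hypothetical PyTorch-style sketch of those two ideas as stated: per-class medoids of logit vectors used to soften hard targets, and a linear combination with the model's own past predictions. All function names, the mixing weight `alpha`, and the temperature are illustrative assumptions, not the authors' implementation; in particular, the selective "under-trained data with past predictions of half the batch size" step is simplified to distilling the whole batch from stored past logits.

```python
import torch
import torch.nn.functional as F


def class_medoid_targets(logits, labels, num_classes):
    """Soften hard targets with per-class medoids of logit vectors.

    Hypothetical construction: for each class, the medoid is the member
    logit vector with the smallest total L2 distance to the other members.
    """
    medoids = torch.zeros(num_classes, logits.size(1), device=logits.device)
    for c in range(num_classes):
        members = logits[labels == c]
        if members.numel() == 0:
            continue
        dists = torch.cdist(members, members).sum(dim=1)  # total distance per member
        medoids[c] = members[dists.argmin()]              # medoid of this class
    return F.softmax(medoids[labels], dim=1)              # one soft target per sample


def toast_style_loss(logits, labels, past_logits, num_classes,
                     alpha=0.5, temperature=4.0):
    """Linearly combine the two knowledge sources named in the abstract:
    (1) medoid-softened targets and (2) the model's own past predictions.
    alpha and temperature are illustrative hyper-parameters."""
    ce = F.cross_entropy(logits, labels)                  # standard hard-label loss
    soft_targets = class_medoid_targets(logits.detach(), labels, num_classes)
    kd_medoid = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                         soft_targets, reduction="batchmean")
    kd_past = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                       F.softmax(past_logits / temperature, dim=1),
                       reduction="batchmean")
    return ce + alpha * kd_medoid + (1 - alpha) * kd_past
```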