Multi-exit self-distillation with appropriate teachers
Authors: Wujie SUN, Defang CHEN, Can WANG, Deshi YE, Yan FENG, Chun CHEN. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, No. 4, pp. 585-599 (15 pages).
Multi-exit architecture allows early-stop inference to reduce computational cost, which can be used in resource-constrained circumstances. Recent works combine the multi-exit architecture with self-distillation to simultaneously achieve high efficiency and decent performance at different network depths. However, existing methods mainly transfer knowledge from deep exits or a single ensemble to guide all exits, without considering that inappropriate learning gaps between students and teachers may degrade the model performance, especially in shallow exits. To address this issue, we propose Multi-exit self-distillation with Appropriate TEachers (MATE) to provide diverse and appropriate teacher knowledge for each exit. In MATE, multiple ensemble teachers are obtained from all exits with different trainable weights. Each exit subsequently receives knowledge from all teachers, while focusing mainly on its primary teacher to keep an appropriate gap for efficient knowledge transfer. In this way, MATE achieves diversity in knowledge distillation while ensuring learning efficiency. Experimental results on CIFAR-100, TinyImageNet, and three fine-grained datasets demonstrate that MATE consistently outperforms state-of-the-art multi-exit self-distillation methods with various network architectures.
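To make the described objective concrete, below is a minimal PyTorch sketch of a MATE-style loss, written from the abstract alone. The names (MATELoss, primary_weight, aux_weight), the logit-space ensembling, the choice of teacher i as exit i's primary teacher, and all coefficients are hypothetical illustrations, not the paper's exact recipe.

```python
# Hedged sketch of multi-exit self-distillation with multiple trainable
# ensemble teachers, per the MATE abstract. Assumes PyTorch; details are
# guesses where the abstract is silent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MATELoss(nn.Module):
    """K trainable ensemble teachers; each exit focuses on one primary teacher."""

    def __init__(self, num_exits: int, temperature: float = 3.0,
                 primary_weight: float = 1.0, aux_weight: float = 0.1):
        super().__init__()
        self.num_exits = num_exits
        self.T = temperature
        self.primary_weight = primary_weight
        self.aux_weight = aux_weight
        # One trainable weight vector per teacher; a row-wise softmax keeps
        # each teacher a convex combination of all exits (uniform at init).
        self.mix = nn.Parameter(torch.zeros(num_exits, num_exits))

    def forward(self, exit_logits: list[torch.Tensor],
                targets: torch.Tensor) -> torch.Tensor:
        K = self.num_exits
        # (K, B, C): detach exits inside the teacher path so the KD loss
        # updates the ensemble weights rather than the students via teachers.
        stacked = torch.stack([z.detach() for z in exit_logits])
        w = F.softmax(self.mix, dim=1)                      # (K, K)
        teachers = torch.einsum('ke,ebc->kbc', w, stacked)  # (K, B, C)

        # Hard-label cross-entropy at every exit.
        loss = sum(F.cross_entropy(z, targets) for z in exit_logits)

        # Each exit distills from all K teachers, weighting its primary
        # teacher (assumed here to be teacher i for exit i) more heavily.
        for i, z in enumerate(exit_logits):
            log_p = F.log_softmax(z / self.T, dim=1)
            for k in range(K):
                q = F.softmax(teachers[k] / self.T, dim=1)
                coef = self.primary_weight if k == i else self.aux_weight
                loss = loss + coef * (self.T ** 2) * F.kl_div(
                    log_p, q, reduction='batchmean')
        return loss
```

In a training loop over a hypothetical 4-exit model, criterion = MATELoss(num_exits=4) would consume the list of exit logits and the labels; the ensemble weights (criterion.mix) would need to be registered with the optimizer alongside the network parameters so the teacher mixtures are learned jointly.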
Keywords: Multi-exit architecture; Knowledge distillation; Learning gap