y-Tuning: an efficient tuning paradigm for large-scale pre-trained models via label representation learning

Abstract: With the current success of large-scale pre-trained models (PTMs), how to efficiently adapt PTMs to downstream tasks has attracted tremendous attention, especially for PTMs with billions of parameters. Previous work focuses on designing parameter-efficient tuning paradigms, but these still need to save and compute the gradients of the whole computational graph. In this paper, we propose y-Tuning, an efficient yet effective paradigm to adapt frozen large-scale PTMs to specific downstream tasks. y-Tuning learns dense representations for the labels y defined in a given task and aligns them with fixed feature representations. Without computing the gradients of the text encoder at the training phase, y-Tuning is not only parameter-efficient but also training-efficient. Experimental results show that for DeBERTa_XXL with 1.6 billion parameters, y-Tuning achieves more than 96% of the performance of full fine-tuning on the GLUE benchmark with only 2% tunable parameters and much lower training cost.
Source: Frontiers of Computer Science (SCIE, EI, CSCD), 2024, No. 4, pp. 107-116 (10 pages). Chinese title: 中国计算机科学前沿 (English edition).
Funding: National Key R&D Program of China (No. 2020AAA0108702); National Natural Science Foundation of China (Grant No. 62022027).
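
As a rough illustration of the mechanism described in the abstract, the sketch below shows how a frozen encoder's features can be combined with trainable dense label representations so that gradients never flow through the text encoder. The module names, the cross-attention layer, and all dimensions are assumptions made for illustration; this is not the authors' exact architecture.

```python
# Minimal sketch of the y-Tuning idea (illustrative, not the paper's code):
# a frozen text encoder produces fixed features under torch.no_grad(), while a
# small set of trainable label embeddings plus a lightweight attention layer
# is tuned to score each label against those features.
import torch
import torch.nn as nn


class YTuningHead(nn.Module):
    def __init__(self, hidden_size: int, num_labels: int, num_heads: int = 8):
        super().__init__()
        # Dense representations for the labels y defined by the task.
        self.label_emb = nn.Parameter(torch.randn(num_labels, hidden_size) * 0.02)
        # Label representations attend to the frozen token features.
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)

    def forward(self, frozen_features: torch.Tensor) -> torch.Tensor:
        # frozen_features: (batch, seq_len, hidden_size), detached from the encoder.
        batch = frozen_features.size(0)
        queries = self.label_emb.unsqueeze(0).expand(batch, -1, -1)
        attended, _ = self.cross_attn(queries, frozen_features, frozen_features)
        # Score each label by aligning its attended representation with its embedding.
        logits = (attended * self.label_emb.unsqueeze(0)).sum(dim=-1)
        return logits  # (batch, num_labels)


# Usage: only the head's parameters receive gradients; the encoder never does.
encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(768, 12, batch_first=True), 2)
encoder.requires_grad_(False)  # stand-in for a frozen PTM such as DeBERTa
head = YTuningHead(hidden_size=768, num_labels=2)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

tokens = torch.randn(4, 16, 768)      # placeholder for embedded input text
labels = torch.randint(0, 2, (4,))
with torch.no_grad():                 # no gradients through the text encoder
    features = encoder(tokens)
loss = nn.functional.cross_entropy(head(features), labels)
loss.backward()
optimizer.step()
```

In this framing, only the label-side parameters (here, label_emb and cross_attn) are ever updated, which is why both the tunable-parameter count and the training cost stay small relative to full fine-tuning.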