y-Tuning: an efficient tuning paradigm for large-scale pre-trained models via label representation learning
Authors: Yitao LIU, Chenxin AN, Xipeng QIU. Frontiers of Computer Science (SCIE, EI, CSCD), 2024, No. 4, pp. 107-116 (10 pages)
With the current success of large-scale pre-trained models (PTMs), how to efficiently adapt PTMs to downstream tasks has attracted tremendous attention, especially for PTMs with billions of parameters. Previous work focuses on designing parameter-efficient tuning paradigms but still needs to save and compute the gradient of the whole computational graph. In this paper, we propose y-Tuning, an efficient yet effective paradigm to adapt frozen large-scale PTMs to specific downstream tasks. y-Tuning learns dense representations for the labels y defined in a given task and aligns them to fixed feature representations. Without computing the gradients of the text encoder at the training phase, y-Tuning is not only parameter-efficient but also training-efficient. Experimental results show that for DeBERTa_XXL with 1.6 billion parameters, y-Tuning achieves more than 96% of the performance of full fine-tuning on the GLUE benchmark with only 2% tunable parameters and much lower training cost.
Keywords: pre-trained model; lightweight fine-tuning paradigms; label representation
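The abstract describes the core idea: label representations are learned and aligned to features from a frozen encoder, so no gradients flow through the PTM. Below is a minimal sketch of that idea in PyTorch, assuming a Hugging Face encoder; the module names, the cross-attention alignment, and the scoring rule are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of the y-Tuning idea: a frozen PTM supplies features, and only a small
# label-representation head is trained. Details here are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class YTuningHead(nn.Module):
    """Learns dense label representations and aligns them to frozen encoder features."""
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        # One trainable dense vector per label y defined by the task.
        self.label_embeddings = nn.Parameter(torch.randn(num_labels, hidden_size))
        # Lightweight cross-attention that aligns label queries to the fixed features.
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads=8, batch_first=True)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, seq_len, hidden) produced by the frozen encoder.
        batch = features.size(0)
        labels = self.label_embeddings.unsqueeze(0).expand(batch, -1, -1)
        aligned, _ = self.cross_attn(labels, features, features)   # (batch, num_labels, hidden)
        # Score each label by the similarity of its query and its aligned output.
        return (aligned * labels).sum(dim=-1)                      # (batch, num_labels)

# Model id is an assumption standing in for the DeBERTa_XXL encoder used in the paper.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xxlarge")
encoder = AutoModel.from_pretrained("microsoft/deberta-v2-xxlarge")
encoder.requires_grad_(False)            # the PTM stays frozen: no encoder gradients
head = YTuningHead(encoder.config.hidden_size, num_labels=2)

batch = tokenizer(["a great movie", "a dull movie"], return_tensors="pt", padding=True)
with torch.no_grad():                    # pure feature extraction, hence training-efficient
    features = encoder(**batch).last_hidden_state
loss = nn.functional.cross_entropy(head(features), torch.tensor([1, 0]))
loss.backward()                          # gradients flow only through the small head
```

Only the label embeddings and the alignment layer receive gradients, which matches the abstract's claim of tuning roughly 2% of the parameters while leaving the billion-parameter encoder untouched.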