PCP-tuning: Personalized Continuous Prompt Tuning for Few-Shot Learning
Abstract  Pre-trained language models have achieved remarkable performance in few-shot learning with the rise of "prompt learning", where the key problem is how to construct a suitable prompt for each example: the sample and the prompt are combined into a new input to the language model (LM). A series of prompt construction methods have been proposed in recent years; some construct discrete prompts while others construct continuous prompts, but both typically apply a single unified prompt to the entire dataset. However, experimental results show that it is hard to find a unified prompt that works for all examples in a task: one prompt may help the LM assign the correct class to some samples in a downstream classification task while yielding wrong results for others. To this end, we propose a personalized continuous prompt tuning (PCP-tuning) method that generates personalized continuous prompts tailored to each sample's semantics for few-shot learning. We further propose two calibration techniques to control the distribution of the generated prompts for better downstream performance. Extensive experiments on ten benchmark tasks demonstrate the superior performance of our method.
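The abstract describes the mechanism only at a high level, so the snippet below is a minimal, hypothetical sketch of per-sample continuous prompt generation: a frozen BERT-style encoder supplies each sample's semantics (its [CLS] embedding), a small MLP (here named PromptGenerator, an assumed name) maps that embedding to a sequence of continuous prompt vectors, and the prompts are prepended to the sample's word embeddings before being fed back to the LM. The generator architecture, prompt length, and the norm-matching step standing in for the paper's two calibration techniques are all assumptions, not the paper's exact design.

```python
# Illustrative sketch only: generate a continuous prompt per sample from its
# semantics and prepend it to the input. Architecture and calibration below
# are assumptions, not the paper's exact method.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class PromptGenerator(nn.Module):
    """Hypothetical MLP mapping a sentence embedding to prompt_len prompt vectors."""

    def __init__(self, hidden: int, prompt_len: int):
        super().__init__()
        self.prompt_len = prompt_len
        self.mlp = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.Tanh(),
            nn.Linear(hidden, prompt_len * hidden),
        )

    def forward(self, sem: torch.Tensor) -> torch.Tensor:
        # sem: (batch, hidden) -> (batch, prompt_len, hidden)
        return self.mlp(sem).view(-1, self.prompt_len, sem.size(-1))


tok = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased")
for p in lm.parameters():   # freeze the LM; only the generator is trained,
    p.requires_grad = False  # which suits the few-shot setting

gen = PromptGenerator(lm.config.hidden_size, prompt_len=8)

batch = tok(["a gripping, well-acted film"], return_tensors="pt")
with torch.no_grad():
    # The sample's semantics: the frozen encoder's [CLS] embedding.
    sem = lm(**batch).last_hidden_state[:, 0]

prompts = gen(sem)  # personalized continuous prompts, one set per sample
word_emb = lm.get_input_embeddings()(batch["input_ids"])

# One plausible calibration (assumption): rescale the generated prompts toward
# the average word-embedding norm so they stay within the LM's input distribution.
scale = word_emb.norm(dim=-1).mean()
prompts = prompts * scale / (prompts.norm(dim=-1, keepdim=True) + 1e-6)

# Prepend the prompts and extend the attention mask accordingly.
inputs = torch.cat([prompts, word_emb], dim=1)
mask = torch.cat(
    [torch.ones(prompts.shape[:2], dtype=batch["attention_mask"].dtype),
     batch["attention_mask"]],
    dim=1,
)
out = lm(inputs_embeds=inputs, attention_mask=mask)  # states over prompt + text
```

In training, a classification head (or verbalizer) over the LM output would be optimized together with the generator on the few labeled examples; that part is omitted here for brevity.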
Authors  LIU Ting; CAI Shaotian; CHEN Xiaojun; ZHANG Qin (School of Computer Science and Technology, Shenzhen University, Shenzhen, Guangdong 518071, China)
Source  Journal of Xinjiang University (Natural Science Edition in Chinese and English), CAS, 2024, No. 1, pp. 59-68 (10 pages)
Funding  National Natural Science Foundation of China, "Theory, Methods and Applications of Few-Shot Learning Dually Driven by Data and Knowledge" (92270122); Guangdong Natural Science Foundation General Program, "Self-Supervised Clustering Methods and Theory" (2023A1515012584); Shenzhen Basic Research General Program, "Deep Clustering Algorithms and Applications" (JCYJ20210324093000002)
Keywords  natural language processing; large-scale pre-trained models; prompt learning; text classification