Abstract
An enhanced prompt learning method (EPL4FTC) is proposed for the few-shot text classification task. The algorithm converts text classification into a prompt-learning form based on natural language inference, achieving implicit data augmentation by exploiting the prior knowledge of pre-trained language models, and it is optimized with losses at two granularities. To capture the category information contained in specific downstream tasks, a triplet loss is used for joint optimization, and a masked language model task is incorporated as a regularizer to improve generalization. Experiments on four public Chinese and three English text classification datasets show that the classification accuracy of EPL4FTC is significantly better than that of the compared baselines.
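The abstract describes the method's mechanics (NLI-style prompt reformulation, triplet loss, MLM regularization) only at a high level. Below is a minimal Python sketch of what such a reformulation and joint loss might look like; the template string, function names, and loss weights (build_nli_prompt, alpha, beta) are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def build_nli_prompt(text: str, label_word: str) -> str:
    # Cast classification as natural language inference: the input text is
    # the premise and a label-bearing sentence is the hypothesis, so the
    # pre-trained model can judge the label via a masked-token verbalizer.
    # (Hypothetical template; the paper's actual template is not given here.)
    return f"{text} [SEP] This text is about {label_word}. [MASK]"

def joint_loss(logits, labels, anchor, positive, negative,
               mlm_loss, alpha=1.0, beta=0.1):
    # Sentence-level classification loss over the verbalized label words.
    cls_loss = F.cross_entropy(logits, labels)
    # Triplet loss pulls same-class sentence embeddings together and pushes
    # different-class embeddings apart, capturing task-specific category info.
    tri_loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0)
    # The masked-language-model loss acts as a regularizer, helping retain
    # the pre-trained model's general knowledge in the few-shot regime.
    # alpha and beta are assumed weighting hyperparameters.
    return cls_loss + alpha * tri_loss + beta * mlm_loss

In this sketch the two granularities of loss correspond to the sentence-level classification term and the embedding-level triplet term, with the MLM term added as the regularizer the abstract mentions.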
Authors
李睿凡
魏志宇
范元涛
叶书勤
张光卫
LI Ruifan; WEI Zhiyu; FAN Yuantao; YE Shuqin; ZHANG Guangwei (School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876; Engineering Research Center of Information Networks, Ministry of Education, Beijing 100876; Key Laboratory of Interactive Technology and Experience System, Ministry of Culture and Tourism, Beijing 100876; School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876)
Source
《北京大学学报(自然科学版)》
Indexed in: EI; CAS; CSCD; Peking University Core Journals (北大核心)
2024, No. 1, pp. 1-12 (12 pages)
Acta Scientiarum Naturalium Universitatis Pekinensis
Funding
Supported by the National Natural Science Foundation of China (62076032).
Keywords
pre-trained language model
few-shot learning
text classification
prompt learning
triplet loss