Abstract
Traditional text summarization models trained with a cross-entropy loss suffer from degraded performance at inference time, weak generalization, severe exposure bias during generation, and low textual similarity between the generated and reference summaries. To mitigate these problems, a novel training approach is proposed. On the one hand, the model generates a candidate set via beam search and selects positive and negative samples according to the candidates' evaluation scores; within the output candidate set, two groups of contrastive loss functions are constructed from the "argmax greedy-search probability values" and the "label probability values". On the other hand, a time-series recursive function acting within the sentences of the candidate set is designed to guide the model toward temporal accuracy when decoding each individual candidate summary, further alleviating exposure bias. Experiments show that the proposed method improves generalization on the CNN/DailyMail and XSum public datasets, with ROUGE and BERTScore reaching 47.54 and 88.51 on CNN/DailyMail and 48.75 and 92.61 on XSum.
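The abstract describes a contrastive fine-tuning objective over beam-search candidates, in the spirit of candidate-ranking methods such as BRIO. Since the paper's exact loss formulation is not given here, the following is only a minimal illustrative sketch of one margin-based ranking loss of this kind, in which candidates scored higher by an external metric (e.g. ROUGE against the reference) act as positive samples for lower-scored ones. The function name, the margin parameter, and the length-normalized log-probability inputs are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def candidate_ranking_loss(cand_log_probs: torch.Tensor,
                           cand_scores: torch.Tensor,
                           margin: float = 0.01) -> torch.Tensor:
    """Margin-based contrastive loss over beam-search candidates (a sketch).

    cand_log_probs: (num_candidates,) length-normalized log-probabilities
        the model assigns to each candidate summary.
    cand_scores: (num_candidates,) external evaluation scores (e.g. ROUGE
        against the reference) used to pick positive/negative samples.
    """
    # Rank candidates from best to worst by evaluation score: each
    # higher-ranked candidate serves as a positive sample for every
    # lower-ranked one.
    order = torch.argsort(cand_scores, descending=True)
    lp = cand_log_probs[order]

    loss = cand_log_probs.new_zeros(())
    n = lp.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # Hinge term: the better candidate should out-score the worse
            # one by a margin that grows with the rank gap.
            loss = loss + F.relu(lp[j] - lp[i] + margin * (j - i))
    return loss

# Toy usage: three beam candidates with hypothetical ROUGE scores.
log_probs = torch.tensor([-0.9, -1.2, -1.5], requires_grad=True)
scores = torch.tensor([0.42, 0.35, 0.30])
candidate_ranking_loss(log_probs, scores).backward()
```

The abstract's two loss groups (one from "argmax greedy-search probability values", one from "label probability values") would amount to applying a loss of this generic ranking form under two different probability-scoring regimes.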
Authors
TANG Wenliang; CHEN Diyou; GUI Yujie; LIU Jieming; XU Junliang (School of Information Engineering, East China Jiaotong University, Nanchang 330013, China)
Source
Journal of Chongqing University of Technology: Natural Science, 2024, No. 2, pp. 170-180 (11 pages)
Indexed in: CAS; Peking University Core Journals (北大核心)
Funding
National Natural Science Foundation of China (52062016)
Key Research and Development Program of Jiangxi Province (20203BBE53034)
Jiangxi Province 03 Special Project and 5G Project (20224ABC03A16)
Keywords
natural language processing
text summarization
contrastive learning
model fine-tuning