Abstract
Automatic text summarization aims to extract the main statements from a text in order to compress its information. Existing abstractive summarization methods fail to take full advantage of pre-trained models when learning the semantics of the source text, so the generated content tends to lose important information, and the models easily overfit on datasets with few samples. To address these problems and obtain better fine-tuning performance, this study uses the pre-trained model mT5 (multilingual T5) as a baseline, strengthens regularization during fine-tuning with R-Drop (Regularized Dropout) to improve the model's learning ability, and applies Sparse Softmax to reduce the ambiguity of the generated predictions and ensure output accuracy. Hyperparameters of the optimization methods are tuned by computing BLEU (Bilingual Evaluation Understudy) on the Chinese datasets LCSTS and CSL, and ROUGE is adopted as the evaluation metric to assess the model on subsets of different orders of magnitude. Experimental results show that the optimized pre-trained model learns the semantic representation of the source text better, maintains a good fit in small-sample settings, and generates more practical results.
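The two optimizations named in the abstract are published, well-known techniques, so a compact sketch may help make them concrete. Below is a minimal PyTorch sketch, not code from the article: r_drop_loss implements R-Drop's symmetric-KL regularizer over two dropout-perturbed forward passes, and sparse_softmax implements a top-k truncated softmax of the kind the abstract calls Sparse Softmax. The function names and the hyperparameters k and alpha are illustrative assumptions, not values from the article.

```python
import torch
import torch.nn.functional as F

def sparse_softmax(logits: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Top-k truncated ("sparse") softmax.

    Probability mass is renormalized over the k largest logits; every
    other vocabulary entry receives exactly zero, which sharpens the
    output distribution and reduces prediction ambiguity.
    k = 10 is an illustrative choice, not a value from the article.
    """
    topk_vals, topk_idx = logits.topk(k, dim=-1)
    probs = torch.zeros_like(logits)
    probs.scatter_(-1, topk_idx, F.softmax(topk_vals, dim=-1))
    return probs

def r_drop_loss(logits1: torch.Tensor,
                logits2: torch.Tensor,
                labels: torch.Tensor,
                alpha: float = 4.0,
                pad_id: int = -100) -> torch.Tensor:
    """R-Drop fine-tuning loss for a seq2seq model such as mT5.

    logits1/logits2: (batch, seq_len, vocab) outputs from two forward
    passes of the same batch with independent dropout masks. The loss
    is the mean cross-entropy of both passes plus a symmetric KL term
    that pulls the two predictive distributions together.
    alpha = 4.0 is an illustrative weight, not a value from the article.
    """
    ce = 0.5 * (
        F.cross_entropy(logits1.transpose(1, 2), labels, ignore_index=pad_id)
        + F.cross_entropy(logits2.transpose(1, 2), labels, ignore_index=pad_id)
    )
    log_p = F.log_softmax(logits1, dim=-1)
    log_q = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (
        F.kl_div(log_p, log_q, reduction="batchmean", log_target=True)
        + F.kl_div(log_q, log_p, reduction="batchmean", log_target=True)
    )
    return ce + alpha * kl
```

In use, each training batch would be passed through the model twice with dropout active to obtain logits1 and logits2 for r_drop_loss, while sparse_softmax would replace the ordinary softmax over decoder logits at prediction time.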
Authors
LI Qing (李清); WAN Weibing (万卫兵) — School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
Source
Electronic Science and Technology (《电子科技》), 2024, No. 7, pp. 16-24 (9 pages)
Funding
Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (Grant No. 2020AAA0109300).
Keywords
automatic text summarization
text generation
pre-trained model
small-sample data
strengthened regularization
sparse output
semantic representation learning
mT5