
A judicial document summarization method combining prompt learning and the Qwen large language model
Abstract: Although large language models have achieved good results on text summarization tasks in fields such as news and art, their lack of judicial domain knowledge and their difficulty in understanding the structural features and logical relationships of judicial documents lead to poor-quality summaries of such documents. This paper proposes a judicial document summarization method that combines prompt learning with the Qwen large language model. Judicial document data are used as the input for supervised fine-tuning (SFT) of the large language model, enhancing its applicability to the legal domain; at the same time, prompt templates that incorporate structural information and role instructions are designed to optimize summary generation so that it more accurately reflects the structural features and logical relationships of the documents. Experimental results show that the method improves the F1 scores on ROUGE-1, ROUGE-2, and ROUGE-L over the baseline model by 21.44%, 28.50%, and 28.97%, respectively, demonstrating that a large language model fine-tuned on judicial document data and augmented with structural information exhibits excellent performance and great application potential on the judicial document summarization task.

[Objective] The increasing maturity of large language model technology has facilitated its widespread application in downstream tasks across various vertical fields. Large language models have exhibited strong performance in text summarization tasks in general fields, such as news and art. However, the highly specific language style of the judicial field and the unique structural and logical complexity of judicial documents make it difficult for large language models to generate judicial document summaries. This study combines prompt learning with large language models to explore their performance in summarizing judicial documents. Prompt templates containing structural information, together with the judicial documents themselves, are used as inputs for fine-tuning large language models. As a result, the models can generate judicial document summaries that adhere to the judicial language style and to the structural and logical complexity of judicial documents.

[Methods] This study proposes a judicial document summarization method that combines prompt learning and the Qwen large language model. Judicial document data are used as the input for supervised fine-tuning (SFT) of the large language model to enhance its applicability in the judicial field. Simultaneously, prompt templates that incorporate structural information and role instructions are designed to optimize summary generation so that it more accurately reflects the structural characteristics and logical relationships of the documents. Following the format of the large language model's pre-training data, the fine-tuning data were constructed as question-answer pairs.

[Results] The experimental results show that the proposed method improves the F1 scores of the baseline model by 21.44%, 28.50%, and 28.97% on ROUGE-1, ROUGE-2, and ROUGE-L, respectively, and exceeds all comparison models. The ablation experiment demonstrates that the summarization method using prompt learning is superior to the method without prompt learning on all metrics, and that prompt learning significantly enhances the quality of the summaries generated by the large language model. The case study shows that, once prompt learning strengthens the model's perception of the structural information in a judicial document, the generated summary better captures and retains the document's key information, and its language style is closer to that of a real judicial document summary, further illustrating the effectiveness of the proposed method.

[Conclusions] This study integrates the structural information of judicial documents into the summary generation task of a large language model in the form of prompt templates. Prompt templates containing structural information assist the model during summary generation, enabling it to focus on the key information in a judicial document and to capture deeper semantic and logical relationships. The results demonstrate that, after fine-tuning on judicial document data and introducing structural information, the model shows excellent performance and great application potential on the judicial document summarization task. The proposed method can effectively enhance the capability of large language models for judicial document summarization.
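The [Methods] paragraph describes building SFT data as question-answer pairs from prompt templates that carry a role instruction and the document's structural sections. A minimal sketch of that construction is below; the paper does not publish its template wording, so the role instruction, section names, and field names here are illustrative assumptions, not the authors' actual template.

```python
# Hypothetical sketch: turn a structured judicial document into one SFT
# question-answer pair. The "question" is a prompt template combining a
# role instruction with section-labeled document text; the "answer" is
# the reference summary the model is fine-tuned to produce.

ROLE_INSTRUCTION = (  # illustrative wording, not the paper's template
    "You are a legal assistant. Summarize the following judicial document."
)

def build_qa_pair(sections: dict, reference_summary: str) -> dict:
    """Flatten a structured judicial document into a single SFT example."""
    # Label each structural part so the model can perceive document structure.
    body = "\n".join(f"[{name}] {text}" for name, text in sections.items())
    return {
        "question": f"{ROLE_INSTRUCTION}\n{body}",
        "answer": reference_summary,
    }

example = build_qa_pair(
    {"Facts": "...", "Court reasoning": "...", "Judgment": "..."},
    "The court held that ...",
)
```

In practice each pair would then be serialized into the chat or instruction format expected by the Qwen fine-tuning pipeline.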
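The ROUGE F1 scores reported in [Results] measure n-gram overlap between a generated summary and a reference summary. A minimal, self-contained sketch of ROUGE-N F1 follows; real evaluations would use a standard ROUGE toolkit, and Chinese text would first require word or character segmentation.

```python
from collections import Counter

def rouge_n_f1(candidate: list, reference: list, n: int = 1) -> float:
    """ROUGE-N F1: harmonic mean of n-gram precision and recall."""
    def ngrams(tokens):
        # Multiset of n-grams, so repeated n-grams are counted correctly.
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Unigram overlap of 2 out of 3 tokens on each side gives F1 = 2/3.
score = rouge_n_f1(["the", "court", "ruled"], ["the", "court", "held"])
```

ROUGE-L, also cited in the results, is computed from the longest common subsequence rather than fixed-length n-grams, but the precision/recall/F1 combination is analogous.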
Authors: LI Jiayi, HUANG Ruizhang, CHEN Yanping, LIN Chuan, QIN Yongbin (Text Computing & Cognitive Intelligence Engineering Research Center of National Education Ministry, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China; State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China)
Published in: Journal of Tsinghua University (Science and Technology), 2024, No. 12, pp. 2007-2018 (12 pages). Indexed in EI, CAS, and CSCD; Peking University core journal.
Funding: National Natural Science Foundation of China (62066008); Key Project of the Guizhou Provincial Science and Technology Foundation (Qiankehe Foundation [2020]1Z055); Key Project of the Guizhou Provincial Science and Technology Foundation (Qiankehe Major Special Project [2024]003).
Keywords: judicial document summarization; text summarization; large language model; prompt learning