Abstract
[Objective] This study proposes a joint event extraction model that uses automatically constructed templates to elicit knowledge from pre-trained language models, addressing the shortcomings of existing sequence-labeling and text-generation approaches to event extraction. [Methods] First, we designed an automatic template construction strategy based on event prompts to generate unified prompt templates. Then, we introduced an event prompt embedding layer for the event prompts at the encoding level. Next, a pre-trained BART model captured the semantic information of the sentence and generated the corresponding prediction sequence, from which the trigger words and arguments of each event type were extracted, achieving joint extraction of event triggers and arguments. [Results] On an event dataset containing texts with complex event information, the F1 scores for event trigger extraction and event argument extraction reached 77.67% and 65.06%, exceeding the best baseline method by 2.43 and 1.62 percentage points, respectively. [Limitations] The model is restricted to sentence-level text and tunes the event prompts only at the encoding layer. [Conclusions] Based on prompt tuning, the proposed model reduces the cost of template construction while maintaining comparable or better performance; it can recognize texts with complex event information and effectively improves multi-label classification of event elements.
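To make the prompt-then-generate pipeline described in the Methods concrete, below is a minimal sketch, not the authors' implementation. It assumes a HuggingFace-style Chinese BART checkpoint (fnlp/bart-base-chinese is used as a stand-in, which ships with a BERT-style tokenizer); the template wording, the [MASK] slots, and the parsing of the generated sequence are illustrative assumptions, and the paper's event prompt embedding layer is not reproduced here.

```python
# Minimal sketch of prompt-based generative event extraction with BART.
# Assumptions (not from the paper): the checkpoint name, the template
# wording, and the way triggers/arguments are read off the output.
from transformers import BertTokenizer, BartForConditionalGeneration

tokenizer = BertTokenizer.from_pretrained("fnlp/bart-base-chinese")
model = BartForConditionalGeneration.from_pretrained("fnlp/bart-base-chinese")

def build_prompt(sentence: str, event_type: str) -> str:
    # Automatically construct a unified prompt template from the event
    # type (the "event prompt"); this exact phrasing is hypothetical.
    return f"{sentence} 事件类型:{event_type} 触发词:[MASK] 论元:[MASK]"

def extract(sentence: str, event_type: str) -> str:
    inputs = tokenizer(build_prompt(sentence, event_type), return_tensors="pt")
    # BART decodes a prediction sequence; in the full model, the trigger
    # word and arguments for the event type are parsed from this text,
    # yielding joint trigger-argument extraction.
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(extract("某公司昨日宣布裁员三百人。", "组织关系-裁员"))
```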
Authors
Chen Nuo; Li Xuhui (School of Information Management, Wuhan University, Wuhan 430072, China; Big Data Institute, Wuhan University, Wuhan 430072, China)
Source
Data Analysis and Knowledge Discovery (《数据分析与知识发现》), 2023, No. 6, pp. 86-98 (13 pages). Indexed in CSCD and the Peking University Core Journals list.
Funding
This work was supported by the Major Research Plan of the National Natural Science Foundation of China (Grant No. 91646206) and the Major Project of the National Social Science Fund of China (Grant No. 21&ZD334).
Keywords
Chinese Event Extraction
Pre-trained Language Model
Prompt Learning
Joint Learning