
An End-to-end Event Coreference Resolution Method
Abstract: Event coreference resolution (ECR) determines whether different event mentions refer to the same event. ECR not only effectively alleviates the information redundancy found in event extraction tasks, but also provides an effective path toward event completion. Although many researchers have studied ECR with deep learning methods and achieved notable results, most ECR models still suffer from insufficient representation of explicit information, noise introduced by arguments, and the sparse distribution of coreferent events. To address these problems, an end-to-end ECR method that exploits explicit argument information and event chain reconstruction is proposed. First, the OneIE event extraction model is used to extract event triggers and arguments, yielding structured event information. Then, a Transformer encoder represents the context of each event mention, and the extraction confidence scores are incorporated into the argument encoding to mitigate error propagation. Meanwhile, a gating mechanism decomposes the argument information along the horizontal and vertical directions of the trigger, and the two directions are fused according to the correlation coefficient between the argument and the trigger, filtering out argument noise. Afterwards, a feed-forward network computes the coreference score for each pair of event mentions. Finally, event chains are reconstructed to verify the validity of event mentions and to correct the training bias caused by the sparsity of coreferent events. The proposed model is trained and evaluated on the public ACE2005 dataset. Experimental results show that the model is competitive on end-to-end ECR, with CoNLL and AVG scores on average 5.67% and 6.24% higher than the baseline models.
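The abstract describes two central components: a gated fusion that filters argument noise relative to the trigger, and a feed-forward scorer over event-mention pairs. The sketch below is a minimal PyTorch reading of those two steps, assuming cosine similarity as the trigger-argument correlation coefficient and a standard [e1; e2; e1⊙e2] pairing feature; all class names, dimensions, and the exact gating formulation are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of the gated argument-trigger fusion and pairwise
# coreference scoring described in the abstract. Layer names, dimensions,
# and the horizontal/vertical decomposition are assumptions.
import torch
import torch.nn as nn


class GatedArgumentFusion(nn.Module):
    """Filters argument noise by splitting argument information into two
    gated components relative to the trigger and re-mixing them with a
    trigger-argument correlation coefficient (hypothetical formulation)."""

    def __init__(self, hidden: int):
        super().__init__()
        self.horizontal_gate = nn.Linear(2 * hidden, hidden)
        self.vertical_gate = nn.Linear(2 * hidden, hidden)
        self.out = nn.Linear(hidden, hidden)

    def forward(self, trigger: torch.Tensor, argument: torch.Tensor,
                arg_confidence: torch.Tensor) -> torch.Tensor:
        # Down-weight low-confidence arguments from the upstream extractor
        # (e.g. OneIE) to limit error propagation.
        argument = argument * arg_confidence.unsqueeze(-1)

        pair = torch.cat([trigger, argument], dim=-1)
        horizontal = torch.sigmoid(self.horizontal_gate(pair)) * argument
        vertical = torch.sigmoid(self.vertical_gate(pair)) * argument

        # Correlation between trigger and argument decides how much of
        # each gated component is kept (cosine similarity assumed here).
        alpha = torch.cosine_similarity(trigger, argument, dim=-1).unsqueeze(-1)
        fused = alpha * horizontal + (1.0 - alpha) * vertical
        return self.out(fused) + trigger


class PairwiseCoreferenceScorer(nn.Module):
    """Feed-forward scorer over a pair of event-mention representations."""

    def __init__(self, hidden: int):
        super().__init__()
        self.ffnn = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        # Concatenate the two mentions and their element-wise product,
        # a common pairing scheme; the paper's exact features may differ.
        feats = torch.cat([e1, e2, e1 * e2], dim=-1)
        return self.ffnn(feats).squeeze(-1)  # coreference logit


if __name__ == "__main__":
    hidden = 768  # e.g. the Transformer encoder's hidden size
    fusion = GatedArgumentFusion(hidden)
    scorer = PairwiseCoreferenceScorer(hidden)

    trigger = torch.randn(2, hidden)        # two event mentions
    argument = torch.randn(2, hidden)       # one argument per mention (simplified)
    confidence = torch.tensor([0.9, 0.4])   # extraction confidence scores

    mentions = fusion(trigger, argument, confidence)
    score = scorer(mentions[0:1], mentions[1:2])
    print(torch.sigmoid(score))  # probability the two mentions corefer
```

In this reading, the event chain reconstruction mentioned in the abstract would act as a post-hoc check on the predicted clusters rather than on the pairwise scores themselves.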
Authors: LIU Liu, JIANG Guoquan, HUAN Zhigang, LIU Shanshan, LIU Ming, DING Kun (The Sixty-Third Research Institute, National University of Defense Technology, Nanjing 210007, China; School of Information Engineering, Suqian University, Suqian 223800, China)
Source: Advanced Engineering Sciences (《工程科学与技术》), indexed by EI, CAS, CSCD, and the Peking University Core list, 2024, No. 1, pp. 82-88 (7 pages)
Funding: National Natural Science Foundation of China (71901215); Jiangsu Province "333 Project" Training Fund (BRA2020418); China Postdoctoral Science Foundation (2021MD703983); Research Program of National University of Defense Technology (ZK20-46); Suqian Science and Technology Program (K202128)
Keywords: event coreference resolution; natural language processing (NLP); pre-trained language model