
Question answering-based event detection method fusing entity information and temporal feature
Abstract: To address the shortcomings of existing question-answering methods in handling the ambiguity of event triggers, a question answering-based event detection method fusing entity information and temporal features, named EDQA-EITF, was proposed. A question-answering framework based on RoBERTa was constructed to enhance the model's semantic representation ability. Prior information such as entities and entity types was explicitly added to the model's input sequence to further help the model classify triggers according to the contextual semantics of the sentence. A minimal gated unit (MGU) and a Transformer encoder were used to model the temporal dependencies in the input sequence, improving the model's ability to read and understand the semantic relations and syntactic structure of a sentence. Experimental results on a public dataset show that the proposed method achieves better performance in event detection and effectively alleviates the trigger ambiguity problem.
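The entity-prior construction described in the abstract lends itself to a short illustration. The sketch below (plain Python; the [ENT] marker, the type tags, and the question wording are illustrative assumptions, not the paper's exact format) shows one way entity mentions and their types can be serialized into the question-answering input so the encoder sees them explicitly alongside the sentence:

# A minimal sketch of exposing entity priors in a QA-style input.
# The marker token, tag set, and question template are assumptions.
def build_input(question, sentence, entities):
    """entities: list of (mention, entity_type) pairs found in the sentence."""
    # Serialize the priors, e.g. "[ENT] US troops : PER [ENT] Baghdad : GPE",
    # so the encoder receives them as explicit context.
    entity_span = " ".join(f"[ENT] {m} : {t}" for m, t in entities)
    # RoBERTa-style sentence-pair encoding: question as segment A,
    # entity priors plus the original sentence as segment B.
    return f"<s> {question} </s></s> {entity_span} {sentence} </s>"

print(build_input(
    "Which word triggers an Attack event?",
    "US troops entered Baghdad on Thursday.",
    [("US troops", "PER"), ("Baghdad", "GPE")],
))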
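The temporal-modeling stage can be sketched in the same spirit. The PyTorch fragment below is a minimal sketch, assuming a single unidirectional MGU layer, one Transformer encoder layer, and a 34-way token classification head (hidden size, head count, and label count are assumptions, not values from the paper). It implements the standard minimal gated unit formulation, which merges the GRU's update and reset gates into a single forget gate, on top of contextual token embeddings such as RoBERTa's:

import torch
import torch.nn as nn

class MGUCell(nn.Module):
    """Minimal gated unit: a GRU variant with a single (forget) gate."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.f_gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.h_cand = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x_t, h_prev):
        # f_t = sigmoid(W_f x_t + U_f h_{t-1} + b_f)
        f = torch.sigmoid(self.f_gate(torch.cat([x_t, h_prev], dim=-1)))
        # Candidate state is computed from the gated previous state.
        h_tilde = torch.tanh(self.h_cand(torch.cat([x_t, f * h_prev], dim=-1)))
        return (1 - f) * h_prev + f * h_tilde

class TemporalHead(nn.Module):
    """MGU followed by a Transformer encoder over contextual embeddings."""
    def __init__(self, hidden_size=768, num_labels=34):
        super().__init__()
        self.mgu = MGUCell(hidden_size, hidden_size)
        self.encoder = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=8, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, embeddings):              # (batch, seq_len, hidden)
        h = embeddings.new_zeros(embeddings.size(0), embeddings.size(2))
        states = []
        for t in range(embeddings.size(1)):     # unroll the MGU over time
            h = self.mgu(embeddings[:, t], h)
            states.append(h)
        seq = torch.stack(states, dim=1)
        return self.classifier(self.encoder(seq))  # per-token trigger logits

logits = TemporalHead()(torch.randn(2, 16, 768))  # e.g. RoBERTa outputs
print(logits.shape)                               # torch.Size([2, 16, 34])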
Authors: MA Yu-hang, SONG Bao-yan, DING Lin-lin, LU Wen-yi, JI Wan-ting (College of Information, Liaoning University, Shenyang 110036, China)
Source: Computer Engineering and Design, 2024, No. 4, pp. 1218-1224 (7 pages); Peking University core journal list
Funding: Applied Basic Research Program of Liaoning Province (2022JH2/101300250); National Natural Science Foundation of China (62072220); Central Government-Guided Local Science and Technology Development Fund of Liaoning Province (2022JH6/100100032); Natural Science Foundation of Liaoning Province (2022-KF-13-06)
Keywords: event detection; question answering; RoBERTa; temporal feature; prior information; minimal gated unit; Transformer