
Neural Automatic Evaluation of Machine Translation Method Combined with XLM Word Representation
Abstract: The automatic evaluation of machine translation plays an important role in promoting the development and application of machine translation. It generally measures the quality of a machine translation by computing its similarity to a human reference translation. This paper uses the cross-lingual pre-trained language model XLM to map source sentences, machine translations, and references into the same semantic space, and combines layer-wise attention and intra-attention to extract difference features between source sentences and machine translations, between machine translations and references, and between source sentences and references. These features are then integrated into an automatic evaluation method based on a Bi-LSTM neural network. Experimental results on the WMT 19 Metrics task dataset show that the neural automatic evaluation method combined with XLM word representations significantly improves correlation with human judgments.
Authors: HU Wei; LI Maoxi; QIU Bailian; WANG Mingwen (School of Computer and Information Engineering, Jiangxi Normal University, Nanchang, Jiangxi 330022, China; Center of Modern Education Technology, Jiangxi Open University, Nanchang, Jiangxi 330046, China; Management Science and Engineering Research Center, Jiangxi Normal University, Nanchang, Jiangxi 330022, China)
Source: Journal of Chinese Information Processing (《中文信息学报》, CSCD, Peking University Core), 2023, No. 9, pp. 46-54 (9 pages)
Funding: National Natural Science Foundation of China (61662031).
Keywords: machine translation; automatic evaluation of machine translation; cross-lingual pre-trained language model; difference features
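The abstract contrasts three sentence pairs (source vs. machine translation, machine translation vs. reference, source vs. reference) through difference features extracted from XLM representations. The paper's exact feature formula is not reproduced on this page; a common formulation in matching-based evaluation models concatenates the two sentence vectors with their element-wise absolute difference and product. The sketch below is a minimal illustration under that assumption, with toy vectors standing in for real XLM sentence embeddings (function and variable names are hypothetical):

```python
import numpy as np

def difference_features(h_a: np.ndarray, h_b: np.ndarray) -> np.ndarray:
    """Matching features between two sentence representations:
    [h_a; h_b; |h_a - h_b|; h_a * h_b].
    This is a common formulation, assumed here for illustration;
    the paper's exact combination is not given on this page."""
    return np.concatenate([h_a, h_b, np.abs(h_a - h_b), h_a * h_b])

# Toy 4-dimensional stand-ins for XLM sentence embeddings of the
# source sentence (s), machine translation (t), and reference (r).
s = np.array([0.1, 0.2, 0.3, 0.4])
t = np.array([0.1, 0.0, 0.3, 0.5])
r = np.array([0.2, 0.2, 0.3, 0.4])

# Pairwise difference features for the three pairs named in the abstract;
# in the described method these would feed a Bi-LSTM-based scorer.
f_st = difference_features(s, t)  # source vs. machine translation
f_tr = difference_features(t, r)  # machine translation vs. reference
f_sr = difference_features(s, r)  # source vs. reference
print(f_st.shape)  # (16,)
```

In the actual method the representations come from XLM with layer-wise and intra-attention rather than fixed vectors; the sketch only shows the shape of the pairwise feature construction.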