Abstract
Transformer-based sequence-to-sequence models are among the best-performing machine translation models. When generating a translation, they typically produce target words one by one from left to right, so the word generated at the current position cannot exploit information from the not-yet-generated words that follow it; this insufficient decoding degrades translation quality. To alleviate this problem, this paper proposes a neural machine translation model based on re-decoding. The model treats the already generated machine translation as an approximate target-language context and re-decodes each word of the translation in turn; during re-decoding, the masked multi-head attention in the Transformer decoder masks only the current-position word of the generated translation, so every re-generated word can make full use of the target-language context. Experimental results on the test sets of several WMT machine translation evaluation tasks show that the re-decoding approach significantly improves machine translation quality.
The Transformer is one of the best-performing machine translation models. Generating tokens one by one from left to right, this approach lacks the guidance of future contextual information. To alleviate this issue, we propose a neural machine translation model based on re-decoding. The model treats the generated machine translation output as an approximate contextual environment of the target language, and then re-decodes each token in the output successively. The masked multi-head attention of the Transformer decoder masks only the current-position token in the generated translation output. As a result, every re-decoded token can make full use of its contextual information. Experimental results on several WMT test sets show that the quality of machine translation is improved significantly by leveraging re-decoding.
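The key mechanical change the abstract describes is in the decoder's attention mask: instead of the standard causal mask (each position sees only earlier positions), re-decoding masks only the current position, exposing both left and right context from the draft translation. A minimal sketch of the two masks, assuming boolean matrices where `True` means "attention blocked" (the function names are illustrative, not from the paper):

```python
import numpy as np

def causal_mask(n: int) -> np.ndarray:
    # Standard Transformer decoder mask: position i may attend
    # only to positions <= i, so the strict upper triangle is blocked.
    return np.triu(np.ones((n, n), dtype=bool), k=1)

def redecoding_mask(n: int) -> np.ndarray:
    # Re-decoding mask per the abstract: when re-generating the token
    # at position i, only position i itself in the draft translation
    # is blocked; all other positions (left AND right) stay visible.
    return np.eye(n, dtype=bool)

n = 5
# Causal mask blocks n*(n-1)/2 entries; re-decoding blocks only n.
print(causal_mask(n).sum())      # 10
print(redecoding_mask(n).sum())  # 5
```

Row `i` of `redecoding_mask` leaves every column except `i` unmasked, which is what lets each re-generated token condition on the full (approximate) target-side context.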
Authors
Zong Qinqin
Li Maoxi
ZONG Qinqin; LI Maoxi (School of Computer and Information Engineering, Jiangxi Normal University, Nanchang, Jiangxi 330022, China)
Source
《中文信息学报》
CSCD
Peking University Core Journals (北大核心)
2021, No. 6, pp. 39-46 (8 pages)
Journal of Chinese Information Processing
Funding
National Natural Science Foundation of China (61662031, 61462044).