Abstract
Most current machine translation models are autoregressive: the decoder cannot generate the translation in parallel, so the generation rate is low. To address these problems, this paper conducts experiments on translation models based on the Transformer and the non-autoregressive Transformer (NAT), applying knowledge distillation and cross-language word embedding to a Mongolian-Chinese corpus. Experimental results show that the NAT model with knowledge distillation achieves a significant improvement in BLEU score while also increasing the generation rate. After knowledge distillation and cross-language word embedding, the NAT model significantly reduces the dependency between the source and target languages and improves the BLEU score of Mongolian-Chinese machine translation: compared with the Transformer model, the BLEU score increases by 2.8 and time consumption decreases by 19.34 h.
Authors
ZHAO Xu; SU Yila; RENQING Dao’erji; SHI Bao (College of Information Engineering, Inner Mongolia University of Technology, Hohhot 010080, China)
Source
Computer Engineering and Applications (《计算机工程与应用》)
CSCD
PKU Core Journal (北大核心)
2022, No. 12, pp. 310-316 (7 pages)
Funding
National Natural Science Foundation of China (61966028, 61966027).
Keywords
Transformer model
non-autoregressive Transformer (NAT) model
knowledge distillation
cross-language word embedding