Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 61876120 and 61673290).
Abstract: Sentence alignment is a basic task in natural language processing that aims to extract high-quality parallel sentences automatically. Motivated by the observation that aligned sentence pairs contain a larger number of aligned words than unaligned ones, we treat word translation as one of the most useful forms of external knowledge. In this paper, we show how to explicitly integrate word translation into neural sentence alignment. Specifically, this paper proposes three cross-lingual encoders to incorporate word translation: 1) a Mixed Encoder that learns word and translation annotation vectors over sequences in which words and their translations are mixed alternately; 2) a Factored Encoder that views word translations as features and encodes words and their translations by concatenating their embeddings; and 3) a Gated Encoder that uses a gate mechanism to selectively control the amount of word translation information passed forward. Experiments on the NIST MT and OpenSubtitles Chinese-English datasets, in both non-monotonic and monotonic scenarios, demonstrate that all the proposed encoders significantly improve sentence alignment performance.
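The three encoder variants differ only in how the translation signal enters the sentence encoder. The minimal PyTorch sketch below illustrates one plausible reading of the abstract; the bidirectional GRU backbone, the additive gate formulation, and the assumption of one translation embedding per source token are illustrative choices, not details confirmed by the paper.

```python
# Illustrative sketch (not the authors' code): three ways to inject word-translation
# signals into a sentence encoder, assuming a bidirectional GRU backbone and
# pre-computed translation embeddings aligned one-to-one with the source tokens.
import torch
import torch.nn as nn


class MixedEncoder(nn.Module):
    """Interleaves word and translation embeddings into one doubled-length sequence."""
    def __init__(self, emb_dim, hid_dim):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, word_emb, trans_emb):
        # word_emb, trans_emb: (batch, seq_len, emb_dim)
        batch, seq_len, dim = word_emb.shape
        mixed = torch.stack([word_emb, trans_emb], dim=2)   # (batch, seq_len, 2, dim)
        mixed = mixed.view(batch, seq_len * 2, dim)         # w1, t1, w2, t2, ...
        out, _ = self.rnn(mixed)
        return out


class FactoredEncoder(nn.Module):
    """Treats the translation as a feature: concatenate embeddings before encoding."""
    def __init__(self, emb_dim, hid_dim):
        super().__init__()
        self.rnn = nn.GRU(2 * emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, word_emb, trans_emb):
        out, _ = self.rnn(torch.cat([word_emb, trans_emb], dim=-1))
        return out


class GatedEncoder(nn.Module):
    """Uses a learned gate to control how much translation information flows forward."""
    def __init__(self, emb_dim, hid_dim):
        super().__init__()
        self.gate = nn.Linear(2 * emb_dim, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, word_emb, trans_emb):
        # Gate in [0, 1] decides, per dimension, how much translation signal to pass forward.
        g = torch.sigmoid(self.gate(torch.cat([word_emb, trans_emb], dim=-1)))
        fused = word_emb + g * trans_emb
        out, _ = self.rnn(fused)
        return out


if __name__ == "__main__":
    words, trans = torch.randn(2, 7, 64), torch.randn(2, 7, 64)
    for enc in (MixedEncoder(64, 128), FactoredEncoder(64, 128), GatedEncoder(64, 128)):
        print(type(enc).__name__, enc(words, trans).shape)
```

Note that the Mixed Encoder doubles the sequence length, while the Factored and Gated Encoders keep it unchanged and differ only in whether the translation embedding is appended as a feature or merged through a learned gate.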