Reducing Word Omission Errors in Neural Machine Translation: A Contrastive Learning Approach
Abstract
While neural machine translation (NMT) has achieved remarkable success, NMT systems are prone to making word omission errors. In this work, we propose a contrastive learning approach to reducing word omission errors in NMT. The basic idea is to train the NMT model to assign a higher probability to a ground-truth translation and a lower probability to an erroneous translation, which is automatically constructed from the ground-truth translation by omitting words. We design different types of negative examples depending on the number of omitted words, word frequency, and part of speech. Experiments on Chinese-to-English, German-to-English, and Russian-to-English translation tasks show that our approach is effective in reducing word omission errors and achieves better translation performance than three baseline methods.
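The core idea above can be illustrated with a minimal sketch: construct a negative example by deleting words from the ground-truth translation, then apply a max-margin contrastive objective that pushes the model's log-probability of the ground truth above that of the corrupted version. This is an assumption-laden illustration, not the paper's actual implementation; the frequency-based omission strategy shown is one of the several strategies the abstract mentions, and `freqs`, `omit_words`, and `contrastive_loss` are hypothetical names.

```python
def omit_words(tokens, freqs, k=1):
    """Build a negative example by deleting the k least-frequent
    words from the ground-truth translation (a frequency-based
    omission strategy; `freqs` is a hypothetical corpus-frequency
    table mapping word -> count)."""
    # Rank token positions by corpus frequency, rarest first.
    order = sorted(range(len(tokens)), key=lambda i: freqs.get(tokens[i], 0))
    drop = set(order[:k])
    return [t for i, t in enumerate(tokens) if i not in drop]

def contrastive_loss(logp_pos, logp_neg, margin=1.0):
    """Max-margin contrastive term: zero once the ground truth
    outscores the omission-corrupted negative by at least `margin`
    in log-probability, positive otherwise."""
    return max(0.0, margin - (logp_pos - logp_neg))

# Toy usage: drop the rarest word, then score the pair.
freqs = {"the": 1000, "cat": 50, "miaowed": 2}
negative = omit_words(["the", "cat", "miaowed"], freqs, k=1)
# negative -> ["the", "cat"] (the rarest word is omitted)
loss_sep = contrastive_loss(-10.0, -15.0)   # well separated -> 0.0
loss_close = contrastive_loss(-10.0, -10.5) # margin violated -> 0.5
```

In practice the log-probabilities would come from the NMT model's decoder, and this term would be combined with the usual maximum-likelihood objective rather than used alone.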