Tagged Back-translation Revisited: Why Does It Really Work?
Abstract
In this paper, we show that neural machine translation (NMT) systems trained on large back-translated data overfit some of the characteristics of machine-translated texts. Such NMT systems better translate human-produced translations, i.e., translationese, but can substantially degrade the translation quality of original texts. Our analysis reveals that adding a simple tag to back-translations prevents this quality degradation and improves overall translation quality on average by helping the NMT system distinguish back-translated data from original parallel data during training. We also show that, in contrast to high-resource configurations, NMT systems trained in low-resource settings are much less prone to overfitting back-translations. We conclude that back-translations in the training data should always be tagged, especially when the origin of the text to be translated is unknown.
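The tagging described in the abstract amounts to a simple preprocessing step: prepending a reserved token to the source side of every back-translated sentence pair before mixing it with genuine parallel data. Below is a minimal sketch of that step; the `<BT>` token, function names, and example sentences are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of tagging back-translations before training an NMT system.
# The <BT> token is an assumed choice; it must also be added to the source
# vocabulary so the model treats it as a single reserved symbol.

BT_TAG = "<BT>"

def tag_back_translations(src_lines, is_back_translated):
    """Prepend BT_TAG to each source sentence that came from back-translation."""
    tagged = []
    for line, is_bt in zip(src_lines, is_back_translated):
        line = line.strip()
        tagged.append(f"{BT_TAG} {line}" if is_bt else line)
    return tagged

# Hypothetical usage: mix genuine parallel data with back-translated data.
genuine_src = ["das ist ein Test"]
bt_src = ["dies ist ein weiterer Satz"]  # machine-translated from monolingual target text

train_src = tag_back_translations(
    genuine_src + bt_src,
    [False] * len(genuine_src) + [True] * len(bt_src),
)
print(train_src)  # ['das ist ein Test', '<BT> dies ist ein weiterer Satz']
```

At inference time the tag is simply omitted, so the model translates untagged input in the mode it learned from genuine parallel data.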