Learning a Multi-Domain Curriculum for Neural Machine Translation
2020
Abstract
Most data selection research in machine translation focuses on improving a single domain. We perform data selection for multiple domains at once. This is achieved by carefully introducing instance-level domain-relevance features and automatically constructing a training curriculum to gradually concentrate on multi-domain relevant and noise-reduced data batches. Both the choice of features and the use of curriculum are crucial for balancing and improving all domains, including out-of-domain. In large-scale experiments, the multi-domain curriculum simultaneously reaches or outperforms the individual performance and brings solid gains over no-curriculum training.
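The abstract's pipeline — score each training example for its relevance to several domains, then train on batches drawn from a pool that gradually narrows to the highest-scoring examples — can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the feature scores, the weighted combination, and the linear pool-shrinking schedule are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of multi-domain curriculum data selection (assumed
# scoring and schedule, not the paper's exact method).
import random

random.seed(0)  # deterministic toy run

def combined_score(domain_scores, weights):
    """Weighted sum of per-domain relevance features for one example."""
    return sum(w * s for w, s in zip(weights, domain_scores))

def curriculum_batch(examples, weights, progress, batch_size):
    """Sample a batch from the top-ranked fraction of the data.

    `progress` in [0, 1]: as training advances, the eligible pool
    shrinks toward the most multi-domain-relevant examples.
    """
    ranked = sorted(examples,
                    key=lambda ex: combined_score(ex["scores"], weights),
                    reverse=True)
    # Pool shrinks linearly from 100% to 20% of the data (illustrative choice).
    pool_frac = 1.0 - 0.8 * progress
    pool = ranked[: max(batch_size, int(len(ranked) * pool_frac))]
    return random.sample(pool, batch_size)

# Toy corpus: each example carries relevance scores for two domains.
data = [{"id": i, "scores": (random.random(), random.random())}
        for i in range(100)]
early = curriculum_batch(data, weights=(0.5, 0.5), progress=0.0, batch_size=8)
late = curriculum_batch(data, weights=(0.5, 0.5), progress=1.0, batch_size=8)
```

Early in training the batch is drawn from the full corpus; late in training it is restricted to the top-scoring 20%, which is the "gradually concentrate" behavior the abstract describes.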