Training Deeper Neural Machine Translation Models with Transparent Attention
2018, pp. 3028–3033
Top 1% of 2018 papers
Abstract
While current state-of-the-art NMT models, such as RNN seq2seq and Transformers, possess a large number of parameters, they are still shallow in comparison to convolutional models used for both text and vision applications. In this work we attempt to train significantly (2-3x) deeper Transformer and Bi-RNN encoders for machine translation. We propose a simple modification to the attention mechanism that eases the optimization of deeper models, and results in consistent gains of 0.7-1.1 BLEU on the benchmark WMT'14 English-German and WMT'15 Czech-English tasks for both architectures.
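The "simple modification" the abstract refers to is the paper's transparent attention: rather than letting the decoder attend only to the top encoder layer, each decoder layer attends to a learned softmax-weighted combination of all encoder layer outputs, which shortens gradient paths into the lower encoder layers. The following is a minimal sketch of that combination step, not the authors' code; the class name, parameter shapes, and the assumption that the embedding output counts as "layer 0" are illustrative choices made here.

```python
# Minimal sketch of a transparent-attention style combiner (illustrative, not the
# authors' implementation). Each decoder layer j receives a softmax-weighted sum
# of all encoder layer outputs instead of just the top layer.
import torch
import torch.nn as nn


class TransparentAttentionCombiner(nn.Module):
    """Combines all encoder layer outputs (embeddings included as 'layer 0')
    into one attention memory per decoder layer, via learned scalar weights."""

    def __init__(self, num_encoder_layers: int, num_decoder_layers: int):
        super().__init__()
        # One scalar logit per (decoder layer, encoder layer) pair.
        self.logits = nn.Parameter(
            torch.zeros(num_decoder_layers, num_encoder_layers + 1)
        )

    def forward(self, encoder_states):
        # encoder_states: list of (batch, src_len, d_model) tensors,
        # embeddings first, then one tensor per encoder layer.
        stacked = torch.stack(encoder_states, dim=0)      # (L+1, B, S, D)
        weights = torch.softmax(self.logits, dim=-1)      # (J, L+1), rows sum to 1
        # combined[j] = sum_i weights[j, i] * encoder_states[i]
        combined = torch.einsum("jl,lbsd->jbsd", weights, stacked)
        return combined                                   # (J, B, S, D)


# Usage sketch: a 12-layer encoder feeding a 6-layer decoder; decoder layer j
# would use combined[j] as its cross-attention memory.
combiner = TransparentAttentionCombiner(num_encoder_layers=12, num_decoder_layers=6)
states = [torch.randn(2, 7, 512) for _ in range(13)]
combined = combiner(states)
```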