Learning to Parse and Translate Improves Neural Machine Translation
2017, pp. 72–78
Top 1% of 2017 papers
Abstract
Relatively little attention has been paid to incorporating linguistic priors into neural machine translation, and much of the previous work was further constrained to priors on the source side. In this paper, we propose a hybrid model, called NMT+RNNG, that learns to parse and translate by integrating a recurrent neural network grammar (RNNG) into attention-based neural machine translation. Our approach encourages the neural machine translation model to incorporate linguistic priors during training, and lets it translate on its own afterward. Extensive experiments with four language pairs show the effectiveness of the proposed NMT+RNNG.
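The abstract describes a model trained jointly on translation and parsing, with the parsing component used only at training time. A minimal sketch of such a joint objective is below; the function name, the additive weighted combination, and the weight value are illustrative assumptions, not the paper's exact formulation.

```python
def joint_loss(translation_nll: float, parse_action_nll: float,
               parse_weight: float = 1.0) -> float:
    """Hypothetical joint objective: translation negative log-likelihood
    plus a weighted parsing (RNNG action) negative log-likelihood.
    At test time the parsing term is dropped, so the trained model
    translates on its own without emitting parse actions."""
    return translation_nll + parse_weight * parse_action_nll

# Example: combine a translation loss of 2.5 nats with a parsing loss
# of 1.2 nats, down-weighted by 0.5 (weights are assumptions).
loss = joint_loss(translation_nll=2.5, parse_action_nll=1.2, parse_weight=0.5)
```

The key property this sketch illustrates is that the parsing signal shapes the shared decoder representations during training but adds no cost at inference.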