Tree-to-Sequence Attentional Neural Machine Translation
2016, pp. 823–833
Top 1% of 2016 papers by citations.
Abstract
Most of the existing Neural Machine Translation (NMT) models focus on the conversion of sequential data and do not directly use syntactic information. We propose a novel end-to-end syntactic NMT model, extending a sequence-to-sequence model with the source-side phrase structure. Our model has an attention mechanism that enables the decoder to generate a translated word while softly aligning it with phrases as well as words of the source sentence. Experimental results on the WAT'15 English-to-Japanese dataset demonstrate that our proposed model considerably outperforms sequence-to-sequence attentional NMT models and compares favorably with the state-of-the-art tree-to-string SMT system.
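The core idea in the abstract is an attention mechanism whose candidate set contains phrase-level (tree-node) encoder states in addition to the usual word-level states, so each decoder step can softly align with either. Below is a minimal NumPy sketch of that idea under assumed details: the bilinear scoring form, the pooling of word and phrase annotations, and all names (`tree_to_seq_attention`, `W_a`) are illustrative, not the authors' implementation.

```python
# Minimal sketch: attention over both word-level and phrase-level
# encoder states, so a decoder step can softly align with phrases
# as well as words of the source sentence. Assumed bilinear scoring;
# names and shapes are illustrative, not the paper's exact model.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def tree_to_seq_attention(word_states, phrase_states, decoder_state, W_a):
    """Compute a context vector over word and phrase annotations.

    word_states:   (n_words, d)   hidden states of the sequential encoder
    phrase_states: (n_phrases, d) hidden states of tree (phrase) nodes
    decoder_state: (d,)           current decoder hidden state
    W_a:           (d, d)         bilinear attention weights (assumed form)
    """
    # Pool word and phrase annotations so attention ranges over both.
    annotations = np.vstack([word_states, phrase_states])  # (n_w + n_p, d)
    scores = annotations @ (W_a @ decoder_state)           # bilinear scores
    alpha = softmax(scores)                                # soft alignment
    context = alpha @ annotations                          # weighted sum
    return context, alpha

# Toy usage: 4 source words, 3 phrase nodes, hidden size 8.
rng = np.random.default_rng(0)
d = 8
words, phrases = rng.normal(size=(4, d)), rng.normal(size=(3, d))
s_t, W_a = rng.normal(size=d), rng.normal(size=(d, d))
ctx, alpha = tree_to_seq_attention(words, phrases, s_t, W_a)
print(alpha.round(3), ctx.shape)  # alignment weights over 7 units, (8,)
```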
Related Papers
- A Novel Hybrid Approach to Improve Neural Machine Translation Decoding using Phrase-Based Statistical Machine Translation (2021)
- Improving Neural Machine Translation through Phrase-based Forced Decoding (2017)
- Children's phrase set for text input method evaluations (2006)
- Phrase-based Chinese Mongolian statistical machine translation (2010)
- Phrase-Based Statistical Machine Translation by Using Reordering Search and Additional Features (2006)