A Convolutional Encoder Model for Neural Machine Translation
Abstract
The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. We present a faster and simpler architecture based on a succession of convolutional layers. This allows the entire source sentence to be encoded simultaneously, whereas computation in recurrent networks is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve accuracy competitive with the state of the art, and on WMT'15 English-German we outperform several recently published results. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. We speed up CPU decoding by more than a factor of two at the same or higher accuracy than a strong bidirectional LSTM.
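A minimal sketch of the core idea (not the authors' exact architecture): a stack of 1-D convolutions over source word embeddings encodes every position in parallel, rather than step by step as in an LSTM. The layer count, hidden size, kernel width, activation, and residual connections below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Encode a source sentence with a succession of convolutional layers."""
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=256,
                 num_layers=6, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.input_proj = nn.Linear(emb_dim, hidden_dim)
        # "Same" padding keeps one encoder state per source token.
        self.convs = nn.ModuleList([
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size,
                      padding=kernel_size // 2)
            for _ in range(num_layers)
        ])

    def forward(self, src_tokens):
        # src_tokens: (batch, src_len) integer token ids
        x = self.input_proj(self.embed(src_tokens))   # (batch, src_len, hidden)
        x = x.transpose(1, 2)                         # (batch, hidden, src_len)
        for conv in self.convs:
            # All positions are processed in one pass; residual link is an assumption.
            x = torch.tanh(conv(x)) + x
        return x.transpose(1, 2)                      # (batch, src_len, hidden)

# Usage: encode a batch of two 7-token source sentences in a single forward pass.
encoder = ConvEncoder(vocab_size=10000)
src = torch.randint(0, 10000, (2, 7))
print(encoder(src).shape)  # torch.Size([2, 7, 256])
```

Because no layer depends on the previous time step's output, the whole source sentence can be encoded with a few batched convolutions, which is the source of the decoding speedup claimed over recurrent encoders.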