A Comparative Study on End-to-End Speech to Text Translation
2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019, pp. 792–799
Abstract
Recent advances in deep learning show that end-to-end speech-to-text translation models are a promising approach for direct speech translation. In this work, we provide an overview of different end-to-end architectures, as well as the use of an auxiliary connectionist temporal classification (CTC) loss for better convergence. We also investigate pre-training variants, such as initializing different components of a model with pretrained models, and their impact on final performance, which yields gains of up to 4% in BLEU and 5% in TER. Our experiments are performed on the 270h IWSLT TED-talks En→De and the 100h LibriSpeech audiobooks En→Fr tasks. We also show improvements over the current end-to-end state-of-the-art systems on both tasks.
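The auxiliary CTC loss mentioned in the abstract is typically combined with the attention decoder's cross-entropy loss as a weighted sum during training. A minimal sketch of that interpolation, assuming a hypothetical `ctc_weight` hyperparameter (the abstract does not specify its value):

```python
def joint_loss(ce_loss: float, ctc_loss: float, ctc_weight: float = 0.3) -> float:
    """Interpolate the attention (cross-entropy) loss with an auxiliary CTC loss.

    ctc_weight is an assumed hyperparameter for illustration only;
    the paper's actual weighting is not given in this abstract.
    """
    return (1.0 - ctc_weight) * ce_loss + ctc_weight * ctc_loss

# Example: cross-entropy loss 2.0, CTC loss 4.0, weight 0.3
print(joint_loss(2.0, 4.0))  # → 2.6
```

The CTC branch constrains the encoder toward monotonic alignments, which is what helps convergence of the attention-based decoder.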
Related Papers
- The Volctrans Neural Speech Translation System for IWSLT 2021 (2021), cited by 8
- End-to-end Speech Translation via Cross-modal Progressive Training (2021), cited by 8
- Self-Training for End-to-End Speech Translation (2020), cited by 4
- ESPnet-ST IWSLT 2021 Offline Speech Translation System (2021), cited by 1
- The NiuTrans End-to-End Speech Translation System for IWSLT 2021 Offline Task (2021)