XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale
Abstract
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can perform as well as English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world. Models and code are available at www.github.