On the Choice of Modeling Unit for Sequence-to-Sequence Speech Recognition
Abstract
In conventional speech recognition, phoneme-based models outperform grapheme-based models for non-phonetic languages such as English. The performance gap between the two typically narrows as the amount of training data increases. In this work, we examine the impact of the choice of modeling unit for attention-based encoder-decoder models. We conduct experiments on the LibriSpeech 100hr, 460hr, and 960hr tasks, using various target units (phoneme, grapheme, and word-piece); across all tasks, we find that grapheme or word-piece models consistently outperform phoneme-based models, even when evaluated without a lexicon or an external language model. We also investigate model complementarity: we find that we can improve WERs by up to 9% relative by rescoring N-best lists generated from a strong word-piece baseline with either the phoneme or the grapheme model. Rescoring an N-best list generated by the phonemic system, however, provides limited improvements. Further analysis shows that word-piece models produce more diverse N-best hypotheses, and thus lower oracle WERs, than phonemic models.
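The rescoring step described above can be sketched as a simple log-linear combination of the baseline's score with a second model's score for each hypothesis in the N-best list. This is a minimal illustration, not the paper's implementation: the function name, the interpolation weight `lam`, and all scores below are hypothetical.

```python
def rescore_nbest(nbest, secondary_scores, lam=0.5):
    """Rerank an N-best list by interpolating two log-scores per hypothesis.

    nbest: list of (hypothesis, baseline_log_score) pairs, e.g. from a
           word-piece model's beam search.
    secondary_scores: dict mapping each hypothesis to the log-score assigned
           by a second model (e.g. a grapheme or phoneme model).
    lam:   interpolation weight for the secondary model (a tunable assumption).
    """
    rescored = [
        (hyp, (1 - lam) * base + lam * secondary_scores[hyp])
        for hyp, base in nbest
    ]
    # Sort best-first by the combined score (higher log-score is better).
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)


# Toy usage: a 3-best list with made-up baseline scores, rescored with
# made-up secondary-model scores; the combined ranking can differ from
# either model's individual ranking.
nbest = [("a b c", -1.2), ("a b sea", -1.5), ("hey b c", -2.0)]
secondary = {"a b c": -2.5, "a b sea": -0.5, "hey b c": -3.0}
best_hyp, best_score = rescore_nbest(nbest, secondary, lam=0.5)[0]
```

Because the two models make different kinds of errors, the interpolated score can promote a hypothesis that neither model ranked first on its own; `lam` would normally be tuned on a held-out set.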