RADMM: Recurrent Adaptive Mixture Model with Applications to Domain Robust Language Modeling
Abstract
We present a new architecture and training strategy for an adaptive mixture of experts, with applications to domain robust language modeling. The proposed model is designed to benefit from scenarios where training data are available in diverse domains, as is the case for YouTube speech recognition. The two core components of our model are an ensemble of parallel long short-term memory (LSTM) expert layers, one per domain, and another LSTM-based network that generates state-dependent mixture weights for combining the expert LSTM states by linear interpolation. The resulting model is a recurrent adaptive mixture model (RADMM) of domain experts. We train our model on 4.4B words from YouTube speech recognition data and report results on the YouTube speech recognition test set. Compared with a background LSTM model, we obtain up to 12% relative improvement in perplexity and an improvement in word error rate from 12.3% to 12.1% when rescoring lattices with strong pruning.
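The abstract describes the combination step at the hidden-state level: per-domain expert LSTMs run in parallel, and a separate gating LSTM produces state-dependent mixture weights that linearly interpolate the expert states before the output layer. The sketch below is a minimal, hypothetical PyTorch rendering of that description, not the authors' implementation; all class and parameter names (e.g. `RADMM`, `num_experts`, `gate_lstm`) are assumptions for illustration.

```python
# Hypothetical sketch of the RADMM architecture described in the abstract.
import torch
import torch.nn as nn

class RADMM(nn.Module):
    """Mixture of per-domain LSTM experts with state-dependent interpolation weights."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_experts):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One LSTM expert per training domain.
        self.experts = nn.ModuleList(
            nn.LSTM(embed_dim, hidden_dim, batch_first=True) for _ in range(num_experts)
        )
        # A separate LSTM produces state-dependent mixture weights at each time step.
        self.gate_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.gate_proj = nn.Linear(hidden_dim, num_experts)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)                                # (B, T, E)
        expert_states = torch.stack(
            [expert(x)[0] for expert in self.experts], dim=2  # (B, T, K, H)
        )
        gate_states, _ = self.gate_lstm(x)                    # (B, T, H)
        weights = torch.softmax(self.gate_proj(gate_states), dim=-1)  # (B, T, K)
        # Linear interpolation of expert LSTM states with the learned weights.
        mixed = (weights.unsqueeze(-1) * expert_states).sum(dim=2)    # (B, T, H)
        return self.output(mixed)                             # next-word logits
```

Under this reading, the gating network adapts the mixture at every time step, so the model can shift between domain experts within a single utterance rather than committing to one domain per document.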
Related Papers
- Combination of Recurrent Neural Networks and Factored Language Models for Code-Switching Language Modeling (2013)
- Improved topic-dependent language modeling using information retrieval techniques (1999)
- Verifying the long-range dependency of RNN language models (2016)
- When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute (2021)
- Building Personalized Language Models Through Language Model Interpolation (2023)