Improving N-gram language modeling for code-switching speech recognition
Abstract
Language modeling for code-switching speech is challenging because statistics are insufficient both within each individual language and across languages. To compensate for this statistical insufficiency, in this paper we propose a word-class n-gram language modeling approach in which only infrequent words are clustered, while the most frequent words are treated as singleton classes of their own. We first demonstrate the effectiveness of the proposed method in terms of perplexity on our English-Mandarin code-switching SEAME data. Compared with conventional word n-gram language models, as well as word-class n-gram language models in which the entire vocabulary is clustered, the proposed word-class n-gram language modeling approach yields lower perplexity on our SEAME dev sets. Additionally, we observe further perplexity reductions when interpolating the word n-gram language models with the proposed word-class n-gram language models. We also build word-class n-gram language models from third-party text data using the proposed method, and obtain similar perplexity improvements on the SEAME dev sets when these models are interpolated with the word n-gram language models. Finally, to examine the contribution of the proposed language modeling approach to code-switching speech recognition, we conduct lattice-based n-best rescoring.
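The core idea above can be sketched in a few lines: frequent words become singleton classes, infrequent words share clusters, and the class-based probability factors as P(w|h) = P(c(w)|c(h)) · P(w|c(w)), optionally interpolated with a word n-gram probability. The following is a minimal toy sketch, not the paper's implementation: the round-robin bucketing of infrequent words stands in for a real data-driven clustering (e.g. Brown clustering), and the corpus, function names, and cluster count are illustrative assumptions.

```python
from collections import Counter

def build_class_map(tokens, top_k=2, num_clusters=2):
    # Frequent words stay as singleton classes (the paper's key idea);
    # only infrequent words share clusters. Round-robin bucketing is a
    # toy stand-in for a real clustering algorithm such as Brown clustering.
    ranked = [w for w, _ in Counter(tokens).most_common()]
    cmap = {w: f"<{w}>" for w in ranked[:top_k]}      # singleton classes
    for i, w in enumerate(ranked[top_k:]):
        cmap[w] = f"C{i % num_clusters}"              # shared clusters
    return cmap

def train_class_bigram(tokens, cmap):
    classes = [cmap[w] for w in tokens]
    bigrams = Counter(zip(classes, classes[1:]))      # class bigram counts
    histories = Counter(classes[:-1])                 # class history counts
    words = Counter(tokens)                           # word unigram counts
    return bigrams, histories, words

def class_bigram_prob(w_prev, w, cmap, bigrams, histories, words):
    # Class-based factorization: P(w | w_prev) = P(c(w) | c(w_prev)) * P(w | c(w))
    c_prev, c = cmap[w_prev], cmap[w]
    p_cc = bigrams[(c_prev, c)] / histories[c_prev]
    p_wc = words[w] / sum(n for x, n in words.items() if cmap[x] == c)
    return p_cc * p_wc

def interpolate(p_word, p_class, lam=0.5):
    # Linear interpolation of word-LM and class-LM probabilities,
    # as used in the paper's perplexity experiments (lam is a free weight).
    return lam * p_word + (1 - lam) * p_class

tokens = "the cat sat on the mat the dog sat".split()
cmap = build_class_map(tokens)
bigrams, histories, words = train_class_bigram(tokens, cmap)
p = class_bigram_prob("the", "cat", cmap, bigrams, histories, words)
# "the" is a singleton class; "cat" shares a cluster with "mat",
# so p = P(C0 | <the>) * P(cat | C0) = (2/3) * (1/2) = 1/3
```

In practice such models are trained with standard toolkits (e.g. SRILM's class-based n-gram support) on the class-mapped corpus, with smoothing and backoff that this sketch omits.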