Decoupling Word-Pair Distance and Co-occurrence Information for Effective Long History Context Language Modeling
Abstract
In this paper, we propose the use of distance and co-occurrence information of word-pairs to improve language modeling. We show empirically that, for history-context sizes of up to ten words, the extracted distance and co-occurrence information complements the n-gram language model well, for which learning long-history contexts is inherently difficult. Evaluated on the Wall Street Journal and Switchboard corpora, our proposed model reduces trigram-model perplexity by up to 11.2% and 6.5%, respectively. Compared to the distant bigram model and the trigger model, the proposed model captures far-context information more effectively, as verified in terms of perplexity and computational efficiency, i.e., fewer free parameters to fine-tune. Experiments applying the proposed model to speech recognition, text classification, and word prediction tasks showed improved performance.
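The abstract describes decoupling two statistics for each word pair: how often the pair co-occurs within a long history window, and at what distances. As a minimal sketch (not the paper's actual estimation procedure; the function name and data layout are our own), the following collects both statistics for word pairs whose first member appears up to ten positions before the second:

```python
from collections import defaultdict

def word_pair_stats(tokens, max_dist=10):
    """Collect, for each word pair (w_far, w_cur), the total co-occurrence
    count within `max_dist` positions and the per-distance counts.
    Hypothetical helper illustrating the kind of decoupled statistics
    the abstract refers to."""
    cooc = defaultdict(int)                        # (w_far, w_cur) -> count
    dist = defaultdict(lambda: defaultdict(int))   # (w_far, w_cur) -> {d: count}
    for i, cur in enumerate(tokens):
        for d in range(1, max_dist + 1):
            j = i - d
            if j < 0:
                break
            pair = (tokens[j], cur)
            cooc[pair] += 1
            dist[pair][d] += 1
    return cooc, dist

tokens = "the cat sat on the mat because the cat was tired".split()
cooc, dist = word_pair_stats(tokens)
# ("the", "cat") co-occurs 4 times within the window;
# twice at distance 1, once each at distances 4 and 8.
```

Statistics of this form could then be interpolated with a standard trigram model's probabilities, which is the kind of complementary combination the abstract evaluates by perplexity.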