Context dependent recurrent neural network language model
Abstract
Recurrent neural network language models (RNNLMs) have recently demonstrated state-of-the-art performance across a variety of tasks. In this paper, we improve their performance by providing a contextual real-valued input vector in association with each word. This vector is used to convey contextual information about the sentence being modeled. By performing Latent Dirichlet Allocation using a block of preceding text, we achieve a topic-conditioned RNNLM. This approach has the key advantage of avoiding the data fragmentation associated with building multiple topic models on different data subsets. We report perplexity results on the Penn Treebank data, where we achieve a new state of the art. We further apply the model to the Wall Street Journal speech recognition task, where we observe improvements in word error rate.
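The abstract describes feeding a real-valued context vector (an LDA topic distribution computed over a block of preceding text) into the RNN language model at every word. A minimal NumPy sketch of that idea is shown below: the context vector enters both the hidden layer and the output layer alongside the 1-of-N word input. All class names, dimensions, initialization choices, and the fixed `topic` vector here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class ContextRNNLM:
    """Sketch of an RNN language model with an extra real-valued
    context (topic) input at each time step. Hypothetical dimensions
    and random initialization, for illustration only."""

    def __init__(self, vocab_size, hidden_size, context_size, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.1
        self.U = rng.normal(0, scale, (hidden_size, vocab_size))    # word input -> hidden
        self.W = rng.normal(0, scale, (hidden_size, hidden_size))   # hidden -> hidden (recurrence)
        self.F = rng.normal(0, scale, (hidden_size, context_size))  # context vector -> hidden
        self.V = rng.normal(0, scale, (vocab_size, hidden_size))    # hidden -> output
        self.G = rng.normal(0, scale, (vocab_size, context_size))   # context vector -> output
        self.hidden_size = hidden_size

    def step(self, word_id, context_vec, h_prev):
        """One time step: the context vector feeds both the hidden
        state update and the next-word distribution."""
        x = self.U[:, word_id]                              # column select = U @ one-hot(word)
        h = 1.0 / (1.0 + np.exp(-(x + self.W @ h_prev + self.F @ context_vec)))
        y = softmax(self.V @ h + self.G @ context_vec)      # distribution over the next word
        return y, h

# Usage: score a short word-id sequence under a fixed (hypothetical) topic vector.
if __name__ == "__main__":
    model = ContextRNNLM(vocab_size=50, hidden_size=20, context_size=5)
    topic = np.array([0.7, 0.1, 0.1, 0.05, 0.05])           # e.g. an LDA topic posterior
    h = np.zeros(model.hidden_size)
    log_prob = 0.0
    for prev_word, next_word in [(3, 7), (7, 12), (12, 4)]:
        y, h = model.step(prev_word, topic, h)
        log_prob += np.log(y[next_word])
    print("log P(sequence | topic) =", log_prob)
```

In practice the topic vector would be recomputed from the preceding text as decoding advances, rather than held fixed as in this toy example.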
Related Papers
- Analisis Topik Modelling Terhadap Penggunaan Sosial Media Twitter oleh Pejabat Negara [Topic Modelling Analysis of Twitter Social Media Use by State Officials] (2021), 14 citations
- Improving Language Modeling using Densely Connected Recurrent Neural Networks (2017), 3 citations
- Regularizing and Optimizing LSTM Language Models (2017), 468 citations