Generating Sentences from a Continuous Space
Citations over time: Top 1% of 2016 papers
Abstract
The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.

Example completions from the paper (imputing the end of a sentence from its prefix):

Prefix: but now , as they parked out front and owen stepped out of the car , he could see
True: that the transition was complete .
RNNLM: it , " i said .
VAE: through the driver 's door .

Prefix: you kill him and his
True: men .
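As a concrete reading of the architecture the abstract describes, the sketch below pairs an LSTM encoder that maps a sentence to the parameters of a Gaussian posterior over a latent code with an LSTM decoder conditioned on a sample of that code. This is a minimal PyTorch sketch, not the authors' code: the single-layer LSTMs, the dimensions, and the choice to inject the latent code through the decoder's initial hidden state are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SentenceVAE(nn.Module):
    """Minimal sentence VAE sketch: RNN encoder -> Gaussian q(z|x) -> RNN decoder."""

    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Map the final encoder state to the mean and log-variance of q(z|x).
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # The latent code conditions the decoder via its initial hidden state
        # (an assumption; other conditioning schemes are possible).
        self.to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def encode(self, tokens):
        _, (h, _) = self.encoder(self.embed(tokens))
        h = h.squeeze(0)                      # (batch, hidden_dim)
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z, tokens):
        h0 = torch.tanh(self.to_hidden(z)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        out, _ = self.decoder(self.embed(tokens), (h0, c0))
        return self.out(out)                  # per-step vocabulary logits

    def forward(self, tokens):
        mu, logvar = self.encode(tokens)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL divergence between q(z|x) and the standard normal prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        return self.decode(z, tokens), kl
```

Training maximizes the evidence lower bound, i.e. minimizes reconstruction cross-entropy plus the KL term (for instance, `loss = recon + kl_weight * kl.mean()`). The "difficult learning problem" the abstract alludes to is the tendency of the KL term to collapse to zero so the decoder ignores z; the paper addresses this with KL-cost annealing (ramping `kl_weight` from 0 to 1 during training) and word dropout on the decoder's inputs.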
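The interpolation result ("paths through this latent space") can be sketched in the same setting: encode two sentences, take evenly spaced points on the line between their latent codes, and decode each point deterministically. The helpers below build on the SentenceVAE sketch above; `greedy_decode`, the BOS/EOS token IDs, and batch size 1 are illustrative assumptions, not details from the paper.

```python
import torch

@torch.no_grad()
def greedy_decode(model, z, bos_id=1, eos_id=2, max_len=30):
    # Simple deterministic decoding for a single latent code (batch size 1):
    # feed back the argmax token at every step.
    h = torch.tanh(model.to_hidden(z)).unsqueeze(0)
    c = torch.zeros_like(h)
    token = torch.full((1, 1), bos_id, dtype=torch.long)
    out_ids = []
    for _ in range(max_len):
        step, (h, c) = model.decoder(model.embed(token), (h, c))
        token = model.out(step).argmax(dim=-1)   # shape (1, 1)
        if token.item() == eos_id:
            break
        out_ids.append(token.item())
    return out_ids

@torch.no_grad()
def interpolate(model, tokens_a, tokens_b, steps=5):
    # Use the posterior means as the codes of the two endpoint sentences,
    # then decode evenly spaced points on the line between them.
    mu_a, _ = model.encode(tokens_a)
    mu_b, _ = model.encode(tokens_b)
    return [greedy_decode(model, (1 - t) * mu_a + t * mu_b)
            for t in (i / steps for i in range(steps + 1))]
```

With a well-trained model, the intermediate decodes stay grammatical and shift gradually in topic and style, which is the abstract's "coherent novel sentences that interpolate between known sentences."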
Related Papers
- Long Short-Term Memory (1997), 94,983 citations
- Generative Adversarial Nets (GAN) (2014), 21,735 citations
- beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework (2017)
- Stochastic Backpropagation and Approximate Inference in Deep Generative Models (2014), 2,647 citations