StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks
Abstract
Synthesizing high-quality images from text descriptions is a challenging problem in computer vision with many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256×256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding low-resolution images. The Stage-II GAN takes the Stage-I results and the text descriptions as inputs and generates high-resolution images with photo-realistic details; this refinement process is able to rectify defects in the Stage-I results and add compelling details. To improve the diversity of the synthesized images and stabilize the training of the conditional GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-art methods on benchmark datasets demonstrate that the proposed method achieves significant improvements in generating photo-realistic images conditioned on text descriptions.
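As a rough illustration of the Conditioning Augmentation idea described in the abstract, the PyTorch-style sketch below samples the conditioning vector from a Gaussian whose mean and variance are predicted from the text embedding, with a KL penalty that encourages a smooth conditioning manifold. The dimensions (`embed_dim`, `cond_dim`) and the single-layer parameterization are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    """Minimal sketch of Conditioning Augmentation (CA).

    Instead of feeding the fixed text embedding directly to the
    generator, CA samples the conditioning vector c from a Gaussian
    N(mu(e), sigma(e)) whose parameters are predicted from the text
    embedding e. The random sampling yields more diverse conditioning
    vectors, and the KL regularizer smooths the conditioning manifold.
    """

    def __init__(self, embed_dim=1024, cond_dim=128):
        super().__init__()
        # One linear layer predicts both the mean and the log-variance.
        self.fc = nn.Linear(embed_dim, cond_dim * 2)

    def forward(self, text_embedding):
        mu, logvar = self.fc(text_embedding).chunk(2, dim=1)
        # Reparameterization trick keeps sampling differentiable.
        eps = torch.randn_like(mu)
        c = mu + torch.exp(0.5 * logvar) * eps
        # KL divergence to N(0, I), added to the generator loss
        # as a regularization term.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return c, kl
```

In practice, `c` would be concatenated with a noise vector and fed to the Stage-I generator, while `kl` is added to the generator's loss as a regularizer.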
Related Papers
- Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks (2017), 21,465 citations
- Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network (2017), 12,112 citations
- AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks (2018), 1,864 citations
- GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium (2017), 4,490 citations