Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis
Abstract
This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images. The generative MRF acts on higher levels of a dCNN feature pyramid, controlling the image layout at an abstract level. We apply the method to both photographic and non-photo-realistic (artwork) synthesis tasks. The MRF regularizer prevents over-excitation artifacts and reduces implausible feature mixtures common to previous dCNN inversion approaches, permitting the synthesis of photographic content with increased visual plausibility. Unlike standard MRF-based texture synthesis, the combined system can both match and adapt local features with considerable variability, yielding results far out of reach of classic generative MRF methods.
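The core of the MRF regularizer described above is patch-based: local patches of the synthesized image's dCNN feature maps are matched to their nearest neighbors among patches of a reference (style) feature map, and the energy penalizes the distance to those matches. The following is a minimal numpy sketch of that idea under simplifying assumptions (dense patch extraction, normalized cross-correlation for matching, squared-error penalty); the function names and patch size are illustrative, not the authors' implementation.

```python
import numpy as np

def extract_patches(feat, k=3):
    # feat: (C, H, W) feature map; returns (N, C*k*k) flattened patches,
    # one per k-by-k spatial window (stride 1, no padding).
    C, H, W = feat.shape
    patches = []
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            patches.append(feat[:, i:i + k, j:j + k].ravel())
    return np.stack(patches)

def mrf_loss(synth_feat, style_feat, k=3):
    # Patch-based MRF energy: each patch of the synthesized feature map
    # is matched to its most similar style patch (by normalized
    # cross-correlation), then penalized by the squared distance to it.
    P = extract_patches(synth_feat, k)   # (Ns, D) synthesis patches
    Q = extract_patches(style_feat, k)   # (Nq, D) style patches
    Pn = P / (np.linalg.norm(P, axis=1, keepdims=True) + 1e-8)
    Qn = Q / (np.linalg.norm(Q, axis=1, keepdims=True) + 1e-8)
    nn = np.argmax(Pn @ Qn.T, axis=1)    # nearest style patch per patch
    return float(np.sum((P - Q[nn]) ** 2))
```

In the full method this loss is evaluated on intermediate dCNN layers and minimized over the image pixels by backpropagation, alongside a content term; because matching happens in deep feature space rather than pixel space, matched patches can adapt (rotate, recolor, deform slightly) instead of being copied verbatim as in classic MRF texture synthesis.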