Generative autoencoder to prevent overregularization of variational autoencoder
Abstract
In machine learning, data scarcity is a common problem, and generative models have the potential to alleviate it. The variational autoencoder (VAE) is a generative model that performs variational inference to estimate a low-dimensional posterior distribution given high-dimensional data. Specifically, it optimizes the evidence lower bound, which comprises a regularization term and a reconstruction term, but the two terms are generally imbalanced. If the reconstruction error is not small enough for reconstructed samples to plausibly belong to the data population, the performance of the generative model cannot be guaranteed. We propose a generative autoencoder (GAE) that first trains an autoencoder to minimize the reconstruction error and then estimates the distribution of the latent vectors mapped onto the lower-dimensional space by the encoder. We compare the Fréchet inception distance (FID) scores of the proposed GAE and nine other variational autoencoders on the MNIST, Fashion MNIST, CIFAR10, and SVHN datasets. The proposed GAE outperforms all other methods on three of the four datasets: MNIST (44.30), Fashion MNIST (196.34), and SVHN (77.53).
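The two-stage idea in the abstract (minimize reconstruction error first, then model the latent distribution) can be sketched as follows. This is a minimal, illustrative toy, not the paper's actual architecture: it uses a linear autoencoder on synthetic data and fits a single Gaussian to the latent codes, whereas the paper's GAE uses deep networks and its own density-estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "high-dimensional" data living near a 2-D subspace of R^10.
Z_true = rng.normal(size=(500, 2))
A = rng.normal(size=(2, 10))
X = Z_true @ A + 0.05 * rng.normal(size=(500, 10))

# Stage 1: a linear autoencoder (encoder W_e, decoder W_d) trained by
# gradient descent on the reconstruction loss ||X W_e W_d - X||^2 alone,
# with no regularization term competing against it.
W_e = 0.1 * rng.normal(size=(10, 2))
W_d = 0.1 * rng.normal(size=(2, 10))
lr = 1e-3
for _ in range(2000):
    Z = X @ W_e
    R = Z @ W_d - X                     # reconstruction residual
    W_d -= lr * Z.T @ R / len(X)        # dL/dW_d
    W_e -= lr * X.T @ (R @ W_d.T) / len(X)  # dL/dW_e

# Stage 2: estimate the latent distribution from the encoded training data
# (here a single multivariate Gaussian, purely for illustration).
Z = X @ W_e
mu, cov = Z.mean(axis=0), np.cov(Z, rowvar=False)

# Generation: sample latents from the fitted distribution, then decode.
Z_new = rng.multivariate_normal(mu, cov, size=100)
X_new = Z_new @ W_d
print(X_new.shape)  # prints (100, 10)
```

Because the reconstruction objective is optimized in isolation, the latent codes are never pulled toward a fixed prior, which is the over-regularization the title refers to; the price is that the latent density must be estimated separately afterwards.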