Face Generation and Editing With StyleGAN: A Survey
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, Vol. 46(5), pp. 3557–3576
Top 1% of 2024 papers by citations
Andrew Melnik, Maksim Miasayedzenkau, Dzianis Makaravets, Dzianis Pirshtuk, Eren Akbulut, Dennis Holzmann, Tarek Renusch, Gustav Reichert, Helge Ritter
Abstract
Our goal with this survey is to provide an overview of state-of-the-art deep learning methods for face generation and editing with StyleGAN. The survey covers the evolution of StyleGAN, from PGGAN to StyleGAN3, and explores related topics such as suitable metrics for training, different latent representations, GAN inversion into the latent spaces of StyleGAN, face image editing, cross-domain face stylization, face restoration, and Deepfake applications. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.
Related Papers
- → Deep fake Detection Through Deep Learning (2023)
- → Why & When Deep Learning Works: Looking Inside Deep Learnings (2017)
- → SpaceEdit: Learning a Unified Editing Space for Open-Domain Image Editing (2021)
- → CCR: Facial Image Editing with Continuity, Consistency and Reversibility (2022)
- → Disentangled Image Attribute Editing in Latent Space via Mask-Based Retention Loss (2022)