Interactive Sketch & Fill: Multiclass Sketch-to-Image Translation
2019, pp. 1171–1180
Arnab Ghosh, Richard Zhang, Puneet K. Dokania, Oliver Wang, Alexei A. Efros, Philip H. S. Torr, Eli Shechtman
Abstract
We propose an interactive GAN-based sketch-to-image translation method that helps novice users easily create images of simple objects. The user starts with a sparse sketch and a desired object category, and the network then recommends its plausible completion(s) and shows a corresponding synthesized image. This enables a feedback loop, where the user can edit the sketch based on the network's recommendations, while the network is able to better synthesize the image that the user might have in mind. In order to use a single model for a wide array of object classes, we introduce a gating-based approach for class conditioning, which allows us to generate distinct classes without feature mixing, from a single generator network.
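The gating-based class conditioning described above can be illustrated with a minimal numpy sketch. All names, shapes, and the sigmoid gating form here are assumptions for illustration; the paper's actual mechanism sits inside a full GAN generator. The idea shown is that a class label selects per-channel gates that softly switch feature channels on or off, so one generator can serve many classes without mixing their features.

```python
import numpy as np

# Hypothetical sketch of gating-based class conditioning (names and
# shapes are illustrative assumptions, not the paper's architecture).
rng = np.random.default_rng(0)

num_classes, channels, h, w = 10, 16, 8, 8

# Learned per-class gating parameters (random stand-ins here).
gate_logits = rng.normal(size=(num_classes, channels))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_features(features, class_id):
    """Scale each feature channel by a class-specific gate in (0, 1)."""
    gates = sigmoid(gate_logits[class_id])   # shape: (channels,)
    return features * gates[:, None, None]   # broadcast over H, W

features = rng.normal(size=(channels, h, w))
out = gated_features(features, class_id=3)
print(out.shape)  # (16, 8, 8)
```

Because the gates depend only on the class label, switching the label changes which channels are active while the shared generator weights stay fixed.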
Related Papers
- Human's Scene Sketch Understanding (2016)
- Automatically transforming symbolic shape descriptions for use in sketch recognition (2004)
- Enabling instructors to develop sketch recognition applications for the classroom (2007)
- EUROGRAPHICS Tutorial on Sketch Recognition (2009)
- Sketch Recognition using Domain Classification (2012)