ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic Segmentation
Top 1% of 2016 papers by citations.
Abstract
Large-scale data is of crucial importance for learning semantic segmentation models, but annotating per-pixel masks is a tedious and inefficient procedure. We note that for the topic of interactive image segmentation, scribbles are very widely used in academic research and commercial software, and are recognized as one of the most user-friendly ways of interacting. In this paper, we propose to use scribbles to annotate images, and develop an algorithm to train convolutional networks for semantic segmentation supervised by scribbles. Our algorithm is based on a graphical model that jointly propagates information from scribbles to unmarked pixels and learns network parameters. We present competitive object semantic segmentation results on the PASCAL VOC dataset by using scribbles as annotations. Scribbles are also favored for annotating stuff (e.g., water, sky, grass) that has no well-defined shape, and our method shows excellent results on the PASCAL-CONTEXT dataset thanks to extra inexpensive scribble annotations. Our scribble annotations on PASCAL VOC are available at http://research.microsoft.com/en-us/um/people/jifdai/downloads/scribble_sup.
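The abstract describes an alternating scheme: with the network fixed, scribble labels are propagated to unmarked pixels; with the labels fixed, the network is retrained. The toy sketch below illustrates only this alternating structure, not the paper's actual graphical model or FCN: `propagate_labels`, `update_model`, and the per-class mean-feature "model" are hypothetical stand-ins for the graph-cut propagation and network training steps.

```python
import numpy as np

def propagate_labels(features, scribble_labels, class_means):
    # Keep scribbled pixels fixed; assign each unmarked pixel to the class
    # with the nearest mean feature (a stand-in for the unary + pairwise
    # terms of the paper's graphical model).
    labels = scribble_labels.copy()
    unmarked = scribble_labels < 0
    dists = np.linalg.norm(
        features[unmarked, None, :] - class_means[None, :, :], axis=2)
    labels[unmarked] = dists.argmin(axis=1)
    return labels

def update_model(features, labels, n_classes):
    # Stand-in for retraining the segmentation network on the current
    # (propagated) labels: re-estimate per-class mean features.
    return np.stack(
        [features[labels == c].mean(axis=0) for c in range(n_classes)])

def scribblesup_train(features, scribble_labels, n_classes, n_iters=5):
    # Alternating optimization sketch: initialize the "model" from the
    # scribbled pixels only (label -1 marks unannotated pixels), then
    # alternate label propagation and model updates.
    marked = scribble_labels >= 0
    means = np.stack(
        [features[marked & (scribble_labels == c)].mean(axis=0)
         for c in range(n_classes)])
    for _ in range(n_iters):
        labels = propagate_labels(features, scribble_labels, means)
        means = update_model(features, labels, n_classes)
    return labels
```

On a toy 1-D example with one scribbled pixel per class, the loop fills in the unmarked pixels consistently with the nearest cluster, mirroring how scribble supervision is expanded into a full training mask.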