Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer
Abstract
Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixel-wise annotation of images at very large scale is labor-intensive and little labeled data is available, particularly at instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.
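To make the 3D-to-2D transfer idea concrete, below is a minimal illustrative sketch (not the authors' model, which resolves conflicts and densifies labels probabilistically): labeled 3D points are projected into a camera image under an assumed pinhole model with intrinsics `K` and world-to-camera pose `(R, t)`, yielding a sparse 2D label map. All names and shapes here are assumptions for illustration only.

```python
import numpy as np

def project_labels(points_3d, labels, K, R, t, image_shape):
    """Project labeled 3D points into the image plane (illustrative only).

    points_3d   : (N, 3) world coordinates
    labels      : (N,)   integer semantic or instance ids
    K           : (3, 3) camera intrinsics
    R, t        : (3, 3), (3,) world-to-camera rotation and translation
    image_shape : (height, width)
    """
    h, w = image_shape
    label_map = np.full((h, w), -1, dtype=np.int32)  # -1 marks unlabeled pixels

    # Transform points into the camera frame; keep those in front of the camera.
    cam = points_3d @ R.T + t
    in_front = cam[:, 2] > 0
    cam, ids = cam[in_front], labels[in_front]

    # Perspective projection onto the image plane.
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Write labels for projections that land inside the image bounds.
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    label_map[v[inside], u[inside]] = ids[inside]
    return label_map
```

Such a naive projection produces only sparse, possibly conflicting labels; the paper's contribution is the transfer model that turns coarse 3D bounding primitives into dense, temporally coherent 2D annotations.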