Not All Pixels Are Equal: Difficulty-Aware Semantic Segmentation via Deep Layer Cascade
Top 1% of 2017 papers
Abstract
We propose a novel deep layer cascade (LC) method to improve the accuracy and speed of semantic segmentation. Unlike the conventional model cascade (MC), which is composed of multiple independent models, LC treats a single deep model as a cascade of several sub-models. Earlier sub-models are trained to handle easy and confident regions, and they progressively forward harder regions to the next sub-model for processing. Convolutions are calculated only on these regions to reduce computation. The proposed method possesses several advantages. First, LC classifies most of the easy regions in the shallow stage and makes the deeper stages focus on a few hard regions. Such adaptive, difficulty-aware learning improves segmentation performance. Second, LC accelerates both the training and testing of the deep network thanks to early decisions in the shallow stage. Third, in comparison to MC, LC is an end-to-end trainable framework, allowing joint learning of all sub-models. We evaluate our method on the PASCAL VOC and Cityscapes datasets, achieving state-of-the-art performance and fast speed.
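The cascading idea in the abstract can be sketched in a few lines: a shallow stage emits per-pixel class probabilities, pixels whose top score clears a confidence threshold are accepted early, and only the remaining "hard" pixels are handed to a deeper stage. This is a minimal NumPy illustration, not the paper's implementation; the `refine_fn` callback and the 0.95 threshold are hypothetical stand-ins for the deeper sub-model and its early-exit criterion.

```python
import numpy as np

def cascade_segment(probs_stage1, refine_fn, threshold=0.95):
    """Difficulty-aware cascade sketch: accept confident pixels at the
    shallow stage, forward only uncertain pixels to a deeper stage.

    probs_stage1: (H, W, C) softmax scores from the shallow sub-model.
    refine_fn:    callable taking the boolean hard-pixel mask and
                  returning labels for just those pixels (hypothetical).
    """
    conf = probs_stage1.max(axis=-1)      # per-pixel top-1 confidence
    labels = probs_stage1.argmax(axis=-1) # shallow-stage prediction
    hard = conf < threshold               # mask of difficult pixels
    if hard.any():
        # The deeper stage runs only on the hard pixels, which is where
        # the computational savings described in the abstract come from.
        labels[hard] = refine_fn(hard)
    return labels, hard

# Toy example: one of four pixels is uncertain at the shallow stage.
probs = np.array([[[0.99, 0.005, 0.005], [0.40, 0.35, 0.25]],
                  [[0.97, 0.02, 0.01],   [0.96, 0.02, 0.02]]])
labels, hard = cascade_segment(probs, lambda m: np.full(m.sum(), 2))
```

In the toy run above, only the low-confidence pixel at position (0, 1) is forwarded to (and relabeled by) the dummy deeper stage; the other three keep their shallow-stage labels.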