Large Kernel Matters — Improve Semantic Segmentation by Global Convolutional Network
Abstract
One recent trend [31, 32, 14] in network architecture design is to stack small filters (e.g., 1×1 or 3×3) throughout the entire network, because stacked small filters are more efficient than a large kernel at the same computational complexity. However, in the field of semantic segmentation, where we need to perform dense per-pixel prediction, we find that a large kernel (and a large effective receptive field) plays an important role when the classification and localization tasks must be performed simultaneously. Following this design principle, we propose a Global Convolutional Network to address both the classification and localization issues in semantic segmentation. We also propose a residual-based boundary refinement to further refine the object boundaries. Our approach achieves state-of-the-art performance on two public benchmarks and significantly outperforms previous results: 82.2% (vs. 80.2%) on the PASCAL VOC 2012 dataset and 76.9% (vs. 71.8%) on the Cityscapes dataset.
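The core idea behind the Global Convolutional Network is that a large k×k receptive field can be obtained cheaply by stacking a 1×k and a k×1 convolution, costing 2k weights per channel pair instead of k². The sketch below (a minimal NumPy illustration, not the paper's implementation; the kernel size K=7 and the all-ones filters are illustrative assumptions — the paper explores k up to 15) convolves an impulse image with a 1×K filter followed by a K×1 filter and checks that the resulting support is a dense K×K region:

```python
import numpy as np

K = 7  # hypothetical kernel size for illustration; the paper uses k up to 15

# An impulse image makes the effective receptive field directly visible.
img = np.zeros((15, 15))
img[7, 7] = 1.0

row_k = np.ones(K)  # 1xK kernel (all-ones, for illustration only)
col_k = np.ones(K)  # Kx1 kernel

def conv_rows(x, k):
    # Apply a 1xK convolution: filter each row with 'same' padding.
    return np.stack([np.convolve(r, k, mode="same") for r in x])

def conv_cols(x, k):
    # Apply a Kx1 convolution by filtering the columns.
    return conv_rows(x.T, k).T

# Stacked 1xK then Kx1 convolution, as in one GCN branch.
out = conv_cols(conv_rows(img, row_k), col_k)

# The nonzero support is a dense KxK patch: the stacked pair covers the
# same KxK receptive field as a full kernel, with 2K weights instead of K^2.
print(int((out > 0).sum()))  # 49 (= K*K)
print(2 * K, K * K)          # 14 vs 49 parameters per channel pair
```

The actual GCN module sums two such branches (1×k then k×1, and k×1 then 1×k) so the combined operator densely connects the full k×k region while keeping the parameter count linear in k.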