Pyramid Scene Parsing Network
Abstract
Scene parsing is challenging because of its unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information through different-region-based context aggregation, using our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective in producing good-quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets: it placed first in the ImageNet scene parsing challenge 2016, the PASCAL VOC 2012 benchmark, and the Cityscapes benchmark. A single PSPNet sets a new record of 85.4% mIoU on PASCAL VOC 2012 and 80.2% accuracy on Cityscapes.
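The core idea of the pyramid pooling module is to average-pool the final feature map at several grid resolutions, upsample each pooled map back to the original size, and concatenate everything as a global context prior. The following is a minimal NumPy sketch of that flow under simplifying assumptions: the bin sizes (1, 2, 3, 6) follow the paper, but the 1x1 dimension-reduction convolutions applied to each pooled level and the bilinear upsampling used in the real PSPNet are replaced here by plain concatenation and nearest-neighbor upsampling.

```python
import math
import numpy as np

def adaptive_avg_pool(feat, bins):
    """Average-pool each channel of feat (C, H, W) into a bins x bins grid."""
    C, H, W = feat.shape
    out = np.empty((C, bins, bins))
    for i in range(bins):
        h0, h1 = (i * H) // bins, math.ceil((i + 1) * H / bins)
        for j in range(bins):
            w0, w1 = (j * W) // bins, math.ceil((j + 1) * W / bins)
            out[:, i, j] = feat[:, h0:h1, w0:w1].mean(axis=(1, 2))
    return out

def pyramid_pooling(feat, bin_sizes=(1, 2, 3, 6)):
    """Concatenate the input feature map with pooled-and-upsampled context maps.

    Simplified sketch: the real module reduces each pooled level's channels
    with a 1x1 convolution and upsamples bilinearly; here we keep all channels
    and use nearest-neighbor upsampling (assumes H, W divisible by each bin size).
    """
    C, H, W = feat.shape
    levels = [feat]
    for b in bin_sizes:
        pooled = adaptive_avg_pool(feat, b)
        upsampled = np.repeat(np.repeat(pooled, H // b, axis=1), W // b, axis=2)
        levels.append(upsampled)
    return np.concatenate(levels, axis=0)  # (C * (1 + len(bin_sizes)), H, W)
```

With a 4-channel 6x6 feature map, the output has 4 * 5 = 20 channels; the level pooled to a single bin broadcasts each channel's global mean across the whole map, which is exactly the "global prior" the abstract refers to.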