Deep Contrast Learning for Salient Object Detection
Top 1% of 2016 papers
Abstract
Salient object detection has recently witnessed substantial progress due to powerful features extracted using deep convolutional neural networks (CNNs). However, existing CNN-based methods operate at the patch level instead of the pixel level. The resulting saliency maps are typically blurry, especially near the boundaries of salient objects. Furthermore, image patches are treated as independent samples even when they overlap, giving rise to significant redundancy in computation and storage. In this paper, we propose an end-to-end deep contrast network to overcome the aforementioned limitations. Our deep network consists of two complementary components: a pixel-level fully convolutional stream and a segment-wise spatial pooling stream. The first stream directly produces a saliency map with pixel-level accuracy from an input image. The second stream extracts segment-wise features very efficiently and better models saliency discontinuities along object boundaries. Finally, a fully connected CRF model can be optionally incorporated to improve spatial coherence and contour localization in the fused result from these two streams. Experimental results demonstrate that our deep model significantly improves the state of the art.
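The two-stream design described in the abstract can be sketched at a high level. In the sketch below, the stream internals (a fully convolutional CNN and a segment-wise pooling network in the paper) are replaced by simple contrast-based placeholder functions, and the fusion weights are hypothetical illustrations rather than the learned values; this is a minimal sketch of the fusion idea, not the paper's implementation.

```python
import numpy as np

def pixel_stream(image):
    """Stand-in for the pixel-level fully convolutional stream:
    per-pixel saliency from contrast against the global mean
    (a placeholder, not the paper's CNN)."""
    contrast = np.abs(image - image.mean())
    return contrast / (contrast.max() + 1e-8)

def segment_stream(image, segments):
    """Stand-in for the segment-wise spatial pooling stream:
    one saliency score per segment, broadcast back to its pixels."""
    out = np.zeros_like(image, dtype=float)
    global_mean = image.mean()
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        # Segment saliency as contrast of the segment mean against the
        # global mean (placeholder for the paper's pooled CNN features).
        out[mask] = abs(image[mask].mean() - global_mean)
    return out / (out.max() + 1e-8)

def fuse(s1, s2, w1=0.5, w2=0.5):
    """Fuse the two streams' saliency maps. The weights here are
    illustrative; in the paper the fusion is learned end-to-end,
    and a fully connected CRF can optionally refine the result."""
    z = w1 * s1 + w2 * s2
    return 1.0 / (1.0 + np.exp(-(z - 0.5) * 8.0))  # squash to (0, 1)

# Toy 4x4 grayscale image with a bright 2x2 "object" and a matching
# 2-segment over-segmentation (hypothetical inputs for illustration).
img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0
segs = np.zeros((4, 4), dtype=int)
segs[1:3, 1:3] = 1

saliency = fuse(pixel_stream(img), segment_stream(img, segs))
# Object pixels receive high saliency, background pixels low saliency.
```

In this toy setup both streams agree, so fusion simply sharpens the map; the point of the second stream in the paper is that segment-level pooling keeps saliency values coherent within a segment, producing crisper object boundaries than the pixel stream alone.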