How Hard Can It Be? Estimating the Difficulty of Visual Search in an Image
Abstract
We address the problem of estimating image difficulty, defined as the human response time for solving a visual search task. We collect human annotations of image difficulty for the PASCAL VOC 2012 data set through a crowd-sourcing platform. We then analyze which human-interpretable image properties can have an impact on visual search difficulty, and how accurate those properties are for predicting difficulty. Next, we build a regression model based on deep features learned with state-of-the-art convolutional neural networks and show better results for predicting the ground-truth visual search difficulty scores produced by human annotators. Our model is able to correctly rank about 75% of image pairs according to their difficulty score. We also show that our difficulty predictor generalizes well to new classes not seen during training. Finally, we demonstrate that our predicted difficulty scores are useful for weakly supervised object localization (8% improvement) and semi-supervised object classification (1% improvement).
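The regression approach described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes pre-extracted CNN features (here simulated with random vectors at a toy dimensionality), uses a generic ridge regressor in place of whatever regression model the paper employs, and evaluates with the pairwise ranking accuracy the abstract reports (the fraction of image pairs whose relative difficulty order the model preserves).

```python
# Hypothetical sketch of difficulty-score regression on deep features.
# Assumptions (not from the paper): simulated 64-dim "CNN" features,
# a linear difficulty signal, and a ridge regressor as the model.
import numpy as np
from sklearn.linear_model import Ridge


def pairwise_ranking_accuracy(y_true, y_pred):
    """Fraction of pairs (i, j) whose predicted order matches the true order."""
    correct = total = 0
    n = len(y_true)
    for i in range(n):
        for j in range(i + 1, n):
            if y_true[i] == y_true[j]:
                continue  # skip ties in ground truth
            total += 1
            if (y_true[i] - y_true[j]) * (y_pred[i] - y_pred[j]) > 0:
                correct += 1
    return correct / total if total else 0.0


rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                    # stand-in for CNN features
w = rng.normal(size=64)
y = X @ w + rng.normal(scale=0.1, size=200)       # stand-in difficulty scores

# Fit on 150 images, evaluate ranking accuracy on the held-out 50.
model = Ridge(alpha=1.0).fit(X[:150], y[:150])
acc = pairwise_ranking_accuracy(y[150:], model.predict(X[150:]))
```

With real data, `X` would hold CNN activations for each image and `y` the crowd-sourced response-time scores; the abstract's "~75% of pairs correctly ranked" corresponds to `acc` ≈ 0.75 under this metric.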