Learning mid-level features for recognition
Top 1% of 2010 papers
Abstract
Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross-evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best-performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures.
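The two-step pipeline the abstract describes can be sketched on toy data. The snippet below is a minimal illustration, not the paper's implementation: the Gaussian soft-assignment weighting, the `beta` parameter, and the tiny hand-picked dictionary are assumptions chosen for clarity, and sparse coding is omitted.

```python
import math

def hard_code(x, dictionary):
    # Hard vector quantization: one-hot code at the nearest codeword.
    dists = [sum((xi - di) ** 2 for xi, di in zip(x, d)) for d in dictionary]
    code = [0.0] * len(dictionary)
    code[dists.index(min(dists))] = 1.0
    return code

def soft_code(x, dictionary, beta=1.0):
    # Soft vector quantization (illustrative): Gaussian-weighted
    # similarity to each codeword, normalized to sum to one.
    w = [math.exp(-beta * sum((xi - di) ** 2 for xi, di in zip(x, d)))
         for d in dictionary]
    s = sum(w)
    return [wi / s for wi in w]

def average_pool(codes):
    # Summarize codes over a neighborhood by averaging each dimension.
    n = len(codes)
    return [sum(c[k] for c in codes) / n for k in range(len(codes[0]))]

def max_pool(codes):
    # Summarize codes by keeping the per-dimension maximum.
    return [max(c[k] for c in codes) for k in range(len(codes[0]))]

# Toy example: 2-D descriptors, 3-codeword dictionary (both made up).
dictionary = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
descriptors = [[0.1, 0.0], [0.9, 0.1], [0.05, 0.95]]

codes = [hard_code(x, dictionary) for x in descriptors]
print(average_pool(codes))  # each codeword matched once -> [1/3, 1/3, 1/3]
print(max_pool(codes))      # -> [1.0, 1.0, 1.0]
```

Note how max pooling reports only whether a codeword was activated anywhere in the neighborhood, while average pooling reports how often; this distinction is central to the paper's analysis of why max pooling works so well.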
Related Papers
- Survey of Feature Points Detection and Matching using SURF, SIFT and PCA-SIFT (2014)
- Improvements of Local Descriptor in HOG/SIFT by BOF Approach (2014)
- Implementation of Image Matching Algorithm Based on SIFT Features (2014)
- A Fully Trainable Network with RNN-based Pooling (2017)
- Performance Improvement of SIFT-based Copy-move Forgery Detection Using CSLBP Descriptor (2020)