Spatial coding for large scale partial-duplicate web image search
Top 1% of 2010 papers by citations
Abstract
State-of-the-art image retrieval approaches represent images with a high-dimensional vector of visual words, obtained by quantizing local features such as SIFT in the descriptor space. The geometric clues among visual words in an image are usually either ignored or exploited only for full geometric verification, which is computationally expensive. In this paper, we focus on partial-duplicate web image retrieval and propose a novel scheme, spatial coding, to encode the spatial relationships among local features in an image. Spatial coding is both efficient and effective at discovering false matches of local features between images, and can greatly improve retrieval performance. Experiments on partial-duplicate web image search over a database of one million images show that our approach achieves a 53% improvement in mean average precision and a 46% reduction in time cost over the baseline bag-of-words approach.
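The core idea described above can be illustrated with a minimal sketch. The helper names (`spatial_maps`, `verify_matches`) and the exact map/removal details are illustrative assumptions, not the paper's implementation: each image's matched keypoints are encoded into binary X- and Y-maps recording the relative left/right and above/below ordering of every feature pair, and matches whose maps disagree between the two images (via XOR) are iteratively discarded.

```python
def spatial_maps(points):
    # xmap[i][j] = 1 if point j lies to the left of point i, else 0;
    # ymap is the analogous encoding for the vertical axis.
    n = len(points)
    xmap = [[1 if points[j][0] < points[i][0] else 0 for j in range(n)]
            for i in range(n)]
    ymap = [[1 if points[j][1] < points[i][1] else 0 for j in range(n)]
            for i in range(n)]
    return xmap, ymap

def verify_matches(query_pts, db_pts):
    # query_pts[k] and db_pts[k] are the coordinates of the k-th tentative
    # match in the query and database image, respectively. Repeatedly drop
    # the match with the most spatial inconsistencies (XOR of the two
    # images' maps) until the remaining maps fully agree.
    pts_q, pts_d = list(query_pts), list(db_pts)
    while len(pts_q) > 1:
        qx, qy = spatial_maps(pts_q)
        dx, dy = spatial_maps(pts_d)
        incons = [sum((qx[i][j] ^ dx[i][j]) + (qy[i][j] ^ dy[i][j])
                      for j in range(len(pts_q)))
                  for i in range(len(pts_q))]
        worst = max(range(len(incons)), key=lambda i: incons[i])
        if incons[worst] == 0:
            break  # all relative orderings agree; remaining matches pass
        del pts_q[worst], pts_d[worst]
    return list(zip(pts_q, pts_d))
```

For example, three matches related by a pure translation survive, while a match that lands on the wrong side of the others in the database image is flagged and removed; because the check only compares precomputed binary maps, it is far cheaper than a full geometric verification such as RANSAC.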