Exploiting spatial context constraints for automatic image region annotation
Top 10% of 2007 papers by citations
Abstract
In this paper we conduct a fairly comprehensive study of how to exploit spatial context constraints for automatic image region annotation. We present a straightforward method to regularize segmented regions into a 2D lattice layout, so that simple grid-structured graphical models can be employed to characterize spatial dependencies. We show how to represent spatial context constraints in various graphical models and present the related learning and inference algorithms. Unlike most existing work, we specifically investigate how to combine the classification performance of discriminative learning with the representational capability of graphical models. To reliably evaluate the proposed approaches, we create a moderate-scale image set with region-level ground truth. The experimental results show that (i) spatial context constraints indeed improve region annotation accuracy, (ii) approaches that combine the merits of discriminative learning and context constraints perform best, and (iii) image retrieval can benefit from accurate region-level annotation.
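The core idea of the abstract, regularizing regions onto a 2D lattice and then labeling each cell under spatial context constraints from its grid neighbors, can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual model: it takes per-cell classifier scores (the discriminative part), adds a Potts-style neighbor-agreement bonus `beta` (the spatial context constraint), and optimizes with iterated conditional modes (ICM); the function name, the `beta` parameter, and the choice of ICM over exact grid-model inference are all illustrative choices.

```python
import numpy as np

def annotate_grid(unary, n_iters=10, beta=0.5):
    """Label a 2D lattice of image regions with ICM under a Potts smoothness prior.

    unary : (H, W, K) array of per-cell class scores from a discriminative
            classifier (higher = more likely).
    beta  : strength of the spatial context constraint (hypothetical parameter).
    """
    H, W, K = unary.shape
    labels = unary.argmax(axis=2)  # start from the context-free classifier output
    for _ in range(n_iters):
        changed = False
        for i in range(H):
            for j in range(W):
                # score each candidate class: unary term + bonus for each
                # 4-connected neighbor that currently carries that class
                score = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        score[labels[ni, nj]] += beta
                new = int(score.argmax())
                if new != labels[i, j]:
                    labels[i, j] = new
                    changed = True
        if not changed:  # converged: no cell changed its label
            break
    return labels
```

On a toy 3x3 lattice where one cell's classifier weakly prefers the wrong class while all its neighbors strongly agree on the right one, the context term overrides the noisy local decision, which is the qualitative effect the abstract's finding (i) describes.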