Learning Localized Perceptual Similarity Metrics for Interactive Categorization
Top 10% of 2015 papers
Abstract
Current similarity-based approaches to interactive fine-grained categorization rely on learning metrics from holistic perceptual measurements of similarity between objects or images. However, making a single similarity judgment at the object level can be a difficult or overwhelming task for the human user. Moreover, a single general similarity metric may not adequately capture the minute differences that discriminate fine-grained categories. In this work, we propose a novel approach to interactive categorization that leverages multiple perceptual similarity metrics learned from localized and roughly aligned regions across images, reporting state-of-the-art results and outperforming methods that use a single non-localized similarity metric.
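The core idea above can be sketched in a few lines: instead of one holistic metric, each roughly aligned region gets its own learned metric, and the per-region distances are combined into a single similarity score. The diagonal-Mahalanobis form, the function names, and the averaging scheme below are illustrative assumptions, not the paper's exact model.

```python
# Illustrative sketch (assumed, not the paper's implementation): combine
# per-region learned metrics rather than one holistic image-level metric.
import numpy as np

def region_distance(x, y, w):
    """Weighted (diagonal Mahalanobis) distance for one aligned region;
    w holds that region's learned metric weights."""
    d = x - y
    return float(np.sqrt(np.sum(w * d * d)))

def localized_similarity(regions_a, regions_b, weights):
    """Average the per-region distances; lower means more similar."""
    dists = [region_distance(a, b, w)
             for a, b, w in zip(regions_a, regions_b, weights)]
    return sum(dists) / len(dists)

# Two images, each represented by 3 roughly aligned region descriptors.
rng = np.random.default_rng(0)
img1 = [rng.standard_normal(8) for _ in range(3)]
img2 = [rng.standard_normal(8) for _ in range(3)]
weights = [np.ones(8) for _ in range(3)]  # stand-in for learned metrics

print(localized_similarity(img1, img2, weights))
```

Because each region carries its own weights, a metric can emphasize the few dimensions that discriminate fine-grained categories in that part of the object, which a single global metric would have to average over.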