Probabilistic Embeddings for Cross-Modal Retrieval
[Citations-over-time chart: ranked in the top 1% of 2021 papers]
Abstract
Cross-modal retrieval methods build a common representation space for samples from multiple modalities, typically from the vision and the language domains. For images and their captions, the multiplicity of the correspondences makes the task particularly challenging. Given an image (respectively, a caption), there are multiple captions (respectively, images) that equally make sense. In this paper, we argue that deterministic functions are not sufficiently powerful to capture such one-to-many correspondences. Instead, we propose to use Probabilistic Cross-Modal Embedding (PCME), where samples from the different modalities are represented as probability distributions in the common embedding space. Since common benchmarks such as COCO suffer from non-exhaustive annotations for cross-modal matches, we propose to additionally evaluate retrieval on the CUB dataset, a smaller yet clean database where all possible image-caption pairs are annotated. We extensively ablate PCME and demonstrate that it not only improves the retrieval performance over its deterministic counterpart but also provides uncertainty estimates that render the embeddings more interpretable. Code is available at https://github.com/naver-ai/pcme.
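To make the core idea concrete, the sketch below illustrates one plausible reading of the abstract in PyTorch: each modality's feature is mapped to a Gaussian (mean and log-variance) in the shared space, Monte Carlo samples are drawn via the reparameterization trick, and a sigmoid of negative pairwise distance gives a soft match probability. This is a minimal, hypothetical sketch, not the authors' implementation; all module names, dimensions, and the scalars `a` and `b` are illustrative assumptions. See the linked repository for the actual code.

```python
# Illustrative sketch of probabilistic cross-modal embeddings.
# Names and hyperparameters are hypothetical, not the authors' code
# (see https://github.com/naver-ai/pcme for the real implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEmbeddingHead(nn.Module):
    """Maps a modality-specific feature to a Gaussian in the shared space."""
    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, embed_dim)      # mean of the embedding
        self.logvar = nn.Linear(in_dim, embed_dim)  # log-variance (uncertainty)

    def forward(self, x: torch.Tensor, n_samples: int = 8) -> torch.Tensor:
        mu, logvar = self.mu(x), self.logvar(x)
        std = torch.exp(0.5 * logvar)
        # Reparameterized Monte Carlo samples: (batch, n_samples, embed_dim)
        eps = torch.randn(x.size(0), n_samples, mu.size(-1), device=x.device)
        return mu.unsqueeze(1) + eps * std.unsqueeze(1)

def match_probability(z_img, z_txt, a=10.0, b=5.0):
    """Soft match probability, averaged over all pairs of sampled embeddings
    (a and b would be learnable scalars in a full implementation)."""
    d = torch.cdist(z_img, z_txt)  # (batch, n_img_samples, n_txt_samples)
    return torch.sigmoid(-a * d + b).mean(dim=(1, 2))

# Usage: matched image-caption pairs are positives under a binary loss.
img_head = GaussianEmbeddingHead(2048, 512)  # e.g. CNN feature -> embedding
txt_head = GaussianEmbeddingHead(1024, 512)  # e.g. text feature -> embedding
img_feat, txt_feat = torch.randn(4, 2048), torch.randn(4, 1024)
p = match_probability(img_head(img_feat), txt_head(txt_feat))
loss = F.binary_cross_entropy(p, torch.ones_like(p))  # positive pairs
```

Because each sample is a distribution rather than a point, one image can overlap with several caption distributions at once, which is how the one-to-many correspondence described above can be represented; the learned variance also serves as the uncertainty estimate mentioned in the abstract.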