Do You See What I Mean? Visual Resolution of Linguistic Ambiguities
Abstract
Understanding language goes hand in hand with the ability to integrate complex contextual information obtained via perception. In this work, we present a novel task for grounded language understanding: disambiguating a sentence given a visual scene that depicts one of its possible interpretations. To this end, we introduce a new multimodal corpus containing ambiguous sentences, representing a wide range of syntactic, semantic, and discourse ambiguities, coupled with videos that visualize the different interpretations of each sentence. We address this task by extending a vision model that determines whether a sentence is depicted by a video. We demonstrate how such a model can be adjusted to recognize different interpretations of the same underlying sentence, allowing us to disambiguate sentences in a unified fashion across the different ambiguity types.
Related Papers
- Coherent Multi-sentence Video Description with Variable Level of Detail (2014)
- Uniform Representations for Syntax-Semantics Arbitration (2019)
- Syntax-semantics interaction in sentence understanding (1995)
- Generative Models of Grounded Language Learning with Ambiguous Supervision (2012)