Jointly Learning to Parse and Perceive: Connecting Natural Language to the Physical World
Top 1% of 2013 papers by citations over time.
Abstract
This paper introduces Logical Semantics with Perception (LSP), a model for grounded language acquisition that learns to map natural language statements to their referents in a physical environment. For example, given an image, LSP can map the statement “blue mug on the table” to the set of image segments showing blue mugs on tables. LSP learns physical representations for both categorical (“blue,” “mug”) and relational (“on”) language, and also learns to compose these representations to produce the referents of entire statements. We further introduce a weakly supervised training procedure that estimates LSP’s parameters using annotated referents for entire statements, without annotated referents for individual words or the parse structure of the statement. We perform experiments on two applications: scene understanding and geographical question answering. We find that LSP outperforms existing, less expressive models that cannot represent relational language. We further find that weakly supervised training is competitive with fully supervised training while requiring significantly less annotation effort.
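The composition described in the abstract can be illustrated with a toy sketch. This is not the paper's learned model: LSP learns its categorical and relational predicates as classifiers over perceptual features, whereas here the segments, attributes, and the `on` relation are hand-coded, hypothetical stand-ins. The sketch only shows how composing predicates yields the referents of a statement like "blue mug on the table".

```python
# Hypothetical "image segments" with hand-coded attributes; in LSP these
# properties would be predicted by learned perceptual classifiers.
segments = [
    {"id": 0, "color": "blue", "kind": "mug"},
    {"id": 1, "color": "red", "kind": "mug"},
    {"id": 2, "kind": "table"},
]
# Hand-coded relation: which segment rests on which (learned in LSP).
on_pairs = {(0, 2), (1, 2)}

# Categorical predicates: segment -> bool
blue = lambda s: s.get("color") == "blue"
mug = lambda s: s.get("kind") == "mug"
table = lambda s: s.get("kind") == "table"
# Relational predicate: (segment, segment) -> bool
on = lambda a, b: (a["id"], b["id"]) in on_pairs

def referents_blue_mug_on_table(segs):
    """Evaluate the logical form: the set of x such that
    blue(x) and mug(x) and there exists y with on(x, y) and table(y)."""
    return [s["id"] for s in segs
            if blue(s) and mug(s)
            and any(on(s, t) and table(t) for t in segs)]

print(referents_blue_mug_on_table(segments))  # → [0]
```

Only segment 0 satisfies every conjunct, so it is the sole referent of "blue mug on the table"; the red mug (segment 1) is correctly excluded even though it is also on the table.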