BabyTalk: Understanding and Generating Simple Image Descriptions
Abstract
We present a system that automatically generates natural language descriptions from images. The system consists of two parts. The first part, content planning, smooths the output of computer vision-based detection and recognition algorithms with statistics mined from large pools of visually descriptive text to determine the best content words to use to describe an image. The second part, surface realization, chooses words to construct natural language sentences based on the predicted content and general statistics from natural language. We present multiple approaches for the surface realization step and evaluate each using automatic measures of similarity to human-generated reference descriptions. We also collect forced-choice human evaluations between descriptions from the proposed generation system and descriptions from competing approaches. The proposed system is very effective at producing relevant sentences for images, and it generates descriptions that are notably more true to the specific image content than previous work.
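The two-stage architecture described above can be sketched as follows. This is a minimal illustration with hypothetical names and toy scores: the paper's actual content planner operates over vision detectors with a structured prediction model, and its surface realizers use richer corpus statistics, but the division of labor is the same.

```python
def plan_content(detections, corpus_prior):
    """Content planning (sketch): combine detector confidence with
    text-mined modifier statistics to pick (modifier, object) pairs."""
    planned = []
    for obj, det_score in detections.items():
        # choose the modifier most often seen with this object in text
        mod, prior = max(corpus_prior.get(obj, {"": 1.0}).items(),
                         key=lambda kv: kv[1])
        planned.append((mod, obj, det_score * prior))
    # order objects by combined visual and textual evidence
    planned.sort(key=lambda t: t[2], reverse=True)
    return [(m, o) for m, o, _ in planned]

def realize(planned):
    """Surface realization (sketch): fill a simple sentence template
    with the planned content words."""
    phrases = [f"the {mod} {obj}" if mod else f"the {obj}"
               for mod, obj in planned]
    if len(phrases) == 1:
        return f"There is {phrases[0]}."
    return "There are " + ", ".join(phrases[:-1]) + f" and {phrases[-1]}."

# Toy inputs: detector confidences and mined modifier statistics
detections = {"dog": 0.9, "sofa": 0.7}
corpus_prior = {"dog": {"brown": 0.6, "small": 0.4},
                "sofa": {"comfortable": 0.5}}
print(realize(plan_content(detections, corpus_prior)))
# → There are the brown dog and the comfortable sofa.
```

The point of the split is that content planning decides *what* to say (grounded in the image and in descriptive text statistics), while surface realization decides *how* to say it; the paper compares several realization strategies on top of the same planned content.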