Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning
2018, pp. 2556–2565
Abstract
We present a new dataset of image caption annotations, Conceptual Captions, which contains an order of magnitude more images than the MS-COCO dataset. We achieve this by extracting and filtering image caption annotations from billions of webpages. We also present quantitative evaluations of a number of image captioning models and show that a model architecture based on Inception-ResNet-v2 (Szegedy et al., 2016) for image-feature extraction and Transformer (Vaswani et al., 2017) for sequence modeling achieves the best performance when trained on the Conceptual Captions dataset.