Scaling Up Vision-Language Pretraining for Image Captioning
Abstract
In recent years, we have witnessed a significant performance boost in the image captioning task based on vision-language pre-training (VLP). Scale is believed to be an important factor for this advance. However, most existing work focuses only on pre-training transformers of moderate size (e.g., 12 or 24 layers) on roughly 4 million images. In this paper, we present LEMON, a LargE-scale iMage captiONer, and provide the first empirical study on the scaling behavior of VLP for image captioning. We use the state-of-the-art VinVL model as our reference model, which consists of an image feature extractor and a transformer model, and scale the transformer both up and down, with model sizes ranging from 13 to 675 million parameters. In terms of data, we conduct experiments with up to 200 million image-text pairs automatically collected from the web based on the alt attribute of the image (dubbed ALT200M; the dataset is released at https://github.com/xiaoweihu/ALT200M). Extensive analysis helps to characterize the performance trend as the model size and the pre-training data size increase. We also compare different training recipes, especially for training on large-scale noisy data. As a result, LEMON achieves new state of the art on several major image captioning benchmarks, including COCO Caption, nocaps, and Conceptual Captions. We also show that LEMON can generate captions with long-tail visual concepts when used in a zero-shot manner.
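The abstract states only that the image-text pairs are collected automatically from the web based on the alt attribute of images. As a rough illustration of what such harvesting involves, the following is a minimal Python sketch that extracts (image URL, alt text) pairs from already-fetched HTML; the class name, the word-count filter, and its threshold are illustrative assumptions, not details of the ALT200M pipeline (which also applies filtering and cleaning steps not described here).

```python
# A minimal sketch of alt-text harvesting, assuming raw HTML pages are
# already fetched. ALT200M's actual collection pipeline (crawling scope,
# deduplication, quality heuristics) is not specified in the abstract.
from html.parser import HTMLParser


class AltTextCollector(HTMLParser):
    """Collects (image URL, alt text) pairs from <img> tags."""

    def __init__(self):
        super().__init__()
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attr_map = dict(attrs)
        src = attr_map.get("src")
        alt = (attr_map.get("alt") or "").strip()
        # Keep only alt strings long enough to plausibly serve as captions;
        # the 3-word threshold is a hypothetical choice, not from the paper.
        if src and len(alt.split()) >= 3:
            self.pairs.append((src, alt))


collector = AltTextCollector()
collector.feed('<img src="https://example.com/cat.jpg" alt="a cat sleeping on a sofa">')
print(collector.pairs)  # [('https://example.com/cat.jpg', 'a cat sleeping on a sofa')]
```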