Image as a Foreign Language: BEIT Pretraining for Vision and Vision-Language Tasks
Abstract
A big convergence of language, vision, and multimodal pretraining is emerging. In this work, we introduce a general-purpose multimodal foundation model BEIT-3, which achieves excellent transfer performance on both vision and vision-language tasks. Specifically, we advance the big convergence from three aspects: backbone architecture, pretraining task, and model scaling up. We use Multiway Transformers for general-purpose modeling, where the modular architecture enables both deep fusion and modality-specific encoding. Based on the shared backbone, we perform masked “language” modeling on images (Imglish), texts (English), and image-text pairs (“parallel sentences”) in a unified manner. Experimental results show that BEIT-3 obtains remarkable performance on object detection (COCO), semantic segmentation (ADE20K), image classification (ImageNet), visual reasoning (NLVR2), visual question answering (VQAv2), image captioning (COCO), and cross-modal retrieval (Flickr30K, COCO).
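The Multiway Transformer mentioned above pairs a shared self-attention module with modality-specific feed-forward "experts", so vision and language tokens attend jointly but are transformed by separate FFNs. The following is a minimal, hypothetical sketch of that routing idea in NumPy (single head, random weights, no layer norm); all names and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MultiwayBlock:
    """Hypothetical sketch of one Multiway Transformer block:
    shared self-attention, plus one feed-forward expert per modality."""

    def __init__(self, dim, hidden, seed=0):
        rng = np.random.default_rng(seed)
        # shared attention projections (used by all modalities)
        self.wq = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.wk = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.wv = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        # modality-specific FFN experts
        self.experts = {
            m: (rng.standard_normal((dim, hidden)) / np.sqrt(dim),
                rng.standard_normal((hidden, dim)) / np.sqrt(hidden))
            for m in ("vision", "language")
        }

    def __call__(self, x, modalities):
        # x: (num_tokens, dim); modalities: per-token "vision"/"language" tags
        q, k, v = x @ self.wq, x @ self.wk, x @ self.wv
        attn = softmax(q @ k.T / np.sqrt(x.shape[-1]))
        h = x + attn @ v  # shared attention with residual
        # route each token through the FFN expert of its modality
        out = np.empty_like(h)
        for m, (w1, w2) in self.experts.items():
            idx = [i for i, mod in enumerate(modalities) if mod == m]
            if idx:
                out[idx] = h[idx] + np.maximum(h[idx] @ w1, 0.0) @ w2
        return out

# Example: an image-text pair, first three tokens are patches, last two are words
block = MultiwayBlock(dim=8, hidden=16)
tokens = np.random.default_rng(1).standard_normal((5, 8))
fused = block(tokens, ["vision"] * 3 + ["language"] * 2)
```

This separation is what lets one backbone serve as a vision encoder, a language encoder, or a fusion encoder depending on which experts the tokens are routed through.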