Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks
Abstract
An important goal of computer vision is to build systems that learn, over time, visual representations that can be applied to many tasks. In this paper, we investigate a vision-language embedding as a core representation and show that it leads to better cross-task transfer than standard multitask learning. In particular, the task of visual recognition is aligned to the task of visual question answering (VQA) by forcing both to use the same word-region embeddings. We show that this alignment yields greater inductive transfer from recognition to VQA than standard multitask learning. Visual recognition also improves, especially for categories that have relatively few recognition training labels but appear often in the VQA setting. Thus, our paper takes a small step towards creating more general vision systems by showing the benefit of interpretable, flexible, and trainable core representations.
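To make the shared-embedding idea concrete, here is a minimal sketch of how two task heads can be forced to score through one word-region embedding space. This is an illustrative PyTorch-style reconstruction under assumed dimensions and module names (`SharedEmbedding`, `RegionHead` scoring via region-word dot products), not the authors' released code or exact architecture.

```python
# Minimal sketch of a shared word-region embedding serving two tasks.
# All module names, dimensions, and the scoring rule are illustrative
# assumptions, not the paper's released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEmbedding(nn.Module):
    """Maps image regions and words into one joint embedding space."""
    def __init__(self, region_dim=2048, word_dim=300, joint_dim=512):
        super().__init__()
        self.region_proj = nn.Linear(region_dim, joint_dim)  # region features -> joint space
        self.word_proj = nn.Linear(word_dim, joint_dim)      # word vectors -> joint space

    def score(self, regions, words):
        # Cosine-style alignment: one score per (region, word) pair.
        r = F.normalize(self.region_proj(regions), dim=-1)   # (R, D)
        w = F.normalize(self.word_proj(words), dim=-1)       # (W, D)
        return r @ w.t()                                     # (R, W)

class RecognitionHead(nn.Module):
    """Classifies by aligning regions with category-name word vectors."""
    def __init__(self, shared, category_word_vecs):
        super().__init__()
        self.shared = shared
        self.register_buffer("cats", category_word_vecs)     # (C, word_dim)

    def forward(self, regions):
        # A category is present if some region aligns well with its name.
        return self.shared.score(regions, self.cats).max(dim=0).values  # (C,)

class VQAHead(nn.Module):
    """Answers by pooling regions with question-conditioned alignment."""
    def __init__(self, shared, num_answers, joint_dim=512):
        super().__init__()
        self.shared = shared
        self.classifier = nn.Linear(joint_dim, num_answers)

    def forward(self, regions, question_word_vecs):
        # Attention weights come from the same word-region scores.
        attn = self.shared.score(regions, question_word_vecs).mean(dim=1)  # (R,)
        attn = torch.softmax(attn, dim=0)
        pooled = attn @ self.shared.region_proj(regions)                   # (D,)
        return self.classifier(pooled)                                     # (num_answers,)
```

Because both heads score through the same `SharedEmbedding`, gradients from VQA training update the word-region projections used for recognition and vice versa, which is the mechanism behind the inductive transfer the abstract describes.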