Unraveling the Contribution of Image Captioning and Neural Machine Translation for Multimodal Machine Translation
The Prague Bulletin of Mathematical Linguistics, 2017, Vol. 108(1), pp. 197–208
Abstract
Recent work on multimodal machine translation has attempted to address the problem of producing target language image descriptions based on both the source language description and the corresponding image. However, existing work has not been conclusive on the contribution of visual information. This paper presents an in-depth study of the problem by examining the differences and complementarities of two related but distinct approaches to this task: text-only neural machine translation and image captioning. We analyse the scope for improvement and the effect of different data and settings when building models for these tasks. We also propose ways of combining these two approaches for improved translation quality.