Learning Deep Structure-Preserving Image-Text Embeddings
2016, pp. 5005–5013
Abstract
This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a large-margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by the metric learning literature. Extensive experiments show that our approach gains significant improvements in accuracy for image-to-text and text-to-image retrieval. Our method achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.
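The core idea in the abstract can be sketched as follows: each branch maps its modality through linear projections and a nonlinearity into a shared space, and matched image-sentence pairs are pushed closer than mismatched ones by a margin in both retrieval directions. This is a minimal numpy illustration, not the paper's implementation; the layer sizes, weight names, and the single hinge term per direction are assumptions for clarity, and the paper's additional within-view neighborhood-structure constraints are omitted here.

```python
import numpy as np

def embed(x, w1, w2):
    """One branch: linear -> ReLU -> linear -> L2 normalization."""
    h = np.maximum(x @ w1, 0.0)            # nonlinearity between projections
    z = h @ w2
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def bidirectional_ranking_loss(img, txt, margin=0.1):
    """Cross-view hinge loss over both retrieval directions.

    img, txt: (n, d) L2-normalized embeddings; row i of each is a matched pair.
    """
    sims = img @ txt.T                     # cosine similarity matrix
    pos = np.diag(sims)                    # similarities of matched pairs
    # image -> text: non-matching sentences should trail the match by `margin`
    i2t = np.maximum(0.0, margin + sims - pos[:, None])
    # text -> image: symmetric constraint
    t2i = np.maximum(0.0, margin + sims - pos[None, :])
    off = ~np.eye(sims.shape[0], dtype=bool)   # exclude positives themselves
    return (i2t[off].sum() + t2i[off].sum()) / sims.shape[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_img = rng.standard_normal((4, 8))    # toy image features
    x_txt = rng.standard_normal((4, 6))    # toy text features
    zi = embed(x_img, rng.standard_normal((8, 16)), rng.standard_normal((16, 5)))
    zt = embed(x_txt, rng.standard_normal((6, 16)), rng.standard_normal((16, 5)))
    print(bidirectional_ranking_loss(zi, zt))
```

In practice the two branches are trained jointly with SGD so that gradients from both ranking directions shape the shared space.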