Multimodal Convolutional Neural Networks for Matching Image and Sentence
Top 1% of 2015 papers by citations
Abstract
In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence. Our m-CNN provides an end-to-end framework with convolutional architectures that exploits image representation, word composition, and the matching relations between the two modalities. More specifically, it consists of one image CNN encoding the image content and one matching CNN modeling the joint representation of image and sentence. The matching CNN composes different semantic fragments from words and learns the inter-modal relations between the image and the composed fragments at different levels, thus fully exploiting the matching relations between image and sentence. Experimental results demonstrate that the proposed m-CNNs effectively capture the information necessary for image and sentence matching, significantly outperforming the state-of-the-art approaches for bidirectional image and sentence retrieval on the Flickr8K and Flickr30K datasets.
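The architecture described above can be sketched in a few lines of NumPy. This is a minimal toy illustration, not the paper's implementation: all dimensions, weight matrices, and the max-pooled MLP scorer are assumptions chosen for clarity, and training is omitted entirely. It only shows the data flow the abstract names: an image CNN stand-in producing an image vector, a 1-D convolution composing consecutive words into phrase-level fragments, and a matching score over image-fragment pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
D_RAW, D_IMG, D_WORD, N_WORDS, KERNEL, D_HID = 16, 8, 6, 5, 3, 4

def compose_fragments(words, W_conv):
    """1-D convolution over word embeddings: each window of KERNEL
    consecutive words is composed into one phrase-level fragment."""
    frags = []
    for i in range(words.shape[0] - KERNEL + 1):
        window = words[i:i + KERNEL].reshape(-1)   # (KERNEL * D_WORD,)
        frags.append(np.tanh(window @ W_conv))     # (D_WORD,)
    return np.stack(frags)                         # (N_WORDS-KERNEL+1, D_WORD)

def match_score(img_vec, fragments, W_mlp, v):
    """Score each (image, fragment) pair with a tiny MLP, then max-pool
    over fragments to get one image-sentence matching score."""
    scores = []
    for frag in fragments:
        joint = np.concatenate([img_vec, frag])    # joint representation
        scores.append(float(np.tanh(joint @ W_mlp) @ v))
    return max(scores)

# Toy inputs and randomly initialised weights (untrained sketch).
image_feat = rng.standard_normal(D_RAW)            # raw image feature
words = rng.standard_normal((N_WORDS, D_WORD))     # word embeddings
W_img = rng.standard_normal((D_RAW, D_IMG))        # image-CNN stand-in
W_conv = rng.standard_normal((KERNEL * D_WORD, D_WORD))
W_mlp = rng.standard_normal((D_IMG + D_WORD, D_HID))
v = rng.standard_normal(D_HID)

img_vec = np.tanh(image_feat @ W_img)              # encoded image content
fragments = compose_fragments(words, W_conv)       # composed word fragments
score = match_score(img_vec, fragments, W_mlp, v)  # inter-modal match score
print(fragments.shape, score)
```

In the actual model, stacking such convolution layers yields fragments at word, phrase, and sentence levels, each matched against the image; here a single layer stands in for the whole hierarchy.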