Images Don't Lie: Transferring Deep Visual Semantic Features to Large-Scale Multimodal Learning to Rank
Abstract
Search is at the heart of modern e-commerce. As a result, the task of ranking search results automatically (learning to rank) is a multibillion-dollar machine learning problem. Traditional models optimize over a few hand-constructed features based on the item's text. In this paper, we introduce a multimodal learning to rank model that combines these traditional features with visual semantic features transferred from a deep convolutional neural network. In a large-scale experiment using data from the online marketplace Etsy, we verify that moving to a multimodal representation significantly improves ranking quality. We show how image features can capture fine-grained style information not available in a text-only representation. In addition, we show concrete examples of how image information can successfully disentangle pairs of highly different items that are ranked similarly by a text-only model.
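The page gives no implementation details beyond the abstract, but the core idea it describes, concatenating hand-constructed text features with CNN-derived image features and training a ranker over the joint representation, can be sketched. The snippet below is a minimal illustration under stated assumptions, not the paper's method: the feature matrices are random placeholders (the paper's CNN, feature dimensions, and training procedure are not given on this page), and the RankSVM-style pairwise reduction is a standard learning-to-rank technique substituted for whatever ranker the authors used.

```python
# Minimal sketch of multimodal learning to rank (assumptions, not the paper's method):
# - text_feats stands in for hand-constructed text features
# - image_feats stands in for penultimate-layer activations of a pretrained CNN
# - a RankSVM-style pairwise reduction stands in for the paper's ranker
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_items, d_text, d_image = 200, 10, 64           # hypothetical sizes
text_feats = rng.normal(size=(n_items, d_text))   # placeholder text features
image_feats = rng.normal(size=(n_items, d_image)) # placeholder CNN features
relevance = rng.integers(0, 3, size=n_items)      # placeholder graded relevance labels

# Multimodal representation: concatenate the two modalities per item.
X = np.hstack([text_feats, image_feats])

# Pairwise reduction: for items i, j with different relevance, classify the
# sign of (x_i - x_j). The learned linear weights then score single items.
pairs, signs = [], []
for i in range(n_items):
    for j in range(i + 1, n_items):
        if relevance[i] != relevance[j]:
            pairs.append(X[i] - X[j])
            signs.append(1 if relevance[i] > relevance[j] else -1)

clf = LinearSVC(C=1.0, max_iter=5000).fit(np.array(pairs), np.array(signs))

# Rank items by the linear score w . x (higher = more relevant).
scores = X @ clf.coef_.ravel()
print("top 5 item indices:", np.argsort(-scores)[:5])
```

Swapping the placeholder matrices for real TF-IDF-style text features and features extracted from a pretrained CNN would turn this into a working baseline; the text-only model the abstract compares against corresponds to dropping the `image_feats` columns before training.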
Related Papers
- Ranking Arguments With Compensation-Based Semantics (2016)
- Learning Joint Multimodal Representation with Adversarial Attention Networks (2018)
- Semantics-Consistent Feature Search for Self-Supervised Visual Representation Learning (2023)
- Free ranking vs. rank-choosing: New insights on the conjunction fallacy (2021)
- Poster Abstract: Representation Learning from Multimodal Sensor Data with Maximally Correlated Autoencoders (2022)