CHEF: Cross-modal Hierarchical Embeddings for Food Domain Retrieval
Abstract
Despite the abundance of multi-modal data, such as image-text pairs, there has been little effort in understanding the individual entities and their different roles in the construction of these data instances. In this work, we endeavour to discover the entities and their corresponding importance in cooking recipes automatically as a visual-linguistic association problem. More specifically, we introduce a novel cross-modal learning framework to jointly model the latent representations of images and text in the food image-recipe association and retrieval tasks. This model allows one to discover complex functional and hierarchical relationships between images and text, and among the textual parts of a recipe, including the title, ingredients and cooking instructions. Our experiments show that by using an efficient tree-structured Long Short-Term Memory as the text encoder in our cross-modal retrieval framework, we are not only able to identify the main ingredients and cooking actions in the recipe descriptions without explicit supervision, but we can also learn more meaningful feature representations of food recipes, appropriate for challenging cross-modal retrieval and recipe adaptation tasks.
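The retrieval task described above can be illustrated with a minimal sketch: both modalities are projected into a shared embedding space, and retrieval ranks candidates by cosine similarity to the query. This is not the paper's implementation; the encoders, embeddings, and the `retrieve` helper below are illustrative stand-ins.

```python
import numpy as np

def retrieve(query_emb, candidate_embs, top_k=3):
    """Rank candidates by cosine similarity to a query embedding.

    In a cross-modal setup, query_emb might encode a food image and
    candidate_embs the recipe texts (or vice versa), assuming both
    have already been projected into a shared space by their encoders.
    """
    q = query_emb / np.linalg.norm(query_emb)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per candidate
    ranked = np.argsort(-scores)[:top_k]
    return ranked, scores

# Toy example: four candidate "recipe" embeddings and one "image" query
# that is a lightly perturbed copy of candidate 2, so it should rank first.
rng = np.random.default_rng(0)
cands = rng.normal(size=(4, 8))
query = cands[2] + 0.1 * rng.normal(size=8)
idx, scores = retrieve(query, cands)
print(idx[0])
```

In the actual framework, the text side of this shared space would come from the hierarchical tree-LSTM encoder over title, ingredients and instructions, while the image side would come from a visual encoder; only the final similarity-based ranking is sketched here.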