Learning Cross-Modal Embeddings for Cooking Recipes and Food Images
Author(s)
Salvador, Amaia; Hynes, Nicholas; Aytar, Yusuf; Marin, Javier; Ofli, Ferda; Weber, Ingmar; Torralba, Antonio
Download: Accepted version (6.850Mb)
Abstract
In this paper, we introduce Recipe1M, a new large-scale, structured corpus of over 1m cooking recipes and 800k food images. As the largest publicly available collection of recipe data, Recipe1M affords the ability to train high-capacity models on aligned, multi-modal data. Using these data, we train a neural network to find a joint embedding of recipes and images that yields impressive results on an image-recipe retrieval task. Additionally, we demonstrate that regularization via the addition of a high-level classification objective both improves retrieval performance to rival that of humans and enables semantic vector arithmetic. We postulate that these embeddings will provide a basis for further exploration of the Recipe1M dataset and food and cooking in general. Code, data and models are publicly available.
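To make the approach described above concrete, the following is a minimal PyTorch sketch of a joint embedding trained with a cosine alignment loss plus an auxiliary high-level classification objective. It is not the authors' exact architecture: the feature dimensions, class count, weight alpha, and negative sampling by batch shift are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    """Project per-modality features into a shared space; the
    classification head serves only as a training-time regularizer."""
    def __init__(self, img_dim=2048, rec_dim=1024, emb_dim=1024, n_classes=1048):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb_dim)
        self.rec_proj = nn.Linear(rec_dim, emb_dim)
        self.classifier = nn.Linear(emb_dim, n_classes)

    def forward(self, img_feats, rec_feats):
        # L2-normalize so cosine similarity reduces to a dot product.
        img_emb = F.normalize(self.img_proj(img_feats), dim=-1)
        rec_emb = F.normalize(self.rec_proj(rec_feats), dim=-1)
        return img_emb, rec_emb

def joint_loss(model, img_emb, rec_emb, labels, alpha=0.02):
    n = img_emb.size(0)
    pos = torch.ones(n, device=img_emb.device)
    # Pull matching image/recipe pairs together ...
    align = F.cosine_embedding_loss(img_emb, rec_emb, pos)
    # ... and push mismatched pairs apart (negatives formed by
    # shifting the recipe batch by one position; an assumption here).
    align = align + F.cosine_embedding_loss(img_emb, rec_emb.roll(1, 0), -pos)
    # Semantic regularization: embeddings from both modalities must
    # predict the same high-level food class, shaping the shared space.
    reg = F.cross_entropy(model.classifier(img_emb), labels) \
        + F.cross_entropy(model.classifier(rec_emb), labels)
    return align + alpha * reg

A quick usage example with random stand-ins for the per-modality features (in the paper these come from an image CNN and a recipe encoder, respectively):

model = JointEmbedding()
img = torch.randn(32, 2048)              # pooled image features
rec = torch.randn(32, 1024)              # encoded recipe features
labels = torch.randint(0, 1048, (32,))   # high-level food classes
loss = joint_loss(model, *model(img, rec), labels)
loss.backward()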
Date issued
2017-11
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Salvador, Amaia et al. "Learning Cross-Modal Embeddings for Cooking Recipes and Food Images." 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, Honolulu, Hawaii, USA, Institute of Electrical and Electronics Engineers (IEEE), November 2017 © 2017 IEEE
Version: Author's final manuscript
ISBN
9781538604571