Notice

This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/138346.2


dc.contributor.author: Han, C
dc.contributor.author: Mao, J
dc.contributor.author: Gan, C
dc.contributor.author: Tenenbaum, JB
dc.contributor.author: Wu, J
dc.date.accessioned: 2021-12-07T14:21:16Z
dc.date.available: 2021-12-07T14:21:16Z
dc.date.issued: 2019-01-01
dc.identifier.uri: https://hdl.handle.net/1721.1/138346
dc.description.abstract: © 2019 Neural Information Processing Systems Foundation. All rights reserved. Humans reason with concepts and metaconcepts: we recognize red and green from visual input; we also understand that they describe the same property of objects (i.e., the color). In this paper, we propose the visual concept-metaconcept learner (VCML) for joint learning of concepts and metaconcepts from images and associated question-answer pairs. The key is to exploit the bidirectional connection between visual concepts and metaconcepts. Visual representations provide grounding cues for predicting relations between unseen pairs of concepts. Knowing that red and green describe the same property of objects, we generalize to the fact that cube and sphere also describe the same property of objects, since they both categorize the shape of objects. Meanwhile, knowledge about metaconcepts empowers visual concept learning from limited, noisy, and even biased data. From just a few examples of purple cubes we can understand a new color, purple, which resembles the hue of the cubes rather than their shape. Evaluation on both synthetic and real-world datasets validates our claims.
dc.language.iso: en
dc.relation.isversionof: https://papers.nips.cc/paper/2019/hash/98d8a23fd60826a2a474c5b4f5811707-Abstract.html
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Neural Information Processing Systems (NIPS)
dc.title: Visual concept-metaconcept learning
dc.type: Article
dc.identifier.citation: Han, C, Mao, J, Gan, C, Tenenbaum, JB and Wu, J. 2019. "Visual concept-metaconcept learning." Advances in Neural Information Processing Systems, 32.
dc.relation.journal: Advances in Neural Information Processing Systems
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2021-12-07T14:19:09Z
dspace.orderedauthors: Han, C; Mao, J; Gan, C; Tenenbaum, JB; Wu, J
dspace.date.submission: 2021-12-07T14:19:11Z
mit.journal.volume: 32
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed

