Show simple item record

dc.contributor.author: Mao, Jiayuan
dc.contributor.author: Gan, Chuang
dc.contributor.author: Kohli, Pushmeet
dc.contributor.author: Tenenbaum, Joshua B
dc.contributor.author: Wu, Jiajun
dc.date.accessioned: 2020-08-14T19:37:54Z
dc.date.available: 2020-08-14T19:37:54Z
dc.date.issued: 2019-05
dc.date.submitted: 2018-09
dc.identifier.uri: https://hdl.handle.net/1721.1/126594
dc.description.abstract: We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns simply by looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable symbolic programs. To bridge the learning of the two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogous to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the search over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model in learning visual concepts, word representations, and semantic parsing of sentences. Further, our method generalizes easily to new object attributes, compositions, language concepts, scenes, questions, and even new program domains. It also supports applications including visual question answering and bidirectional image-text retrieval.
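The abstract describes executing symbolic programs against an object-based scene representation. As a purely illustrative sketch of that idea (not the authors' code — the operators, program format, and scene encoding here are all hypothetical simplifications), a symbolic program can be run as a pipeline of operations over a set of objects:

```python
# Illustrative sketch of neuro-symbolic program execution on an
# object-based scene. In NS-CL the attributes would be learned concept
# embeddings scored against object features; here they are plain
# symbols for clarity. All names below are hypothetical.

# A scene is a list of objects; each object is a dict of attributes.
scene = [
    {"color": "red", "shape": "cube"},
    {"color": "blue", "shape": "sphere"},
    {"color": "red", "shape": "sphere"},
]

def op_filter(objects, attr, value):
    """Keep only objects whose attribute matches the queried concept."""
    return [o for o in objects if o[attr] == value]

def op_count(objects):
    """Count the objects in the current selection."""
    return len(objects)

def op_query(objects, attr):
    """Return the attribute of the (assumed unique) selected object."""
    assert len(objects) == 1, "query expects exactly one object"
    return objects[0][attr]

def execute(program, scene):
    """Run a linearized symbolic program step by step against a scene."""
    state = scene
    for op, *args in program:
        if op == "filter":
            state = op_filter(state, *args)
        elif op == "count":
            state = op_count(state)
        elif op == "query":
            state = op_query(state, *args)
        else:
            raise ValueError(f"unknown operation: {op}")
    return state

# "How many red objects are there?" -> filter(color=red); count()
print(execute([("filter", "color", "red"), ("count",)], scene))
# "What shape is the blue object?" -> filter(color=blue); query(shape)
print(execute([("filter", "color", "blue"), ("query", "shape")], scene))
```

Because every intermediate `state` is an explicit set of objects (or a symbolic answer), each reasoning step is inspectable — the interpretability property the record's title alludes to.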
dc.language.iso: en
dc.publisher: International Conference on Learning Representations
dc.relation.isversionof: https://openreview.net/forum?id=rJgMlhRctm
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: arXiv
dc.title: The neuro-symbolic concept learner: Interpreting scenes, words, and sentences from natural supervision
dc.type: Article
dc.identifier.citation: Mao, Jiayuan et al. "The neuro-symbolic concept learner: Interpreting scenes, words, and sentences from natural supervision." ICLR 2019: 7th International Conference on Learning Representations, May 6-9, 2019, New Orleans, Louisiana: https://openreview.net/forum?id=rJgMlhRctm ©2019 Author(s)
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: MIT-IBM Watson AI Lab
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.relation.journal: ICLR 2019: International Conference on Learning Representations
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2019-10-08T16:06:37Z
dspace.date.submission: 2019-10-08T16:06:44Z
mit.journal.volume: 7
mit.metadata.status: Complete

