Show simple item record

dc.contributor.advisor: Boris Katz
dc.contributor.author: Mao, Cheahuychou
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.date.accessioned: 2020-03-24T15:36:43Z
dc.date.available: 2020-03-24T15:36:43Z
dc.date.copyright: 2019
dc.date.issued: 2019
dc.identifier.uri: https://hdl.handle.net/1721.1/124257
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
dc.description: Cataloged from student-submitted PDF version of thesis.
dc.description: Includes bibliographical references (pages 57-60).
dc.description.abstract: This thesis introduces a multimodal approach to natural language understanding by presenting a generative language-vision model that can generate videos for sentences, and a comprehensive approach for using this capability to solve natural language inference, video captioning, and video completion without task-specific training. The only training required is for acquiring a lexicon from captioned videos, similar to the way children learn language through exposure to perceptual cues. The model generates videos by sampling the visual features of objects described in the target sentences over time. The evaluation results show that the model can reliably generate videos for sentences describing multiple concurrent and sequential actions, and that the ability to reason about language using visual scenes enables language tasks to be reduced to vision tasks and solved more robustly using information obtained via vision.
dc.description.statementofresponsibility: by Cheahuychou Mao.
dc.format.extent: 60 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Electrical Engineering and Computer Science
dc.title: Understanding language through visual imagination
dc.type: Thesis
dc.description.degree: M. Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 1145123680
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
dspace.imported: 2020-03-24T15:36:42Z
mit.thesis.degree: Master
mit.thesis.department: EECS

