Understanding language through visual imagination
Author(s)
Mao, Cheahuychou.
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Boris Katz.
Abstract
This thesis introduces a multimodal approach to natural language understanding: a generative language-vision model that can generate videos for sentences, together with a comprehensive approach for using this capability to solve natural language inference, video captioning, and video completion without task-specific training. The only training required is for acquiring a lexicon from captioned videos, similar to the way children learn language through exposure to perceptual cues. The model generates videos by sampling the visual features of the objects described in the target sentences over time. The evaluation results show that the model can reliably generate videos for sentences describing multiple concurrent and sequential actions, and that the ability to reason about language using visual scenes allows language tasks to be reduced to vision tasks and solved more robustly using information obtained via vision.
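To make the generation idea concrete, here is a minimal toy sketch, not the thesis code: it assumes a hypothetical lexicon mapping a few words to visual object types, and produces a "video" by sampling each described object's visual features (here, just 2-D position) frame by frame. The real model samples feature trajectories learned from captioned videos; the random-walk dynamics, the `parse_objects` helper, and the vocabulary below are all illustrative assumptions.

```python
import random

def parse_objects(sentence):
    """Toy lexicon lookup: keep only the words we have visual features for.
    The vocabulary here is a hypothetical stand-in for a learned lexicon."""
    lexicon = {"person", "ball", "chair"}
    return [w for w in sentence.lower().split() if w in lexicon]

def generate_video(sentence, num_frames=8, seed=0):
    """Sample a sequence of frames; each frame maps an object to its
    sampled visual features at that time step."""
    rng = random.Random(seed)
    objects = parse_objects(sentence)
    # Each object starts at a sampled position in the unit square.
    state = {obj: {"x": rng.uniform(0, 1), "y": rng.uniform(0, 1)}
             for obj in objects}
    video = []
    for _ in range(num_frames):
        frame = {}
        for obj, feats in state.items():
            # Sample a small motion step for each feature (a random walk
            # standing in for learned temporal dynamics).
            feats = {k: v + rng.gauss(0, 0.05) for k, v in feats.items()}
            state[obj] = feats
            frame[obj] = dict(feats)
        video.append(frame)
    return video

video = generate_video("The person rolls the ball", num_frames=8)
```

Under this framing, a language task such as inference could be reduced to a vision task by generating videos for both sentences and comparing the resulting scenes, which is the reduction the abstract describes.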
Description
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 57-60).
Date issued
2019
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.