Show simple item record

dc.contributor.advisor: Boris Katz (en_US)
dc.contributor.author: Wang, Christopher Z. (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.date.accessioned: 2021-01-06T18:32:39Z
dc.date.available: 2021-01-06T18:32:39Z
dc.date.copyright: 2020 (en_US)
dc.date.issued: 2020 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/129173
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September 2020 (en_US)
dc.description: Cataloged from student-submitted PDF of thesis. (en_US)
dc.description: Includes bibliographical references (pages 51-55). (en_US)
dc.description.abstract: We train a semantic parser to discover latent meaning representations that can be used to execute natural language commands. We focus on the difficult domain of commands with a temporal aspect, whose semantics we capture using formulas in Linear Temporal Logic (LTL). Our parser learns using weak supervision and does not have access to any ground-truth representations during training; only execution demonstrations for each input command are observed. At training time, the parser hypothesizes a candidate formula for each input and updates its parameters according to the feedback of an executor. Our executor, a pre-trained planner, assigns a score to a hypothesis based on the likelihood of the corresponding execution traces. Three competing pressures allow our parser to learn. First, any hypothesized meaning for a sentence must be general enough to permit all behavior in the observed execution trajectories. Second, the executor rewards formulas that are more likely to have induced the specific behavior observed in the trajectories. Finally, a generator component, which reconstructs the input, encourages the model to conserve knowledge about the original command. On both human- and machine-generated sentences, we find that our weakly supervised approach performs at rates comparable to a fully supervised model at finding formulas that both reflect the observed demonstrations and elicit the correct executions from the planner. (en_US)
dc.description.statementofresponsibility: by Christopher Z. Wang (en_US)
dc.format.extent: 55 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science (en_US)
dc.title: Weakly supervised semantic parsing for Linear Temporal Logic (en_US)
dc.title.alternative: Weakly supervised semantic parsing for LTL (en_US)
dc.type: Thesis (en_US)
dc.description.degree: M. Eng. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.identifier.oclc: 1227277980 (en_US)
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science (en_US)
dspace.imported: 2021-01-06T18:32:37Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: EECS (en_US)
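The abstract describes parsing commands into LTL formulas and checking them against execution traces. As a minimal sketch of how an LTL formula can be evaluated over a finite trace, assuming an illustrative tuple encoding and operator set (not the representation used in the thesis):

```python
# Sketch: evaluating an LTL formula over a finite execution trace.
# The formula encoding and operator names are illustrative assumptions.

def holds(formula, trace, i=0):
    """Check whether `formula` holds at position i of a finite trace.

    A trace is a list of sets of atomic propositions true at each step.
    Formulas are nested tuples: ("atom", p), ("not", f), ("and", f, g),
    ("next", f), ("until", f, g), ("eventually", f), ("globally", f).
    """
    op = formula[0]
    if op == "atom":
        return formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "next":
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "until":
        # f until g: g must hold at some step j >= i,
        # with f holding at every step before j.
        for j in range(i, len(trace)):
            if holds(formula[2], trace, j):
                return True
            if not holds(formula[1], trace, j):
                return False
        return False
    if op == "eventually":
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "globally":
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# "Avoid the obstacle until the goal is reached."
trace = [{"safe"}, {"safe"}, {"goal"}]
spec = ("until", ("not", ("atom", "obstacle")), ("atom", "goal"))
```

A weakly supervised learner in this style would score a hypothesized `spec` by whether the observed demonstration traces satisfy it, without ever seeing a gold formula.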

