
dc.contributor.advisor    Shrobe, Howard
dc.contributor.author     Pei, Yixuan
dc.date.accessioned       2022-02-07T15:16:54Z
dc.date.available         2022-02-07T15:16:54Z
dc.date.issued            2021-09
dc.date.submitted         2021-11-03T19:25:25.983Z
dc.identifier.uri         https://hdl.handle.net/1721.1/139980
dc.description.abstract   Recent advances in deep learning model architectures have enabled state-of-the-art results in fields such as natural language processing (NLP) and computer vision (CV). Although these systems have matched and, in some cases, surpassed human performance, many are still treated as black boxes that sometimes produce unpredictable results. To shed some light on the behavior of natural language generation models, we examine the task of procedural text comprehension using neuro-symbolic techniques. We use this task as a testbed for exploring the limitations of state-of-the-art systems such as GPT at predicting the state changes that result from the text description of a procedure. We also experiment with whether and how symbolic augmentations can help these systems understand language. We see promising results from ConceptNet knowledge injection, and note that other augmentations produce more natural generations.
dc.publisher              Massachusetts Institute of Technology
dc.rights                 In Copyright - Educational Use Permitted
dc.rights                 Copyright MIT
dc.rights.uri             http://rightsstatements.org/page/InC-EDU/1.0/
dc.title                  Augmenting Transformers for Open Domain Procedural Text Comprehension
dc.type                   Thesis
dc.description.degree     M.Eng.
dc.contributor.department Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree         Master
thesis.degree.name        Master of Engineering in Electrical Engineering and Computer Science

