dc.contributor.advisor | Shrobe, Howard | |
dc.contributor.author | Pei, Yixuan | |
dc.date.accessioned | 2022-02-07T15:16:54Z | |
dc.date.available | 2022-02-07T15:16:54Z | |
dc.date.issued | 2021-09 | |
dc.date.submitted | 2021-11-03T19:25:25.983Z | |
dc.identifier.uri | https://hdl.handle.net/1721.1/139980 | |
dc.description.abstract | Recent advances in deep learning model architectures have enabled state-of-the-art results in fields such as NLP and CV. Although these systems have matched and, in some cases, surpassed human performance, many of them are still treated as black boxes with sometimes unpredictable results. To shed some light on the behavior of natural language generation models, we examine the task of procedural text comprehension using neuro-symbolic techniques. We use this task as a testbed for exploring the limitations of state-of-the-art systems such as GPT in predicting the state changes that result from the text description of a procedure. We also experiment with whether and how symbolic augmentations may help these systems understand language. We see promising results with ConceptNet knowledge injection and note that other augmentations produce more natural generations. | |
dc.publisher | Massachusetts Institute of Technology | |
dc.rights | In Copyright - Educational Use Permitted | |
dc.rights | Copyright MIT | |
dc.rights.uri | http://rightsstatements.org/page/InC-EDU/1.0/ | |
dc.title | Augmenting Transformers for Open Domain Procedural Text Comprehension | |
dc.type | Thesis | |
dc.description.degree | M.Eng. | |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
mit.thesis.degree | Master | |
thesis.degree.name | Master of Engineering in Electrical Engineering and Computer Science | |