On factuality in neural language models
Author(s)
Nadeem, Moin.
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
James Glass.
Abstract
In the past several years, language modeling has made significant advances on artificial benchmarks. Despite these advances, however, language models still face serious issues when deployed in real-world settings. In particular, these models tend to hallucinate facts and exhibit harmful societal biases that render them impractical in the real world. This thesis introduces datasets, models, and methodologies for studying how language models incorporate world factuality into their decision-making processes. First, I study how neural language models can be used to prove or disprove facts. Motivated by the results, I subsequently study how the choice of training tasks affects stance detection models. To study the acquisition of harmful knowledge, I build a dataset that probes models for their societal stereotypes. Finally, I extend this evaluation to a generative setting and study how the choice of sampling algorithm affects model factuality. Taken together, this thesis provides a comprehensive analysis of how language models capture world factuality via the pre-training process.
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February 2021. Cataloged from the official PDF of the thesis. Includes bibliographical references (pages 107-108).
Date issued
2021
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.