On factuality in neural language models
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
In the past several years, language modeling has made significant advances on artificial benchmarks. Despite these advances, however, language models still face significant issues when deployed in real-world settings. In particular, these models tend to hallucinate facts and exhibit harmful societal biases that render them impractical for real-world use. This thesis introduces datasets, models, and methodologies for studying how language models incorporate world factuality into their decision-making processes. First, I study how neural language models can be used to prove or disprove facts. Motivated by the results, I subsequently study how the choice of training tasks affects stance detection models. To study the acquisition of harmful knowledge, I build a dataset that probes models for societal stereotypes. Finally, I extend this evaluation to a generative setting and study how the choice of sampling algorithm affects model factuality. Taken together, this thesis provides a comprehensive analysis of how language models capture world factuality via the pre-training process.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February 2021.
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 107-108).