A Case for Pre-trained Language Models in Systems Engineering
Author(s)
Lim, Shao Cong
Advisor
Rebentisch, Eric S.
Abstract
Modern engineered systems are immensely complex, and extensive sets of natural language requirements guide their development. Tools that assist systems engineers in managing and extracting information from these requirements must therefore scale to match that complexity. However, the systems engineering community has lagged in adopting advanced natural language processing techniques. Pre-trained language models, such as BERT, represent the state of the art in the field. This thesis investigates whether these pre-trained language models can achieve higher model performance at a lower computational and labor cost than earlier techniques. The results show that adapting these language models through task-adaptive pretraining leads to consistent improvements in model performance and greater model robustness. These results indicate the potential of applying such language models in the systems engineering domain. However, much work remains to improve model performance and expand possible applications.
Date issued
2022-09
Department
System Design and Management Program
Publisher
Massachusetts Institute of Technology