
dc.contributor.advisor: Rebentisch, Eric S.
dc.contributor.author: Lim, Shao Cong
dc.date.accessioned: 2023-01-19T19:48:01Z
dc.date.available: 2023-01-19T19:48:01Z
dc.date.issued: 2022-09
dc.date.submitted: 2022-10-12T16:05:12.911Z
dc.identifier.uri: https://hdl.handle.net/1721.1/147405
dc.description.abstract: Modern engineered systems are immensely complex, and extensive sets of natural language requirements guide their development. Tools that assist systems engineers in managing and extracting information from these requirements must therefore scale to match the complexity of these systems. However, the systems engineering community has lagged in adopting advanced natural language processing techniques. Pre-trained language models, such as BERT, represent the state of the art in the field. This thesis seeks to understand whether these pre-trained language models can achieve higher model performance at a lower computational and manpower cost than earlier techniques. The results show that adapting these language models through task-adaptive pretraining leads to consistent improvements in model performance and greater model robustness. These results indicate the potential of applying such language models in the systems engineering domain, although much work remains to improve model performance and expand possible applications.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright MIT
dc.rights.uri: http://rightsstatements.org/page/InC-EDU/1.0/
dc.title: A Case for Pre-trained Language Models in Systems Engineering
dc.type: Thesis
dc.description.degree: S.M.
dc.contributor.department: System Design and Management Program.
mit.thesis.degree: Master
thesis.degree.name: Master of Science in Engineering and Management

