
dc.contributor.advisor: Peter Szolovits. [en_US]
dc.contributor.author: Pham, Mai Phuong, M. Eng., Massachusetts Institute of Technology. [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. [en_US]
dc.date.accessioned: 2020-09-15T21:58:15Z
dc.date.available: 2020-09-15T21:58:15Z
dc.date.copyright: 2020 [en_US]
dc.date.issued: 2020 [en_US]
dc.identifier.uri: https://hdl.handle.net/1721.1/127443
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020 [en_US]
dc.description: Cataloged from the official PDF of thesis. [en_US]
dc.description: Includes bibliographical references (pages 59-61). [en_US]
dc.description.abstract: Over the past decade, question answering (QA) has been an active area of research in natural language processing (NLP). Despite much progress on general-knowledge tasks, question answering in specialized domains, such as healthcare and medicine, has not seen a breakthrough due to the lack of large, reliable datasets. Moreover, asking Mechanical Turk participants to write questions about the texts, a common approach for general-knowledge QA datasets, is often not applicable to specialized domains because of the complexity of the texts and the need for specialized knowledge. Introduced in 2018, Clinical Case Report (CliCR, [24]) is one of only a few QA datasets in the medical domain. The dataset was built from BMJ clinical case reports and contains more than 100,000 gap-filling queries about these cases. The lack of human-authored natural questions is a challenge for this dataset, as is the generalizability of NLP models trained on it. [en_US]
dc.description.abstract: This thesis explores different approaches to the question answering task on gap-filling queries. Beyond frameworks designed specifically for fill-in-the-blank tasks, I show that systematic modifications to the queries allow other approaches, such as language models, to outperform conventional methods. The BioBert QA model ([14]) achieves 55.2 exact match (EM) accuracy and a 59.8 F1 score on CliCR, higher than both the current best performer, the gated-attention machine reader (EM=22.2, F1=32.2, [6]), and human expert readers (EM=35, F1=53.7, [24]). Moreover, this work seeks to understand whether language models such as BioBert ([14]) focus on basic linguistic elements of a question (wh- question words, cloze position, and the question mark). Through a series of experiments across three different QA datasets and visualization of trained attention heads, some weak attention patterns are identified. [en_US]
dc.description.abstract: However, combined with further analysis of the role of question words in the QA task, it becomes clear that BERT models might not focus on question words, the cloze position, or the question mark. Future extensions of this thesis should seek to understand the role of questions in the QA task using language models. [en_US]
dc.description.statementofresponsibility: by Mai Phuong Pham. [en_US]
dc.format.extent: 61 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Electrical Engineering and Computer Science. [en_US]
dc.title: Machine comprehension for clinical case reports [en_US]
dc.type: Thesis [en_US]
dc.description.degree: M. Eng. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.identifier.oclc: 1192966370 [en_US]
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science [en_US]
dspace.imported: 2020-09-15T21:58:15Z [en_US]
mit.thesis.degree: Master [en_US]
mit.thesis.department: EECS [en_US]
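
To make the abstract's "systematic modifications to the queries" concrete, below is a minimal sketch of one way a gap-filling (cloze) query could be rewritten into a natural question that a span-extraction model such as BioBert can consume. The @placeholder token, the cloze_to_question helper, and the single rewrite rule are assumptions of this illustration, not the thesis's actual transformation rules.

def cloze_to_question(query: str, placeholder: str = "@placeholder") -> str:
    """Rewrite a cloze query as a wh-question (illustrative heuristic only)."""
    # Simplest possible rewrite: substitute the gap with "what" and turn the
    # statement into a question. Real rules would need to handle many cases.
    question = query.replace(placeholder, "what")
    question = question.rstrip(". ") + "?"
    # Capitalize the first character so the output reads as a question.
    return question[0].upper() + question[1:]

if __name__ == "__main__":
    q = "Treatment with @placeholder led to rapid resolution of the rash."
    print(cloze_to_question(q))
    # -> "Treatment with what led to rapid resolution of the rash?"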
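
The exact match (EM) and F1 numbers quoted in the abstract are the standard extractive-QA metrics. The sketch below follows the common SQuAD-style definitions (answer normalization, then string equality for EM and token-overlap F1); whether CliCR's official scorer normalizes text in exactly this way is an assumption.

import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, drop articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    print(exact_match("the rash", "rash"))                       # 1.0
    print(round(f1_score("severe skin rash", "skin rash"), 3))   # 0.8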

