
Machine comprehension for clinical case reports

Author(s)
Pham, Mai Phuong, M. Eng., Massachusetts Institute of Technology
Download: 1192966370-MIT.pdf (2.029 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Peter Szolovits.
Terms of use
MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Over the past decade, question answering (QA) has been an active area of research in natural language processing (NLP). Despite much progress on general-knowledge tasks, question answering in specialized domains, such as healthcare and medicine, has not seen a breakthrough due to the lack of large, reliable datasets. Moreover, asking Mechanical Turk participants to pose questions about the texts, a common approach for general-knowledge QA datasets, is often not applicable to specialized domains because of the complexity of the texts and the need for specialized knowledge. Introduced in 2018, Clinical Case Report (CliCR, [24]) is one of the few QA datasets in the medical domain. The dataset was built from BMJ clinical case reports and contains more than 100,000 gap-filling queries about these cases. The lack of naturally posed human questions is a challenge for this dataset, as is the generalizability of NLP models trained on it.
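To make the task concrete, below is a minimal sketch of what a gap-filling (cloze) query over a case report might look like. The clinical passage, the field names, and the "@placeholder" marker are invented for illustration and are not taken from the CliCR release.

    # Hypothetical gap-filling query in the style described above.
    # The passage, field names, and "@placeholder" marker are
    # illustrative assumptions, not CliCR's actual schema.
    passage = ("A 54-year-old woman presented with fatigue and joint pain. "
               "Laboratory tests confirmed a diagnosis of systemic lupus "
               "erythematosus.")

    query = {
        "context": passage,
        # The answer entity is blanked out of a sentence about the case.
        "question": "Laboratory tests confirmed a diagnosis of @placeholder.",
        "answer": "systemic lupus erythematosus",
    }
    print(query["question"])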
 
This thesis explores different approaches to the question answering task on gap-filling queries. Besides frameworks designed specifically for fill-in-the-blank tasks, I show that systematic modifications to the queries allow other approaches, such as language models, to outperform conventional ones. The BioBERT QA model ([14]) achieves 55.2 exact match (EM) accuracy and a 59.8 F1 score on CliCR, higher than the current best performer, the gated-attention machine reader (EM=22.2, F1=32.2, [6]), and human expert readers (EM=35, F1=53.7, [24]). Moreover, this work seeks to understand whether language models such as BioBERT ([14]) focus on basic linguistic elements of a question (wh- question words, cloze position, and the question mark). Through a series of experiments across three different QA datasets and visualization of trained attention heads, some weak attention patterns are identified.
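For reference, the EM and F1 numbers above follow the standard extractive-QA definitions. The sketch below shows a simplified version of those metrics; the official SQuAD-style scripts additionally strip punctuation and articles before comparing.

    from collections import Counter

    def exact_match(prediction: str, gold: str) -> bool:
        # EM: the normalized prediction must equal the normalized gold answer.
        return prediction.strip().lower() == gold.strip().lower()

    def f1_score(prediction: str, gold: str) -> float:
        # Token-level F1: harmonic mean of precision and recall
        # over the tokens shared by prediction and gold answer.
        pred_tokens = prediction.lower().split()
        gold_tokens = gold.lower().split()
        common = Counter(pred_tokens) & Counter(gold_tokens)
        num_same = sum(common.values())
        if num_same == 0:
            return 0.0
        precision = num_same / len(pred_tokens)
        recall = num_same / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    # A partial answer earns partial F1 credit but no EM credit:
    print(exact_match("lupus erythematosus", "systemic lupus erythematosus"))  # False
    print(f1_score("lupus erythematosus", "systemic lupus erythematosus"))     # 0.8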
 
However, when combined with further analysis of the role of question words in the QA task, these results suggest that BERT models might not focus on question words, cloze position, or the question mark. Future extensions of this thesis should seek to understand the role of questions in the QA task with language models.
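As a pointer for such an extension, the sketch below shows one common way to pull per-head attention weights from a BERT-family model with the Hugging Face transformers library. The checkpoint name is a stand-in, and this is not the thesis's actual analysis pipeline.

    import torch
    from transformers import AutoModel, AutoTokenizer

    # Illustrative checkpoint; the thesis uses BioBERT, whose exact
    # model-hub name is an assumption here.
    name = "bert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name, output_attentions=True)

    inputs = tokenizer("What confirmed the diagnosis?", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # outputs.attentions: one (batch, heads, seq_len, seq_len) tensor per layer.
    # Inspect how much each last-layer head attends to the question mark.
    attn = outputs.attentions[-1][0]  # last layer, first batch item
    qmark_id = tokenizer.convert_tokens_to_ids("?")
    qmark_idx = inputs["input_ids"][0].tolist().index(qmark_id)
    print(attn[:, :, qmark_idx].mean(dim=-1))  # per-head mean attention to "?"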
 
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020
 
Cataloged from the official PDF of thesis.
 
Includes bibliographical references (pages 59-61).
 
Date issued
2020
URI
https://hdl.handle.net/1721.1/127443
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses
