Show simple item record

dc.contributor.advisor	Peter Szolovits.	en_US
dc.contributor.author	Gür, Burkay	en_US
dc.contributor.other	Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.	en_US
dc.date.accessioned	2013-02-13T21:24:05Z
dc.date.available	2013-02-13T21:24:05Z
dc.date.copyright	2012	en_US
dc.date.issued	2012	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/76817
dc.description	Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.	en_US
dc.description	This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.	en_US
dc.description	Cataloged from student-submitted PDF version of thesis.	en_US
dc.description	Includes bibliographical references (p. 73-74).	en_US
dc.description.abstract	Accurate and comprehensive data form the lifeblood of health care. Unfortunately, there is much evidence that current data collection methods sometimes fail. Our hypothesis is that it should be possible to improve the thoroughness and quality of information gathered through clinical encounters by developing a computer system that (a) listens to a conversation between a patient and a provider, (b) uses automatic speech recognition technology to transcribe that conversation to text, (c) applies natural language processing methods to extract the important clinical facts from the conversation, (d) presents this information in real time to the participants, permitting correction of errors in understanding, and (e) organizes those facts into an encounter note that could serve as a first draft of the note produced by the clinician. In this thesis, we present our attempts to measure the performance of two state-of-the-art automatic speech recognizers (ASRs) on the task of transcribing clinical conversations, and explore ways of optimizing these software packages for this specific task. In the course of this thesis, we have (1) introduced a new method for quantitatively measuring the difference between two language models and shown that conversational and dictated speech have different underlying language models, (2) measured the perplexity of clinical conversations and dictations and shown that spontaneous speech has a higher perplexity than dictated speech, (3) improved speech recognition accuracy through language model adaptation using a conversational corpus, and (4) introduced a fast and simple algorithm for cross-talk elimination in two-speaker settings.	en_US
dc.description.statementofresponsibility	by Burkay Gür.	en_US
dc.format.extent	74 p.	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582	en_US
dc.subject	Electrical Engineering and Computer Science.	en_US
dc.title	Improving speech recognition accuracy for clinical conversations	en_US
dc.type	Thesis	en_US
dc.description.degree	M.Eng.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc	825763209	en_US


