
dc.contributor.advisor: Henry C. Chueh and G. Octo Barnett. [en_US]
dc.contributor.author: Lasko, Thomas A. (Thomas Anton), 1965- [en_US]
dc.contributor.other: Harvard University--MIT Division of Health Sciences and Technology. [en_US]
dc.date.accessioned: 2005-09-27T17:10:48Z
dc.date.available: 2005-09-27T17:10:48Z
dc.date.copyright: 2004 [en_US]
dc.date.issued: 2004 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/28587
dc.description: Thesis (S.M.)--Harvard-MIT Division of Health Sciences and Technology, 2004. [en_US]
dc.description: Includes bibliographical references (p. 37-39). [en_US]
dc.description.abstract: This paper demonstrates that one can infer with respectable accuracy a physician's view of the therapeutic relationship that he or she has with a given patient, using data available in the patient's electronic medical record. In this study, we differentiate between the active primary relationship, the inactive primary or non-attending relationship, and the coverage relationship. We demonstrate that a single model built using the Averaged One-Dependence Estimator (AODE) classifier and learned with six attributes taken from patient visit history and physician practice characteristics can, for most of the 18 physicians tested, differentiate patients with a coverage relationship to a given physician from those with a primary relationship, achieving accuracies of 0.90 or greater as determined by the area under the receiver operating characteristic curve. Three of the 18 datasets had too few coverage patients to adequately characterize. We also demonstrate that, surprisingly, physicians are generally of like mind when assessing the therapeutic relationship that they have with a given patient. We find that for all physicians in our sample, a model built individually with any one physician's assessments performs statistically identically to the model built from the assessments of all other physicians combined. As a sub-goal of this research, we test the performance of different attribute selection methods on our dataset, comparing greedy vs. randomized search and wrapper vs. filter evaluators and finding no practical difference between them for our data. We also test the performance of several different classifiers, with AODE emerging as the best choice for this dataset. Lastly, we test the performance of linear vs. non-linear meta-learners for Stacked Generalization on our dataset, and find no increase in accuracy for the more complex meta-learners. [en_US]
dc.description.statementofresponsibility: by Thomas A. Lasko. [en_US]
dc.format.extent: 45 p. [en_US]
dc.format.extent: 2778670 bytes
dc.format.extent: 2781919 bytes
dc.format.mimetype: application/pdf
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Harvard University--MIT Division of Health Sciences and Technology. [en_US]
dc.title: When my patient is not my patient : inferring primary-care relationships using machine learning [en_US]
dc.type: Thesis [en_US]
dc.description.degree: S.M. [en_US]
dc.contributor.department: Harvard University--MIT Division of Health Sciences and Technology
dc.identifier.oclc: 57489996 [en_US]
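
The abstract above reports that an Averaged One-Dependence Estimator (AODE), trained on six attributes drawn from patient visit history and physician practice characteristics, separates coverage from primary relationships with areas under the ROC curve of 0.90 or greater for most physicians. As a rough, hypothetical sketch of that style of evaluation (not the thesis pipeline), the Python snippet below uses scikit-learn's CategoricalNB as a stand-in for AODE, which scikit-learn does not provide, on synthetic placeholder attributes and labels; the feature encoding, fold count, and class definitions are assumptions for illustration only.

```python
# Hypothetical sketch: ROC-AUC evaluation of a naive-Bayes-family classifier
# on six discretized attributes, loosely mirroring the coverage-vs-primary
# task described in the abstract. Data and model are stand-ins, not the
# thesis setup (AODE itself is not available in scikit-learn).
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 500

# Six discretized attributes standing in for visit-history and
# physician-practice features (names and ranges are illustrative).
X = rng.integers(0, 4, size=(n_patients, 6))
# Synthetic label: 1 = coverage relationship, 0 = primary relationship.
y = rng.integers(0, 2, size=n_patients)

# min_categories guards against a category appearing only in a test fold.
clf = CategoricalNB(min_categories=4)

# Out-of-fold probability estimates, then the area under the ROC curve.
proba = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]
print("ROC AUC:", roc_auc_score(y, proba))
```

On real visit-history data the attributes would of course be derived from the electronic medical record rather than sampled at random; the point of the sketch is only the shape of the cross-validated AUC evaluation.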
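
The abstract also mentions comparing linear and non-linear meta-learners for Stacked Generalization and finding no accuracy gain from the more complex meta-learners. The sketch below shows one way such a comparison could be set up with scikit-learn's StackingClassifier; the base learners, meta-learners, and synthetic data are illustrative assumptions, not the configuration used in the thesis.

```python
# Hypothetical sketch: stacked generalization with a linear vs. a non-linear
# meta-learner, compared by cross-validated ROC AUC on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: six features, binary outcome.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# Illustrative base learners; the thesis compared its own classifier set.
base = [("nb", GaussianNB()), ("rf", RandomForestClassifier(random_state=0))]

linear_stack = StackingClassifier(
    estimators=base, final_estimator=LogisticRegression())
nonlinear_stack = StackingClassifier(
    estimators=base,
    final_estimator=MLPClassifier(max_iter=1000, random_state=0))

for name, model in [("linear meta-learner", linear_stack),
                    ("non-linear meta-learner", nonlinear_stack)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(name, round(scores.mean(), 3))
```

If the two mean scores come out essentially equal, that mirrors the abstract's finding that the more complex meta-learner adds no accuracy for this kind of problem.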

