Show simple item record

dc.contributor.author: Boominathan, Soorajnath
dc.contributor.author: Oberst, Michael
dc.contributor.author: Zhou, Helen
dc.contributor.author: Kanjilal, Sanjat
dc.contributor.author: Sontag, David
dc.date.accessioned: 2021-11-08T16:46:37Z
dc.date.available: 2021-11-08T16:46:37Z
dc.date.issued: 2020-08
dc.identifier.uri: https://hdl.handle.net/1721.1/137708
dc.description.abstract: © 2020 Owner/Author. In several medical decision-making problems, such as antibiotic prescription, laboratory testing can provide precise indications for how a patient will respond to different treatment options. This enables us to "fully observe" all potential treatment outcomes, but while present in historical data, these results are infeasible to produce in real-time at the point of the initial treatment decision. Moreover, treatment policies in these settings often need to trade off between multiple competing objectives, such as effectiveness of treatment and harmful side effects. We present, compare, and evaluate three approaches for learning individualized treatment policies in this setting: First, we consider two indirect approaches, which use predictive models of treatment response to construct policies optimal for different trade-offs between objectives. Second, we consider a direct approach that constructs such a set of policies without intermediate models of outcomes. Using a medical dataset of Urinary Tract Infection (UTI) patients, we show that all approaches learn policies that achieve strictly better performance on all outcomes than clinicians, while also trading off between different objectives. We demonstrate additional benefits of the direct approach, including flexibly incorporating other goals such as deferral to physicians on simple cases. (en_US)
dc.language.iso: en
dc.publisher: ACM (en_US)
dc.relation.isversionof: 10.1145/3394486.3403245 (en_US)
dc.rights: Creative Commons Attribution 4.0 International license (en_US)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (en_US)
dc.source: ACM (en_US)
dc.title: Treatment Policy Learning in Multiobjective Settings with Fully Observed Outcomes (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Boominathan, Soorajnath, Oberst, Michael, Zhou, Helen, Kanjilal, Sanjat and Sontag, David. 2020. "Treatment Policy Learning in Multiobjective Settings with Fully Observed Outcomes." Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Institute for Medical Engineering & Science
dc.relation.journal: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (en_US)
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/JournalArticle (en_US)
eprint.status: http://purl.org/eprint/status/PeerReviewed (en_US)
dc.date.updated: 2021-01-26T18:52:35Z
dspace.orderedauthors: Boominathan, S; Oberst, M; Zhou, H; Kanjilal, S; Sontag, D (en_US)
dspace.date.submission: 2021-01-26T18:52:39Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed (en_US)
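The abstract's "indirect approach" — fitting a predictive model of each outcome per treatment, then choosing the treatment that minimizes a weighted combination of predicted outcomes, with the weight sweeping out different trade-offs — can be sketched roughly as below. This is a hypothetical illustration on synthetic data, not the authors' implementation; the treatments, outcome definitions, and `cost` penalties are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for "fully observed" outcomes: in this setting,
# historical lab results reveal, for each patient, how every candidate
# treatment would have turned out.
n, d = 500, 5
X = rng.normal(size=(n, d))
treatments = ["A", "B"]
# fail[t][i] = 1 if treatment t fails for patient i (known from lab data).
fail = {t: (rng.random(n) < 1 / (1 + np.exp(-X[:, i]))).astype(int)
        for i, t in enumerate(treatments)}
# A fixed per-treatment penalty standing in for a competing objective,
# e.g. harmful side effects or breadth of the antibiotic.
cost = {"A": 0.0, "B": 1.0}

# Indirect approach: one predictive model of failure per treatment.
models = {t: LogisticRegression().fit(X, fail[t]) for t in treatments}

def policy(x, lam):
    """Pick the treatment minimizing predicted P(failure) + lam * cost."""
    scores = {t: models[t].predict_proba(x.reshape(1, -1))[0, 1] + lam * cost[t]
              for t in treatments}
    return min(scores, key=scores.get)

# Sweeping lam traces out a family of policies, one per trade-off point.
x0 = X[0]
chosen = [policy(x0, lam) for lam in (0.0, 0.5, 10.0)]
```

At a large `lam` the side-effect penalty dominates the (bounded) failure probabilities, so the policy always prefers the cheaper treatment; at `lam = 0` it purely minimizes predicted failure. The direct approach described in the abstract instead learns such a set of policies without fitting these intermediate outcome models.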

