Show simple item record

dc.contributor.author  Jin, Di
dc.contributor.author  Sergeeva, Elena
dc.contributor.author  Weng, Wei-Hung
dc.contributor.author  Chauhan, Geeticka
dc.contributor.author  Szolovits, Peter
dc.date.accessioned  2022-07-20T18:41:09Z
dc.date.available  2022-07-20T18:41:09Z
dc.date.issued  2022
dc.identifier.uri  https://hdl.handle.net/1721.1/143908
dc.description.abstract  The increasing availability of large collections of electronic health record (EHR) data and unprecedented technical advances in deep learning (DL) have sparked a surge of research interest in developing DL-based clinical decision support systems for diagnosis, prognosis, and treatment. Despite the recognized value of deep learning in healthcare, its black-box nature remains an impediment to wider adoption in real healthcare settings. There is therefore an emerging need for interpretable DL, which allows end users to evaluate the model's decision making and decide whether to accept or reject its predictions and recommendations before an action is taken. In this review, we focus on the interpretability of DL models in healthcare. We begin with a comprehensive, in-depth introduction to interpretability methods, intended as a methodological reference for future researchers and clinical practitioners in this field. Beyond the methods' details, we also discuss their advantages and disadvantages and the scenarios each is suited to, so that interested readers can compare them and choose among them for their use case. Moreover, we discuss how these methods, originally developed for general-domain problems, have been adapted and applied to healthcare problems and how they can help physicians better understand these data-driven technologies. Overall, we hope this survey helps researchers and practitioners in both the artificial intelligence and clinical fields understand which methods are available for enhancing the interpretability of their DL models and choose the optimal one accordingly. This article is categorized under: Cancer > Computational Models.  en_US
dc.language.iso  en
dc.publisher  Wiley  en_US
dc.relation.isversionof  10.1002/WSBM.1548  en_US
dc.rights  Creative Commons Attribution-Noncommercial-Share Alike  en_US
dc.rights.uri  http://creativecommons.org/licenses/by-nc-sa/4.0/  en_US
dc.source  Other repository  en_US
dc.title  Explainable deep learning in healthcare: A methodological survey from an attribution view  en_US
dc.type  Article  en_US
dc.identifier.citation  Jin, Di, Sergeeva, Elena, Weng, Wei-Hung, Chauhan, Geeticka and Szolovits, Peter. 2022. "Explainable deep learning in healthcare: A methodological survey from an attribution view." WIREs Mechanisms of Disease, 14 (3).
dc.contributor.department  Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.relation.journal  WIREs Mechanisms of Disease  en_US
dc.eprint.version  Author's final manuscript  en_US
dc.type.uri  http://purl.org/eprint/type/JournalArticle  en_US
eprint.status  http://purl.org/eprint/status/PeerReviewed  en_US
dc.date.updated  2022-07-20T18:36:05Z
dspace.orderedauthors  Jin, D; Sergeeva, E; Weng, W-H; Chauhan, G; Szolovits, P  en_US
dspace.date.submission  2022-07-20T18:36:07Z
mit.journal.volume  14  en_US
mit.journal.issue  3  en_US
mit.license  OPEN_ACCESS_POLICY
mit.metadata.status  Authority Work and Publication Information Needed  en_US

