Show simple item record

dc.contributor.author    Yala, Adam
dc.contributor.author    Mikhael, Peter G
dc.contributor.author    Lehman, Constance
dc.contributor.author    Lin, Gigin
dc.contributor.author    Strand, Fredrik
dc.contributor.author    Wan, Yung-Liang
dc.contributor.author    Hughes, Kevin
dc.contributor.author    Satuluru, Siddharth
dc.contributor.author    Kim, Thomas
dc.contributor.author    Banerjee, Imon
dc.contributor.author    Gichoya, Judy
dc.contributor.author    Trivedi, Hari
dc.contributor.author    Barzilay, Regina
dc.date.accessioned    2022-05-25T18:40:35Z
dc.date.available    2022-05-25T18:40:35Z
dc.date.issued    2022-01
dc.identifier.uri    https://hdl.handle.net/1721.1/142737
dc.description.abstract    Screening programs must balance the benefit of early detection with the cost of overscreening. Here, we introduce a novel reinforcement learning-based framework for personalized screening, Tempo, and demonstrate its efficacy in the context of breast cancer. We trained our risk-based screening policies on a large screening mammography dataset from Massachusetts General Hospital (MGH; USA) and validated these policies on held-out patients from MGH and on external datasets from Emory University (Emory; USA), Karolinska Institute (Karolinska; Sweden) and Chang Gung Memorial Hospital (CGMH; Taiwan). Across all test sets, we find that the Tempo policy combined with an image-based artificial intelligence (AI) risk model is significantly more efficient than current regimens used in clinical practice in terms of simulated early detection per screen frequency. Moreover, we show that the same Tempo policy can be easily adapted to a wide range of possible screening preferences, allowing clinicians to select their desired trade-off between early detection and screening costs without training new policies. Finally, we demonstrate that Tempo policies based on AI-based risk models outperform Tempo policies based on less accurate clinical risk models. Altogether, our results show that pairing AI-based risk models with agile AI-designed screening policies has the potential to improve screening programs by advancing early detection while reducing overscreening.    en_US
dc.language.iso    en
dc.publisher    Springer Science and Business Media LLC    en_US
dc.relation.isversionof    10.1038/s41591-021-01599-w    en_US
dc.rights    Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International    en_US
dc.rights.uri    https://creativecommons.org/licenses/by-nc-sa/4.0/    en_US
dc.source    Other Repository    en_US
dc.title    Optimizing risk-based breast cancer screening policies with reinforcement learning    en_US
dc.type    Article    en_US
dc.identifier.citation    Yala, Adam, Mikhael, Peter G, Lehman, Constance, Lin, Gigin, Strand, Fredrik et al. 2022. "Optimizing risk-based breast cancer screening policies with reinforcement learning." Nature Medicine, 28 (1).
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.relation.journal    Nature Medicine    en_US
dc.eprint.version    Author's final manuscript    en_US
dc.type.uri    http://purl.org/eprint/type/JournalArticle    en_US
eprint.status    http://purl.org/eprint/status/PeerReviewed    en_US
dc.date.updated    2022-05-25T18:19:39Z
dspace.orderedauthors    Yala, A; Mikhael, PG; Lehman, C; Lin, G; Strand, F; Wan, Y-L; Hughes, K; Satuluru, S; Kim, T; Banerjee, I; Gichoya, J; Trivedi, H; Barzilay, R    en_US
dspace.date.submission    2022-05-25T18:19:42Z
mit.journal.volume    28    en_US
mit.journal.issue    1    en_US
mit.license    OPEN_ACCESS_POLICY
mit.metadata.status    Authority Work and Publication Information Needed    en_US
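
The abstract above describes Tempo as a risk-based screening policy whose early-detection versus screening-cost trade-off can be adjusted by clinicians without retraining. The Python sketch below is purely illustrative of that interface idea under stated assumptions: the function name, risk thresholds, interval grid, and cost weighting are hypothetical placeholders, not the authors' Tempo implementation or trained policy.

from dataclasses import dataclass

@dataclass
class ScreeningRecommendation:
    months_until_next_screen: int
    risk_score: float

def recommend_interval(risk_score: float, cost_weight: float) -> ScreeningRecommendation:
    """Map an image-based AI risk score (0-1) and a screening-cost preference
    (larger cost_weight means fewer screens are tolerated) to a follow-up interval.

    A trained RL policy would be optimized to maximize simulated early detection
    minus cost_weight times screening frequency; here we simply threshold the
    risk score to sketch the input/output interface.
    """
    candidate_intervals = [6, 12, 24, 36]  # months; hypothetical action space
    # Higher risk or lower screening-cost penalty leads to a shorter interval.
    urgency = risk_score - 0.05 * cost_weight
    if urgency > 0.3:
        months = candidate_intervals[0]
    elif urgency > 0.15:
        months = candidate_intervals[1]
    elif urgency > 0.05:
        months = candidate_intervals[2]
    else:
        months = candidate_intervals[3]
    return ScreeningRecommendation(months_until_next_screen=months, risk_score=risk_score)

if __name__ == "__main__":
    # The same policy evaluated under different clinician-chosen trade-offs;
    # no retraining is needed, only a different cost_weight.
    for cost_weight in (0.0, 2.0, 5.0):
        rec = recommend_interval(risk_score=0.22, cost_weight=cost_weight)
        print(cost_weight, rec.months_until_next_screen)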

