| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Schamberg, Gabriel | |
| dc.contributor.author | Badgeley, Marcus | |
| dc.contributor.author | Brown, Emery Neal | |
| dc.date.accessioned | 2021-11-22T20:03:16Z | |
| dc.date.available | 2021-11-22T17:24:17Z | |
| dc.date.available | 2021-11-22T20:03:16Z | |
| dc.date.issued | 2020 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/138187.2 | |
| dc.description.abstract | Reinforcement Learning (RL) can be used to fit a mapping from patient state to a medication regimen. Prior studies have used deterministic and value-based tabular learning to learn a propofol dose from an observed anesthetic state. Deep RL replaces the table with a deep neural network and has been used to learn medication regimens from registry databases. Here we perform the first application of deep RL to closed-loop control of anesthetic dosing in a simulated environment. We use the cross-entropy method to train a deep neural network to map an observed anesthetic state to a probability of infusing a fixed propofol dosage. During testing, we implement a deterministic policy that transforms the probability of infusion to a continuous infusion rate. The model is trained and tested on simulated pharmacokinetic/pharmacodynamic models with randomized parameters to ensure robustness to patient variability. The deep RL agent significantly outperformed a proportional-integral-derivative controller (median absolute performance error 1.7% ± 0.6 and 3.4% ± 1.2). Modeling continuous input variables instead of a table affords more robust pattern recognition and utilizes our prior domain knowledge. Deep RL learned a smooth policy with a natural interpretation to data scientists and anesthesia care providers alike. | en_US |
| dc.description.sponsorship | National Institutes of Health (Grant P01 GM118629) | en_US |
| dc.language.iso | en | |
| dc.publisher | Springer International Publishing | en_US |
| dc.relation.isversionof | 10.1007/978-3-030-59137-3_3 | en_US |
| dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
| dc.source | arXiv | en_US |
| dc.title | Controlling Level of Unconsciousness by Titrating Propofol with Deep Reinforcement Learning | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Schamberg, Gabriel, Badgeley, Marcus and Brown, Emery N. 2020. "Controlling Level of Unconsciousness by Titrating Propofol with Deep Reinforcement Learning." Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12299. | en_US |
| dc.contributor.department | Picower Institute for Learning and Memory | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences | en_US |
| dc.relation.journal | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | en_US |
| dc.eprint.version | Author's final manuscript | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2021-11-22T17:18:42Z | |
| dspace.orderedauthors | Schamberg, G; Badgeley, M; Brown, EN | en_US |
| dspace.date.submission | 2021-11-22T17:18:43Z | |
| mit.journal.volume | 12299 | en_US |
| mit.license | OPEN_ACCESS_POLICY | |
| mit.metadata.status | Publication Information Needed | en_US |
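
The abstract above outlines the approach at a high level: a deep network trained with the cross-entropy method maps an observed anesthetic state to the probability of infusing a fixed propofol dose, with training and testing on simulated pharmacokinetic/pharmacodynamic models that have randomized parameters. The Python sketch below illustrates how a cross-entropy-method training loop of this kind is typically structured; the `ToyPKPD` environment, network sizes, reward, episode lengths, and all parameter values are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): cross-entropy-method training of a
# small policy network that maps an observed anesthetic state to a probability of
# infusing a fixed propofol dose, using a toy first-order surrogate for a PK/PD model.
# All names, dimensions, dynamics, and constants below are assumptions.

import numpy as np
import torch
import torch.nn as nn


class ToyPKPD:
    """Toy one-compartment surrogate: the drug effect decays and rises with infusion."""

    def __init__(self, rng):
        # Randomized "patient" parameters, echoing the randomized PK/PD models
        # described in the abstract (values here are arbitrary).
        self.decay = rng.uniform(0.05, 0.15)
        self.gain = rng.uniform(0.08, 0.2)
        self.effect = 0.0
        self.target = 0.7  # desired level of unconsciousness (arbitrary units)

    def step(self, infuse):
        # Advance the effect-site state one step and reward proximity to the target.
        self.effect += -self.decay * self.effect + self.gain * infuse
        error = abs(self.effect - self.target)
        return np.array([self.effect, self.target], dtype=np.float32), -error


# Policy network: observed state -> probability of infusing the fixed dose.
policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()
rng = np.random.default_rng(0)

for iteration in range(30):
    episodes = []
    for _ in range(32):                      # batch of simulated episodes
        env = ToyPKPD(rng)
        obs, _ = env.step(0)
        states, actions, ret = [], [], 0.0
        for _ in range(50):                  # fixed-length episode
            p = policy(torch.from_numpy(obs)).item()
            a = 1 if rng.random() < p else 0  # stochastic binary infusion decision
            states.append(obs)
            actions.append(a)
            obs, r = env.step(a)
            ret += r
        episodes.append((ret, states, actions))

    # Cross-entropy method: keep the elite episodes and fit the policy to
    # reproduce their state -> action pairs with a cross-entropy loss.
    returns = np.array([e[0] for e in episodes])
    threshold = np.percentile(returns, 70)
    elite_s = [s for ret, ss, aa in episodes if ret >= threshold for s in ss]
    elite_a = [a for ret, ss, aa in episodes if ret >= threshold for a in aa]

    s = torch.from_numpy(np.stack(elite_s))
    a = torch.tensor(elite_a, dtype=torch.float32).unsqueeze(1)
    opt.zero_grad()
    loss = loss_fn(policy(s), a)
    loss.backward()
    opt.step()
    print(f"iter {iteration:02d}  mean return {returns.mean():.3f}")
```

At test time, the abstract describes replacing the stochastic decision with a deterministic policy that converts the infusion probability into a continuous infusion rate; in a sketch like this, that would correspond to reading the network's probability output directly (scaled by the fixed dose) rather than sampling a binary infusion action.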