dc.contributor.advisor | Peter Szolovits. | en_US |
dc.contributor.author | Vajapey, Anuhya. | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2019-12-05T18:04:52Z | |
dc.date.available | 2019-12-05T18:04:52Z | |
dc.date.copyright | 2019 | en_US |
dc.date.issued | 2019 | en_US |
dc.identifier.uri | https://hdl.handle.net/1721.1/123126 | |
dc.description | This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. | en_US |
dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 | en_US |
dc.description | Cataloged from student-submitted PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 56-58). | en_US |
dc.description.abstract | Administering sedation so that patients are neither underdosed nor overdosed is an important clinical task that remains hard to control because current methods of measuring sedation lack precision. The type of drug administered, the procedure the patient is undergoing, patient characteristics (age, gender, weight, height), and even genotype can affect how a patient's body processes the sedation administered. Currently, sedation is administered by an attending anesthesiologist, who sets a target sedation level, continuously monitors the patient with an EEG, and adjusts the target level accordingly. In this thesis, I apply Fitted Q-Iteration to learn a reinforcement learning model that takes a patient's current state and predicts the sedation dosage to administer at each second of the procedure, keeping the patient's physiological variables within clinically normal ranges. I experiment with different state and action representations to demonstrate how these choices affect the learned policy. I evaluate the results qualitatively and quantitatively using Doubly Robust Policy Evaluation. | en_US |
dc.description.statementofresponsibility | by Anuhya Vajapey. | en_US |
dc.format.extent | 58 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Predicting optimal sedation control with reinforcement learning | en_US |
dc.type | Thesis | en_US |
dc.description.degree | M. Eng. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.identifier.oclc | 1128277299 | en_US |
dc.description.collection | M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
dspace.imported | 2019-12-05T18:04:51Z | en_US |
mit.thesis.degree | Master | en_US |
mit.thesis.department | EECS | en_US |