Uncertainty Quantification in Deep Learning Models of G-Computation for Outcome Prediction under Dynamic Treatment Regimes
Author(s)
Deng, Leon
Advisor
Mark, Roger G.
Lehman, Li-wei H.
Abstract
G-Net is a neural network framework that implements g-computation, a causal inference method for making counterfactual predictions and estimating treatment effects under dynamic and time-varying treatment regimes. Two G-Net variants have been implemented: one that uses recurrent neural networks (RNNs) as its predictors, and one that uses transformer encoders (G-Transformer). However, one limitation of G-Net is that its counterfactual predictive density estimates do not account for uncertainty in the model parameter estimates. Such uncertainty estimates are needed to establish confidence intervals around effect estimates, enabling a robust assessment of whether the effects of two treatment options differ with statistical significance. Adding support for quantifying model uncertainty in conditional effect estimation is therefore an important direction. This thesis adds uncertainty quantification to both the RNN-based G-Net and the G-Transformer, using two well-known techniques in uncertainty modeling: variational dropout and deep ensembling. We evaluate our methods on two simulated datasets based on mechanistic models, and demonstrate that G-Net and G-Transformer models with uncertainty quantification are better calibrated and perform better for individual-level clinical decision making than their baseline counterparts.
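The two uncertainty techniques named in the abstract are generic and can be illustrated independently of G-Net. The following is a minimal NumPy sketch, not the thesis's implementation: the toy one-hidden-layer network, the dropout rate, the layer sizes, and the ensemble size are all hypothetical choices made for illustration. It shows the mechanics of both approaches: Monte Carlo dropout (keeping dropout active at prediction time and averaging over stochastic passes) and deep ensembling (averaging over independently initialized networks, with their disagreement serving as a model-uncertainty signal).

```python
import numpy as np

def init_params(rng, d_in=1, d_hidden=32):
    # Toy one-hidden-layer MLP; sizes are illustrative only.
    return {
        "W1": rng.normal(0.0, 1.0, (d_in, d_hidden)),
        "b1": np.zeros(d_hidden),
        "W2": rng.normal(0.0, 0.3, (d_hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(params, x, rng=None, p_drop=0.2):
    h = np.tanh(x @ params["W1"] + params["b1"])
    if rng is not None:
        # MC dropout: dropout stays active at prediction time (and the
        # surviving units are rescaled), so each pass samples a
        # different sub-network and yields a different prediction.
        mask = rng.random(h.shape) > p_drop
        h = h * mask / (1.0 - p_drop)
    return h @ params["W2"] + params["b2"]

x = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)

# --- MC dropout: many stochastic passes through one network ---
rng = np.random.default_rng(0)
params = init_params(rng)
mc_preds = np.stack([forward(params, x, rng=rng) for _ in range(100)])
mc_mean, mc_std = mc_preds.mean(axis=0), mc_preds.std(axis=0)

# --- Deep ensembling: one deterministic pass per independently
# initialized network; spread across members approximates
# uncertainty about the model parameters ---
ensemble = [init_params(np.random.default_rng(seed)) for seed in range(10)]
ens_preds = np.stack([forward(p, x) for p in ensemble])
ens_mean, ens_std = ens_preds.mean(axis=0), ens_preds.std(axis=0)
```

In either case, the per-input standard deviation (`mc_std` or `ens_std`) is what turns a point prediction into an interval, which is the ingredient the abstract describes as missing from the baseline G-Net models. A real application would of course train each network before prediction; the untrained networks here only demonstrate the sampling mechanics.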
Date issued
2024-09

Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science

Publisher
Massachusetts Institute of Technology