Causal effect inference with deep latent-variable models
Author(s)
Louizos, Christos; Shalit, Uri; Mooij, Joris; Sontag, David Alexander; Zemel, Richard; Welling, Max
Terms of use
Publisher Policy: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of confounders: factors that affect both an intervention and its outcome. A carefully designed observational study attempts to measure all important confounders. However, even when one does not have direct access to all confounders, there may exist noisy and uncertain measurements of proxies for them. We build on recent advances in latent-variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect. Our method is based on Variational Autoencoders (VAEs), which follow the causal structure of inference with proxies. We show that our method is significantly more robust than existing methods and matches the state of the art on previous benchmarks focused on individual treatment effects.
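The core idea the abstract describes is to recover a posterior over the hidden confounder from its noisy proxies and then average the outcome model over that posterior, rather than comparing treated and control groups directly. The sketch below is a minimal illustration of that adjustment on a synthetic linear-Gaussian toy problem where the posterior is available in closed form; it is an assumed stand-in for exposition, not the authors' VAE (CEVAE) implementation, whose learned encoder plays the role of the closed-form posterior when the true model is unknown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data-generating process (a hypothetical linear-Gaussian stand-in,
# not the paper's model):
#   z ~ N(0, 1)                       hidden confounder
#   x = z + noise                     noisy proxy of z
#   t ~ Bernoulli(sigmoid(2 z))       treatment assignment depends on z
#   y = t + 2 z + noise               outcome depends on treatment and z
n = 20_000
z = rng.normal(0.0, 1.0, n)
x = z + rng.normal(0.0, 1.0, n)
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-2.0 * z)))
y = t + 2.0 * z + rng.normal(0.0, 0.5, n)

# Naive comparison of treated vs. control means is biased by the confounder.
naive = y[t == 1].mean() - y[t == 0].mean()

# Proxy adjustment in the spirit of the paper:
#   E[y | do(t)] = E_x E_{z ~ p(z|x)} E[y | t, z]
# For this toy model, p(z | x) = N(x / 2, 1 / 2) in closed form, and
# E[y | t, z] = t + 2 z by construction.
def expected_outcome(do_t, x_obs, n_draws=100):
    post_mean, post_std = x_obs / 2.0, np.sqrt(0.5)
    z_draws = rng.normal(post_mean, post_std, size=(n_draws, x_obs.size))
    return (do_t + 2.0 * z_draws).mean()

ate_adjusted = expected_outcome(1.0, x) - expected_outcome(0.0, x)
print(f"naive difference in means: {naive:.2f}")        # inflated by confounding
print(f"proxy-adjusted ATE:        {ate_adjusted:.2f}")  # true ATE is 1.0
```

On this toy problem the naive difference in means overstates the effect, because treated units have systematically higher z, while averaging the outcome model over the proxy-based posterior recovers the true effect of 1.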
Date issued
2017-12
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Institute for Medical Engineering & Science
Journal
Advances in Neural Information Processing Systems
Citation
2017. "Causal effect inference with deep latent-variable models." Advances in Neural Information Processing Systems, 2017-December.
Version: Final published version