Learning and testing causal models with interventions
Author(s)
Acharya, J; Bhattacharyya, A; Daskalakis, C; Kandasamy, S
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Terms of use
Metadata
Abstract
© 2018 Curran Associates Inc. All rights reserved. We consider testing and learning problems on causal Bayesian networks as defined by Pearl [Pea09]. Given a causal Bayesian network M on a graph with n discrete variables and bounded in-degree and bounded “confounded components”, we show that O(log n) interventions on an unknown causal Bayesian network X on the same graph, and Õ(n/ε²) samples per intervention, suffice to efficiently distinguish whether X = M or whether there exists some intervention under which X and M are farther than ε in total variation distance. We also obtain sample/time/intervention efficient algorithms for: (i) testing the identity of two unknown causal Bayesian networks on the same graph; and (ii) learning a causal Bayesian network on a given graph. Although our algorithms are non-adaptive, we show that adaptivity does not help in general: Ω(log n) interventions are necessary for testing the identity of two unknown causal Bayesian networks on the same graph, even adaptively. Our algorithms are enabled by a new subadditivity inequality for the squared Hellinger distance between two causal Bayesian networks.
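The abstract's distance guarantees are stated in total variation distance, while the key technical tool is the squared Hellinger distance. As a minimal illustration of the two quantities (not code from the paper; the function names and example distributions are illustrative), here is how they are computed for discrete distributions, along with the standard fact that the squared Hellinger distance lower-bounds the total variation distance:

```python
from math import sqrt

def tv_distance(p, q):
    # Total variation distance: half the L1 distance between the
    # probability vectors p and q.
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def squared_hellinger(p, q):
    # Squared Hellinger distance: H^2(p, q) = 1 - sum_i sqrt(p_i * q_i),
    # i.e. one minus the Bhattacharyya coefficient.
    return 1.0 - sum(sqrt(pi * qi) for pi, qi in zip(p, q))

if __name__ == "__main__":
    # Two hypothetical distributions over a binary variable.
    p = [0.5, 0.5]
    q = [0.9, 0.1]
    print(tv_distance(p, q))        # 0.4
    print(squared_hellinger(p, q))  # ~0.1056, and H^2 <= TV always holds
```

The standard relation H²(p, q) ≤ TV(p, q) ≤ H(p, q)·√(2 − H²(p, q)) is what lets a subadditivity bound proved for squared Hellinger distance transfer to guarantees in total variation distance.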
Date issued
2018-01-01
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Advances in Neural Information Processing Systems
Citation
Acharya, J, Bhattacharyya, A, Daskalakis, C and Kandasamy, S. 2018. "Learning and testing causal models with interventions." Advances in Neural Information Processing Systems, 2018-December.
Version: Final published version