
dc.contributor.author: Greenewald, Kristjan
dc.contributor.author: Katz, Dmitriy
dc.contributor.author: Shanmugam, Karthikeyan
dc.contributor.author: Magliacane, Sara
dc.contributor.author: Kocaoglu, Murat
dc.contributor.author: Boix-Adsera, Enric
dc.contributor.author: Bresler, Guy
dc.date.accessioned: 2021-03-04T15:48:45Z
dc.date.available: 2021-03-04T15:48:45Z
dc.date.issued: 2019-12
dc.identifier.issn: 1049-5258
dc.identifier.uri: https://hdl.handle.net/1721.1/130081
dc.description.abstract: We consider the problem of experimental design for learning causal graphs that have a tree structure. We propose an adaptive framework that determines the next intervention based on a Bayesian prior updated with the outcomes of previous experiments, focusing on the setting where observational data is cheap (assumed infinite) and interventional data is expensive. While information-greedy approaches are popular in active learning, we show that in this setting they can be exponentially suboptimal (in the number of interventions required), and instead propose an algorithm that exploits graph structure in the form of a centrality measure. If each intervention yields a very large data sample, we show that the algorithm requires at most twice the minimum achievable number of interventions. We show that the algorithm and the associated theory can be adapted to the setting where each performed intervention yields finitely many samples. Several extensions are also presented: to the case where a specified set of nodes cannot be intervened on, to the case where K interventions are scheduled at once, and to the fully adaptive case where each experiment yields only one sample. In the case of finite interventional data, simulated experiments show that our algorithms outperform several adaptive baseline algorithms.en_US
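The centrality idea mentioned in the abstract can be illustrated with a minimal sketch (this is an illustration of the general principle, not the authors' implementation; the tree and node labels are hypothetical): intervening at a centroid of the candidate tree, the node whose removal leaves the smallest largest component, roughly halves the set of candidates remaining after each intervention, which is the intuition behind a factor-2 guarantee.

```python
def centroid(adj):
    """Return a centroid of an undirected tree given as {node: set(neighbors)}.

    A centroid minimizes the size of the largest component left after
    the node is removed; in a tree this size is at most n/2.
    """
    n = len(adj)
    best_node, best_score = None, n + 1
    for v in adj:
        # Measure each component of the tree with v removed via DFS.
        largest = 0
        seen = {v}
        for start in adj[v]:
            stack, size = [start], 0
            seen.add(start)
            while stack:
                u = stack.pop()
                size += 1
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            largest = max(largest, size)
        if largest < best_score:
            best_node, best_score = v, largest
    return best_node

# Path graph 1-2-3-4-5: removing the middle node 3 leaves two
# components of size 2, so 3 is the centroid.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(centroid(adj))  # -> 3
```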
dc.language.iso: en
dc.publisher: Morgan Kaufmann Publishersen_US
dc.relation.isversionof: https://papers.nips.cc/paper/2019/hash/5ee5605917626676f6a285fa4c10f7b0-Abstract.htmlen_US
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.en_US
dc.source: Neural Information Processing Systems (NIPS)en_US
dc.title: Sample efficient active learning of causal treesen_US
dc.type: Articleen_US
dc.identifier.citation: Greenewald, Kristjan et al. “Sample efficient active learning of causal trees.” Paper in the Advances in Neural Information Processing Systems, 32, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, Dec 8-14, 2019, Morgan Kaufmann © 2019 The Author(s)en_US
dc.contributor.department: MIT-IBM Watson AI Laben_US
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Scienceen_US
dc.relation.journal: Advances in Neural Information Processing Systemsen_US
dc.eprint.version: Final published versionen_US
dc.type.uri: http://purl.org/eprint/type/ConferencePaperen_US
eprint.status: http://purl.org/eprint/status/NonPeerRevieweden_US
dc.date.updated: 2020-12-03T16:31:17Z
dspace.orderedauthors: Greenewald, K; Katz, D; Shanmugam, K; Magliacane, S; Kocaoglu, M; Boix-Adserà, E; Bresler, Gen_US
dspace.date.submission: 2020-12-03T16:31:22Z
mit.journal.volume: 32en_US
mit.license: PUBLISHER_POLICY
mit.metadata.status: Complete

