Show simple item record

dc.contributor.author  Bresler, Guy
dc.contributor.author  Gamarnik, David
dc.contributor.author  Shah, Devavrat
dc.date.accessioned  2016-02-01T18:32:57Z
dc.date.available  2016-02-01T18:32:57Z
dc.date.issued  2014
dc.identifier.issn  1049-5258
dc.identifier.uri  http://hdl.handle.net/1721.1/101040
dc.description.abstract  In this paper we investigate the computational complexity of learning the graph structure underlying a discrete undirected graphical model from i.i.d. samples. Our first result is an unconditional computational lower bound of Ω(p^{d/2}) for learning general graphical models on p nodes of maximum degree d, for the class of statistical algorithms recently introduced by Feldman et al. The construction is related to the notoriously difficult learning parities with noise problem in computational learning theory. Our lower bound shows that the Õ(p^{d+2}) runtime required by Bresler, Mossel, and Sly's exhaustive-search algorithm cannot be significantly improved without restricting the class of models. Aside from structural assumptions on the graph such as it being a tree, hypertree, tree-like, etc., most recent papers on structure learning assume that the model has the correlation decay property. Indeed, focusing on ferromagnetic Ising models, Bento and Montanari showed that all known low-complexity algorithms fail to learn simple graphs when the interaction strength exceeds a number related to the correlation decay threshold. Our second set of results gives a class of repelling (antiferromagnetic) models that have the opposite behavior: very strong repelling allows efficient learning in time Õ(p^2). We provide an algorithm whose performance interpolates between Õ(p^2) and Õ(p^{d+2}) depending on the strength of the repulsion.  en_US
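The exhaustive-search baseline the abstract refers to can be illustrated with a toy sketch. This is not the authors' algorithm, only a generic neighborhood search under assumptions of my own: each node's neighborhood is found by scoring every candidate set of size at most d with empirical conditional entropy plus a size penalty (the penalty value is an illustrative choice). Enumerating candidate sets already costs on the order of p^d per node, which is the kind of runtime the paper's lower bound addresses.

```python
import itertools
import math
import random
from collections import Counter


def ising_samples(edges, theta, p, n, rng):
    """Draw n i.i.d. samples from a small Ising model by exact enumeration.

    P(x) is proportional to exp(theta * sum over edges (i,j) of x_i * x_j),
    with x in {-1,+1}^p. Enumerating all 2^p states is only feasible for
    tiny p, which is enough for a demonstration.
    """
    states = list(itertools.product((-1, +1), repeat=p))
    weights = [math.exp(theta * sum(s[i] * s[j] for i, j in edges))
               for s in states]
    return rng.choices(states, weights=weights, k=n)


def cond_entropy(samples, i, S):
    """Empirical conditional entropy H(x_i | x_S) in nats."""
    n = len(samples)
    joint = Counter((tuple(s[j] for j in S), s[i]) for s in samples)
    marg = Counter(tuple(s[j] for j in S) for s in samples)
    return -sum(c / n * math.log(c / marg[ctx]) for (ctx, _), c in joint.items())


def learn_neighborhood(samples, i, p, d, penalty=0.05):
    """Exhaustively score all candidate neighborhoods of size <= d for node i.

    The size penalty (an assumption of this sketch, not from the paper)
    discourages spurious conditioning variables; with enough samples and
    strong coupling, the true neighbors give a large entropy drop while
    non-neighbors give essentially none.
    """
    others = [j for j in range(p) if j != i]
    candidates = (S for k in range(d + 1)
                  for S in itertools.combinations(others, k))
    best = min(candidates,
               key=lambda S: cond_entropy(samples, i, S) + penalty * len(S))
    return frozenset(best)
```

On a 4-node chain with coupling theta = 1.0 and a few thousand samples, the penalized score is minimized by each node's true neighborhood; the point of the sketch is only to make the cost of the candidate enumeration concrete.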
dc.description.sponsorship  National Science Foundation (U.S.) (Grant CMMI-1335155)  en_US
dc.description.sponsorship  National Science Foundation (U.S.) (Grant CNS-1161964)  en_US
dc.description.sponsorship  United States. Army Research Office. Multidisciplinary University Research Initiative (Award W911NF-11-1-0036)  en_US
dc.language.iso  en_US
dc.publisher  Neural Information Processing Systems Foundation  en_US
dc.relation.isversionof  https://papers.nips.cc/paper/5319-structure-learning-of-antiferromagnetic-ising-models  en_US
dc.rights  Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.  en_US
dc.source  MIT web domain  en_US
dc.title  Structure Learning of Antiferromagnetic Ising Models  en_US
dc.type  Article  en_US
dc.identifier.citation  Bresler, Guy, David Gamarnik, and Devavrat Shah. "Structure Learning of Antiferromagnetic Ising Models." Advances in Neural Information Processing Systems 27 (NIPS 2014).  en_US
dc.contributor.department  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  en_US
dc.contributor.department  Massachusetts Institute of Technology. Laboratory for Information and Decision Systems  en_US
dc.contributor.department  Sloan School of Management  en_US
dc.contributor.mitauthor  Bresler, Guy  en_US
dc.contributor.mitauthor  Gamarnik, David  en_US
dc.contributor.mitauthor  Shah, Devavrat  en_US
dc.relation.journal  Advances in Neural Information Processing Systems (NIPS)  en_US
dc.eprint.version  Author's final manuscript  en_US
dc.type.uri  http://purl.org/eprint/type/ConferencePaper  en_US
eprint.status  http://purl.org/eprint/status/NonPeerReviewed  en_US
dspace.orderedauthors  Bresler, Guy; Gamarnik, David; Shah, Devavrat  en_US
dc.identifier.orcid  https://orcid.org/0000-0001-8898-8778
dc.identifier.orcid  https://orcid.org/0000-0003-0737-3259
dc.identifier.orcid  https://orcid.org/0000-0003-1303-582X
mit.license  PUBLISHER_POLICY  en_US

