Exploiting Compositionality to Explore a Large Space of Model Structures
Author(s): Grosse, Roger Baker; Salakhutdinov, Ruslan; Freeman, William T.; Tenenbaum, Joshua B.
The recent proliferation of richly structured probabilistic models raises the question of how to automatically determine an appropriate model for a dataset. We investigate this question for a space of matrix decomposition models which can express a variety of widely used models from unsupervised learning. To enable model selection, we organize these models into a context-free grammar which generates a wide variety of structures through the compositional application of a few simple rules. We use our grammar to generically and efficiently infer latent components and estimate predictive likelihood for nearly 2500 structures using a small toolbox of reusable algorithms. Using a greedy search over our grammar, we automatically choose the decomposition structure from raw data by evaluating only a small fraction of all models. The proposed method typically finds the correct structure for synthetic data and backs off gracefully to simpler models under heavy noise. It learns sensible structures for datasets as diverse as image patches, motion capture, 20 Questions, and U.S. Senate votes, all using exactly the same code.
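The search procedure described in the abstract — expanding structures by applying grammar rules compositionally, then greedily keeping the best-scoring refinement — can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the production rules shown and the `toy_score` function are stand-ins (the paper scores candidates by estimated predictive likelihood, and its actual grammar covers many more decomposition types).

```python
# Hypothetical sketch of greedy search over a context-free grammar of
# model structures. Rules and scoring are illustrative assumptions.

PRODUCTIONS = {
    # Expand a Gaussian matrix G into richer structures
    # (low-rank, clustering, binary features -- illustrative subset).
    "G": ["GG+G", "MG+G", "BG+G"],
}

def expand(structure):
    """Yield every structure reachable by applying one production
    rule to one nonterminal occurrence in `structure`."""
    for i, sym in enumerate(structure):
        for rhs in PRODUCTIONS.get(sym, []):
            yield structure[:i] + "(" + rhs + ")" + structure[i + 1:]

def greedy_search(score, start="G", max_depth=3):
    """Greedily refine the current best structure, stopping when no
    child improves the score (a stand-in for predictive likelihood)."""
    best, best_score = start, score(start)
    for _ in range(max_depth):
        children = list(expand(best))
        if not children:
            break
        cand = max(children, key=score)
        if score(cand) <= best_score:
            break
        best, best_score = cand, score(cand)
    return best

# Stand-in score: rewards clustering symbols 'M' but penalizes
# structure length, mimicking a fit/complexity trade-off.
def toy_score(s):
    return 2 * s.count("M") - 0.1 * len(s)
```

Because each step evaluates only the children of the current best structure, the search visits a small fraction of the full space, which is how the paper's method avoids scoring all ~2500 structures exhaustively.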
Department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence (2012)
Grosse, Roger B., Ruslan Salakhutdinov, William T. Freeman, and Joshua B. Tenenbaum. "Exploiting Compositionality to Explore a Large Space of Model Structures." In 28th Conference on Uncertainty in Artificial Intelligence (2012), Catalina Island, United States, August 15-17, 2012. AUAI Press, pp. 306-315.
Author's final manuscript