Center for Brains, Minds & Machines

Research and Teaching Output of the MIT Community

The Center for Brains, Minds and Machines (CBMM) is a National Science Foundation-funded Science and Technology Center dedicated to the interdisciplinary study of intelligence. It is a multi-institutional collaboration headquartered at the McGovern Institute for Brain Research at MIT, with Harvard University as a managing partner. Visit the CBMM website for more information.

Recent Submissions

  • Mhaskar, Hrushikesh; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2018-02-20)
    An open problem around deep networks is the apparent absence of overfitting despite large overparametrization, which allows perfect fitting of the training data. In this paper, we explain this phenomenon when each unit ...
  • Cano-Córdoba, Felipe; Sarma, Sanjay; Subirana, Brian (Center for Brains, Minds and Machines (CBMM), 2017-12-05)
    In [42] we suggested that any memory stored in the human/animal brain is forgotten following the Ebbinghaus curve; in this follow-on paper, we define a novel algebraic structure, a Forgetting Neural Network, as a simple ...
  • Hilton, Erwin; Liao, Qianli; Poggio, Tomaso (2017-12-31)
    We introduce SITD (Spatial IQ Test Dataset), a dataset used to evaluate the capabilities of computational models for pattern recognition and visual reasoning. SITD is a generator of images in the style of the Raven Progressive ...
  • Poggio, Tomaso; Kawaguchi, Kenji; Liao, Qianli; Miranda, Brando; Rosasco, Lorenzo; Boix, Xavier; Hidary, Jack; Mhaskar, Hrushikesh (arXiv, 2017-12-30)
    A main puzzle of deep networks revolves around the absence of overfitting despite overparametrization and despite the large capacity demonstrated by zero training error on randomly labeled data. In this note, we show that ...
  • Liao, Qianli; Poggio, Tomaso (2017-12-31)
    We provide a more detailed explanation of the ideas behind a recent paper on “Object-Oriented Deep Learning” [1] and extend it to handle 3D inputs/outputs. Similar to [1], every layer of the system takes in a list of ...