DSpace@MIT
Center for Brains, Minds & Machines: Search Results
Now showing items 1-10 of 11


Musings on Deep Learning: Properties of SGD 

Zhang, Chiyuan; Liao, Qianli; Rakhlin, Alexander; Sridharan, Karthik; Miranda, Brando; e.a. (Center for Brains, Minds and Machines (CBMM), 2017-04-04)
[previously titled "Theory of Deep Learning III: Generalization Properties of SGD"] In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in ...

Symmetry Regularization 

Anselmi, Fabio; Evangelopoulos, Georgios; Rosasco, Lorenzo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2017-05-26)
The properties of a representation, such as smoothness, adaptability, generality, equivariance/invariance, depend on restrictions imposed during learning. In this paper, we propose using data symmetries, in the sense of ...

3D Object-Oriented Learning: An End-to-end Transformation-Disentangled 3D Representation 

Liao, Qianli; Poggio, Tomaso (2017-12-31)
We provide a more detailed explanation of the ideas behind a recent paper on “Object-Oriented Deep Learning” [1] and extend it to handle 3D inputs/outputs. Similar to [1], every layer of the system takes in a list of ...

Theory II: Landscape of the Empirical Risk in Deep Learning 

Poggio, Tomaso; Liao, Qianli (Center for Brains, Minds and Machines (CBMM), arXiv, 2017-03-30)
Previous theoretical work on deep learning and neural network optimization tends to focus on avoiding saddle points and local minima. However, the practical observation is that, at least for the most successful Deep ...

Spatial IQ Test for AI 

Hilton, Erwin; Liao, Qianli; Poggio, Tomaso (2017-12-31)
We introduce SITD (Spatial IQ Test Dataset), a dataset used to evaluate the capabilities of computational models for pattern recognition and visual reasoning. SITD is a generator of images in the style of the Raven Progressive ...

Human-like Learning: A Research Proposal 

Liao, Qianli; Poggio, Tomaso (2017-09-28)
We propose Human-like Learning, a new machine learning paradigm aiming at training generalist AI systems in a human-like manner with a focus on human-unique skills.

Theory of Deep Learning IIb: Optimization Properties of SGD 

Zhang, Chiyuan; Liao, Qianli; Rakhlin, Alexander; Miranda, Brando; Golowich, Noah; e.a. (Center for Brains, Minds and Machines (CBMM), 2017-12-27)
In Theory IIb we characterize with a mix of theory and experiments the optimization of deep convolutional networks by Stochastic Gradient Descent. The main new result in this paper is theoretical and experimental evidence ...

Do Deep Neural Networks Suffer from Crowding? 

Volokitin, Anna; Roig, Gemma; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2017-06-26)
Crowding is a visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called flankers, are placed close to it. In this work, we study the ...

Object-Oriented Deep Learning 

Liao, Qianli; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2017-10-31)
We investigate an unconventional direction of research that aims at converting neural networks, a class of distributed, connectionist, sub-symbolic models, into a symbolic level with the ultimate goal of achieving AI ...

Exact Equivariance, Disentanglement and Invariance of Transformations 

Liao, Qianli; Poggio, Tomaso (2017-12-31)
Invariance, equivariance and disentanglement of transformations are important topics in the field of representation learning. Previous models like Variational Autoencoder [1] and Generative Adversarial Networks [2] attempted ...