dc.contributor.author | Poggio, Tomaso | |
dc.date.accessioned | 2021-01-13T16:06:50Z | |
dc.date.available | 2021-01-13T16:06:50Z | |
dc.date.issued | 2021-01-12 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/129402 | |
dc.description.abstract | About fifty years ago, holography was proposed as a model of associative memory. Associative memories with similar properties were soon after implemented as simple networks of threshold neurons by Willshaw and Longuet-Higgins. In these pages I will show that today’s deep nets are an incremental improvement of the original associative networks. Thinking about deep learning in terms of associative networks provides a more realistic and sober perspective on the promises of deep learning and on its role in eventually understanding human intelligence. As a bonus, this discussion also uncovers connections with several interesting topics in applied math: random features, random projections, neural ensembles, randomized kernels, memory and generalization, vector quantization and hierarchical vector quantization, random vectors and orthogonal bases, the neural tangent kernel (NTK) and radial kernels. | en_US |
dc.description.sponsorship | This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. | en_US |
dc.publisher | Center for Brains, Minds and Machines (CBMM) | en_US |
dc.relation.ispartofseries | CBMM Memo;114 | |
dc.title | From Associative Memories to Deep Networks | en_US |
dc.type | Technical Report | en_US |
dc.type | Working Paper | en_US |
dc.type | Other | en_US |