Show simple item record

dc.contributor.author: Liao, Qianli
dc.contributor.author: Ziyin, Liu
dc.contributor.author: Gan, Yulu
dc.contributor.author: Cheung, Brian
dc.contributor.author: Harnett, Mark
dc.contributor.author: Poggio, Tomaso
dc.date.accessioned: 2024-12-29T21:05:15Z
dc.date.available: 2024-12-29T21:05:15Z
dc.date.issued: 2024-12-28
dc.identifier.uri: https://hdl.handle.net/1721.1/157934
dc.description.abstract: Over the last four decades, the amazing success of deep learning has been driven by the use of Stochastic Gradient Descent (SGD) as the main optimization technique. The default implementation for the computation of the gradient for SGD is backpropagation, which, with its variations, is used to this day in almost all computer implementations. From the perspective of neuroscientists, however, the consensus is that backpropagation is unlikely to be used by the brain. Though several alternatives have been discussed, none is so far supported by experimental evidence. Here we propose a circuit for updating the weights in a network that is biologically plausible, works as well as backpropagation, and leads to verifiable predictions about the anatomy and the physiology of a characteristic motif of four plastic synapses between ascending and descending cortical streams. A key prediction of our proposal is a surprising property of self-assembly of the basic circuit, emerging from initial random connectivity and heterosynaptic plasticity rules. (en_US)
dc.description.sponsorship: This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. (en_US)
dc.publisher: Center for Brains, Minds and Machines (CBMM) (en_US)
dc.relation.ispartofseries: CBMM Memo;152
dc.title: Self-Assembly of a Biologically Plausible Learning Circuit (en_US)
dc.type: Article (en_US)
dc.type: Technical Report (en_US)
dc.type: Working Paper (en_US)


