dc.contributor.author: Wayne, Greg
dc.contributor.author: Kording, Konrad P.
dc.contributor.author: Marblestone, Adam Henry
dc.date.accessioned: 2017-02-21T19:26:38Z
dc.date.available: 2017-02-21T19:26:38Z
dc.date.issued: 2016-09
dc.date.submitted: 2016-06
dc.identifier.issn: 1662-5188
dc.identifier.uri: http://hdl.handle.net/1721.1/107008
dc.description.abstract: Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses. [en_US]
dc.description.sponsorship: National Institutes of Health (U.S.) (Grant R01MH103910) [en_US]
dc.language.iso: en_US
dc.publisher: Frontiers Research Foundation [en_US]
dc.relation.isversionof: http://dx.doi.org/10.3389/fncom.2016.00094 [en_US]
dc.rights: Creative Commons Attribution 4.0 International License [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/ [en_US]
dc.source: Frontiers [en_US]
dc.title: Toward an Integration of Deep Learning and Neuroscience [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Marblestone, Adam H., Greg Wayne, and Konrad P. Kording. “Toward an Integration of Deep Learning and Neuroscience.” Frontiers in Computational Neuroscience 10 (2016): n. pag. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory [en_US]
dc.contributor.mitauthor: Marblestone, Adam Henry
dc.relation.journal: Frontiers in Computational Neuroscience [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dspace.orderedauthors: Marblestone, Adam H.; Wayne, Greg; Kording, Konrad P. [en_US]
dspace.embargo.terms: N [en_US]
mit.license: PUBLISHER_CC [en_US]

