Show simple item record

dc.contributor.author: Pinto, Nicolas
dc.contributor.author: Doukhan, David
dc.contributor.author: DiCarlo, James
dc.contributor.author: Cox, David D.
dc.date.accessioned: 2010-03-09T19:04:15Z
dc.date.available: 2010-03-09T19:04:15Z
dc.date.issued: 2009-11
dc.date.submitted: 2009-06
dc.identifier.issn: 1553-7358
dc.identifier.uri: http://hdl.handle.net/1721.1/52429
dc.description.abstract: While many models of biological object recognition share a common set of “broad-stroke” properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model—e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct “parts” have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphics cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision.
dc.language.iso: en_US
dc.publisher: Public Library of Science
dc.relation.isversionof: http://dx.doi.org/10.1371/journal.pcbi.1000579
dc.rights: Creative Commons Attribution
dc.rights.uri: http://creativecommons.org/licenses/by/2.5/
dc.source: PLoS
dc.title: A high-throughput screening approach to discovering good forms of biologically inspired visual representation
dc.type: Article
dc.identifier.citation: Pinto N, Doukhan D, DiCarlo JJ, Cox DD (2009) A High-Throughput Screening Approach to Discovering Good Forms of Biologically Inspired Visual Representation. PLoS Comput Biol 5(11): e1000579. doi:10.1371/journal.pcbi.1000579
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.department: McGovern Institute for Brain Research at MIT
dc.contributor.approver: DiCarlo, James
dc.contributor.mitauthor: Pinto, Nicolas
dc.contributor.mitauthor: Doukhan, David
dc.contributor.mitauthor: DiCarlo, James
dc.contributor.mitauthor: Cox, David D.
dc.relation.journal: PLoS Computational Biology
dc.eprint.version: Final published version
dc.identifier.pmid: 19956750
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dspace.orderedauthors: Pinto, Nicolas; Doukhan, David; DiCarlo, James J.; Cox, David D.
dc.identifier.orcid: https://orcid.org/0000-0002-1592-5896
dc.identifier.orcid: https://orcid.org/0000-0002-2189-9743
mit.license: PUBLISHER_CC
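The abstract describes screening thousands of randomly drawn model instantiations and keeping the best performers for further analysis. A minimal sketch of that screening loop is below; the parameter names (units per layer, pooling size, normalization exponent) echo the examples in the abstract but are illustrative, and the scoring function is a synthetic stand-in for the object-recognition evaluation the paper runs on GPU/Cell hardware.

```python
import random

# Hypothetical parameter space for a multi-layer visual model.
# These names and ranges are illustrative, not the paper's actual space.
PARAM_SPACE = {
    "units_per_layer": [16, 32, 64, 128, 256],
    "pool_size": [3, 5, 7, 9],
    "norm_exponent": [0.5, 1.0, 1.5, 2.0],
    "n_layers": [1, 2, 3],
}

def sample_candidate(rng):
    """Draw one random model instantiation from the parameter space."""
    return {name: rng.choice(values) for name, values in PARAM_SPACE.items()}

def evaluate(params):
    """Stand-in for measuring a candidate's object-recognition performance.

    In the paper each candidate is instantiated and tested on recognition
    tasks; here a cheap synthetic score keeps the sketch self-contained.
    """
    return (params["units_per_layer"] / 256
            + 1.0 / params["pool_size"]
            + 0.1 * params["n_layers"])

def screen(n_candidates=1000, top_k=5, seed=0):
    """Sample many instantiations, score each, and keep the top performers."""
    rng = random.Random(seed)
    scored = [(evaluate(p), p)
              for p in (sample_candidate(rng) for _ in range(n_candidates))]
    scored.sort(key=lambda sp: sp[0], reverse=True)  # best score first
    return scored[:top_k]

best = screen()
```

The winners in `best` would then be the candidates promoted to fuller analysis; in the paper's setting the expensive `evaluate` step is what the stream-processing hardware accelerates.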

