Show simple item record

dc.contributor.author: Bashivan, Pouya
dc.contributor.author: Tensen, Mark
dc.contributor.author: Dicarlo, James
dc.date.accessioned: 2021-10-27T20:29:26Z
dc.date.available: 2021-10-27T20:29:26Z
dc.date.issued: 2019
dc.identifier.uri: https://hdl.handle.net/1721.1/135814
dc.description.abstract: © 2019 IEEE. Much of the recent improvement in neural networks for computer vision has resulted from the discovery of new network architectures. Most prior work has used the performance of candidate models, measured after limited training, to guide the search automatically while keeping it computationally feasible. Could further gains in computational efficiency be achieved by guiding the search via measurements of a high-performing network with unknown detailed architecture (e.g. the primate visual system)? As one step toward this goal, we use representational similarity analysis to evaluate the similarity of the internal activations of candidate networks with those of a (fixed, high-performing) teacher network. We show that adopting this evaluation metric can yield up to an order-of-magnitude gain in search efficiency over performance-guided methods. Our approach finds a convolutional cell structure with performance similar to that previously found using other methods, but at a total computational cost that is two orders of magnitude lower than Neural Architecture Search (NAS) and more than four times lower than Progressive Neural Architecture Search (PNAS). We further show that measurements from only ∼300 neurons in the primate visual system provide enough signal to find a network with an ImageNet top-1 error that is significantly lower than that achieved by performance-guided architecture search alone. These results suggest that representational matching can be used to accelerate network architecture search in cases where one has access to some or all of the internal representations of a teacher network of interest, such as the brain's sensory processing networks.
dc.language.iso: en
dc.publisher: IEEE
dc.relation.isversionof: 10.1109/ICCV.2019.00542
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: arXiv
dc.title: Teacher Guided Architecture Search
dc.type: Article
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.department: McGovern Institute for Brain Research at MIT
dc.relation.journal: Proceedings of the IEEE International Conference on Computer Vision
dc.eprint.version: Original manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2021-04-15T17:44:32Z
dspace.orderedauthors: Bashivan, P; Tensen, M; Dicarlo, J
dspace.date.submission: 2021-04-15T17:44:34Z
mit.journal.volume: 2019-October
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed
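The abstract's core idea, scoring candidate networks by the representational similarity of their internal activations to a teacher's, can be illustrated with a minimal representational similarity analysis (RSA) sketch. This is not the paper's implementation; the function names, the use of Pearson-correlation dissimilarity matrices, and the toy data shapes are all assumptions for illustration.

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between the activation patterns of every pair of
    stimuli. `activations` has shape (n_stimuli, n_features)."""
    return 1.0 - np.corrcoef(activations)

def representational_similarity(acts_a, acts_b):
    """Score how similarly two systems represent the same stimuli:
    correlate the upper triangles of their RDMs. The two activation
    matrices may have different feature counts, but must share the
    same set of stimuli (rows)."""
    iu = np.triu_indices(acts_a.shape[0], k=1)
    return np.corrcoef(rdm(acts_a)[iu], rdm(acts_b)[iu])[0, 1]

# Toy example: a "teacher" with ~300 recorded neurons (as in the
# abstract) and a noisy "candidate" that largely matches it.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(20, 300))                    # 20 stimuli
student = teacher + 0.1 * rng.normal(size=teacher.shape)
self_score = representational_similarity(teacher, teacher)   # exactly 1
cand_score = representational_similarity(teacher, student)   # high, < 1
```

In a search loop, `cand_score` would replace (or supplement) validation accuracy as the fitness signal for each candidate architecture, which is what lets the search proceed with far less training per candidate.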

