
dc.contributor.author: Gan, Yulu
dc.contributor.author: Poggio, Tomaso
dc.date.accessioned: 2024-07-15T16:01:53Z
dc.date.available: 2024-07-15T16:01:53Z
dc.date.issued: 2024-07-13
dc.identifier.uri: https://hdl.handle.net/1721.1/155675
dc.description.abstract: The Average Gradient Outer Product (AGOP) provides a novel approach to feature learning in neural networks. We applied both AGOP and gradient descent to learn the matrix M in the Hyper Basis Function Network (HyperBF) and observed very similar performance. We show formally that AGOP is a greedy approximation of gradient descent. [en_US]
dc.description.sponsorship: This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. [en_US]
dc.publisher: Center for Brains, Minds and Machines (CBMM) [en_US]
dc.relation.ispartofseries: CBMM Memo;148
dc.title: For HyperBFs AGOP is a greedy approximation to gradient descent [en_US]
dc.type: Article [en_US]
dc.type: Technical Report [en_US]
dc.type: Other [en_US]
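
Note: the abstract does not spell out the AGOP formula. As a point of reference, the AGOP of a predictor f over inputs x_1, ..., x_n is commonly defined as the average outer product of input gradients, (1/n) * sum_i grad f(x_i) grad f(x_i)^T, a d x d matrix that plays the role of the feature matrix M mentioned in the abstract. The sketch below is a minimal illustration of that computation under this assumed definition; the agop helper and the toy radial-basis predictor are hypothetical and are not the memo's HyperBF implementation or code.

    # Minimal sketch (not the memo's code): computing the Average Gradient
    # Outer Product (AGOP) of a scalar predictor f with torch autograd.
    import torch

    def agop(f, X):
        """Return (1/n) * sum_i grad f(x_i) grad f(x_i)^T  (a d x d matrix)."""
        n, d = X.shape
        M = torch.zeros(d, d)
        for x in X:
            x = x.clone().requires_grad_(True)
            y = f(x)
            (g,) = torch.autograd.grad(y, x)  # gradient of f at x, shape (d,)
            M += torch.outer(g, g)            # accumulate the outer product
        return M / n

    # Toy predictor: a fixed Gaussian-RBF expansion (illustrative only).
    centers = torch.randn(5, 3)
    coeffs = torch.randn(5)

    def f(x):
        return (coeffs * torch.exp(-((x - centers) ** 2).sum(dim=1))).sum()

    X = torch.randn(100, 3)
    M = agop(f, X)   # candidate d x d feature matrix, here 3 x 3
    print(M.shape)   # torch.Size([3, 3])
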

