dc.contributor.author | Gan, Yulu | |
dc.contributor.author | Poggio, Tomaso | |
dc.date.accessioned | 2024-07-15T16:01:53Z | |
dc.date.available | 2024-07-15T16:01:53Z | |
dc.date.issued | 2024-07-13 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/155675 | |
dc.description.abstract | The Average Gradient Outer Product (AGOP) provides a novel approach to feature learning in neural networks. We applied both AGOP and gradient descent to learn the matrix M in Hyper Basis Function networks (HyperBF) and observed very similar performance. We show formally that AGOP is a greedy approximation to gradient descent. | en_US |
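For readers unfamiliar with the quantity named in the abstract, the following is a minimal sketch of how an AGOP is computed for an RBF-style predictor of the HyperBF form f(x) = sum_k c_k exp(-(x - t_k)^T M (x - t_k)). The model form, function names, and variable names here are illustrative assumptions, not the authors' code; the memo itself should be consulted for the exact formulation.

```python
# Hypothetical sketch of the Average Gradient Outer Product (AGOP):
# AGOP = (1/n) * sum_i grad_x f(x_i) grad_x f(x_i)^T,
# for an assumed HyperBF-style model f(x) = sum_k c_k exp(-(x-t_k)^T M (x-t_k)).
import numpy as np

def rbf_predict(X, centers, coeffs, M):
    # f(x) = sum_k c_k * exp(-(x - t_k)^T M (x - t_k))
    diffs = X[:, None, :] - centers[None, :, :]            # (n, K, d)
    quad = np.einsum('nkd,de,nke->nk', diffs, M, diffs)    # Mahalanobis terms
    return np.exp(-quad) @ coeffs                          # (n,)

def input_gradients(X, centers, coeffs, M):
    # Analytic gradient of f with respect to the input x, per sample:
    # grad_x f(x) = -sum_k c_k exp(-(x-t_k)^T M (x-t_k)) (M + M^T)(x - t_k)
    diffs = X[:, None, :] - centers[None, :, :]            # (n, K, d)
    quad = np.einsum('nkd,de,nke->nk', diffs, M, diffs)
    w = np.exp(-quad) * coeffs[None, :]                    # (n, K)
    Msym = M + M.T
    return -np.einsum('nk,nkd->nd', w, diffs @ Msym.T)     # (n, d)

def agop(X, centers, coeffs, M):
    # Average of the outer products grad f(x_i) grad f(x_i)^T over the data.
    G = input_gradients(X, centers, coeffs, M)
    return G.T @ G / X.shape[0]                            # (d, d)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
centers = rng.normal(size=(10, 5))
coeffs = rng.normal(size=10)
M0 = np.eye(5)
print(agop(X, centers, coeffs, M0).shape)  # (5, 5)
```

In AGOP-based feature-learning schemes, M is typically updated by replacing it with (a normalization of) the AGOP computed under the current model and iterating to convergence; the memo's formal result compares this update to learning M directly by gradient descent.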
dc.description.sponsorship | This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. | en_US |
dc.publisher | Center for Brains, Minds and Machines (CBMM) | en_US |
dc.relation.ispartofseries | CBMM Memo;148 | |
dc.title | For HyperBFs AGOP is a greedy approximation to gradient descent | en_US |
dc.type | Article | en_US |
dc.type | Technical Report | en_US |
dc.type | Other | en_US |