Towards More Generalizable Neural Networks via Modularity
Author(s)
Boopathy, Akhilan
Advisor
Fiete, Ila
Abstract
Artificial neural networks have become highly effective at performing specific, challenging tasks by leveraging large amounts of training data. However, they cannot generalize to diverse, unseen domains without significant retraining. This thesis quantifies the generalization difficulty of a task as the information content of the inductive biases required to solve it, and demonstrates that generalization difficulty depends crucially on the number of dimensions of generalization. Inspired by the modularity of biological learning systems, the thesis then demonstrates theoretically and empirically that modularity promotes generalization by providing a powerful inductive bias. Finally, the thesis proposes a new, challenging spatial navigation benchmark that requires a broad degree of generalization from a small amount of training data. This benchmark is presented as a test of the generalization capability of learning algorithms; based on the results of this thesis, modularity is expected to promote generalization on this benchmark.
Date issued
2022-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology