Benefits of branches in sparsely connected networks
Author(s)
Landry, Madison
Advisor
Warde, Cardinal
Abstract
Artificial neural networks are most commonly implemented in computer software; however, demands for real-time processing and energy efficiency call for faster, lower-power alternatives. Neuromorphic engineering promises both speed and energy efficiency, yet such devices can have unique constraints that make them difficult to train. Motivated by optoelectronic devices, a class of optics-based neuromorphic hardware exemplified by the COIN coprocessor, this thesis explores branched connection networks (BCNs), a kind of neural network in which directed connections may make additional branching connections. It focuses on effective approaches for training sparse BCNs from the bottom up and investigates the efficacy of weight perturbation for recovering sparse BCNs from faults. On image classification tasks (MNIST and FashionMNIST), branching was found to benefit sparse BCNs in both performance and ability to recover from faults. A notion of "output connectedness", useful for analyzing sparse networks, is defined. The work concludes with rules of thumb to guide the future development of these optoelectronic devices.
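Weight perturbation, the method the abstract names, trains a network using only forward passes: each weight is nudged slightly and the resulting change in loss yields a finite-difference gradient estimate, which is why it suits hardware where backpropagation is impractical. The following is a minimal sketch of that general technique on a toy masked (sparse) linear model; the mask density, toy task, and hyperparameters (delta, lr) are illustrative assumptions, not details from the thesis.

```python
# Minimal sketch of weight perturbation on a sparse (masked) linear model.
# Illustrative only: the mask, task, and hyperparameters are assumptions,
# not the thesis's setup. Only forward passes are used.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = X @ w_true, learned through a sparse mask.
n_in, n_out, n_samples = 8, 2, 64
X = rng.normal(size=(n_samples, n_in))
w_true = rng.normal(size=(n_in, n_out))
y = X @ w_true

mask = rng.random((n_in, n_out)) < 0.4   # ~40% of connections exist
w = rng.normal(scale=0.1, size=(n_in, n_out)) * mask

def loss(weights):
    """Mean squared error of the masked linear model."""
    return np.mean((X @ (weights * mask) - y) ** 2)

delta, lr = 1e-4, 0.05
for step in range(200):
    base = loss(w)
    grad = np.zeros_like(w)
    # Perturb one existing connection at a time and measure the change
    # in loss; this finite difference approximates the gradient.
    for i, j in zip(*np.nonzero(mask)):
        w[i, j] += delta
        grad[i, j] = (loss(w) - base) / delta
        w[i, j] -= delta
    w -= lr * grad

print(f"final loss: {loss(w):.6f}")
```

Per-weight perturbation costs one forward pass per connection per step; stochastic variants perturb all weights simultaneously to cut that cost, at the price of a noisier gradient estimate.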
Date issued
2022-02
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology