dc.contributor.advisor | Solar-Lezama, Armando | |
dc.contributor.author | Inala, Jeevana Priya | |
dc.date.accessioned | 2022-06-15T13:07:14Z | |
dc.date.available | 2022-06-15T13:07:14Z | |
dc.date.issued | 2022-02 | |
dc.date.submitted | 2022-03-04T20:48:02.548Z | |
dc.identifier.uri | https://hdl.handle.net/1721.1/143249 | |
dc.description.abstract | This thesis shows that viewing intelligent systems through the lens of neurosymbolic models has several benefits over traditional deep learning approaches. Neurosymbolic models combine symbolic programmatic constructs, such as loops and conditionals, with continuous neural components. The symbolic part makes the model interpretable, generalizable, and robust, while the neural part handles the complexity of the intelligent system. Concretely, this thesis presents two classes of neurosymbolic models, state machines and neurosymbolic transformers, and evaluates them on two case studies: reinforcement-learning-based autonomous systems and multi-robot systems. These case studies show that the learned neurosymbolic models are human-readable, extrapolate to unseen scenarios, and can handle robust objectives in the specification. To learn these neurosymbolic models efficiently, we introduce neurosymbolic learning algorithms that leverage the latest techniques from machine learning and program synthesis. | |
dc.publisher | Massachusetts Institute of Technology | |
dc.rights | In Copyright - Educational Use Permitted | |
dc.rights | Copyright MIT | |
dc.rights.uri | http://rightsstatements.org/page/InC-EDU/1.0/ | |
dc.title | Neurosymbolic Learning for Robust and Reliable Intelligent Systems | |
dc.type | Thesis | |
dc.description.degree | Ph.D. | |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.identifier.orcid | https://orcid.org/0000-0003-1843-589X | |
mit.thesis.degree | Doctoral | |
thesis.degree.name | Doctor of Philosophy | |