Designing interpretable molecular property predictors
Author(s)
Buduma, Nithin.
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Tommi S. Jaakkola.
Abstract
Complex neural models often suffer from a lack of interpretability: they provide no way to justify their predictions. For example, while there have been many performance improvements in molecular property prediction, these advances have come in the form of black-box models. As deep learning and chemistry become increasingly intertwined, it is imperative that we continue to investigate the interpretability of the associated models. We propose a method to augment property predictors with extractive rationalization, in which the model selects a subset of the input, or rationale, that it believes to be most relevant to the property of interest. These rationales serve as the model's explanations for its decisions. We show that our methodology can generate reasonable rationales while maintaining predictive performance, and we propose some future directions.
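To make the idea concrete, below is a minimal, hypothetical sketch of extractive rationalization for property prediction. It is not the thesis's actual architecture: it assumes a molecule is given as per-atom feature vectors, a selector network scores each atom, a hard top-k mask keeps the rationale, and the property is predicted from the selected atoms only. All names and hyperparameters (RationalePredictor, atom_dim, k) are illustrative.

```python
# Hypothetical sketch of extractive rationalization, NOT the thesis's model.
# A selector scores atoms; a hard top-k mask picks the rationale; the
# predictor sees only the selected atoms' features.
import torch
import torch.nn as nn


class RationalePredictor(nn.Module):
    def __init__(self, atom_dim: int, hidden: int = 64, k: int = 5):
        super().__init__()
        self.k = k  # number of atoms kept as the rationale
        self.selector = nn.Sequential(  # scores each atom's relevance
            nn.Linear(atom_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        self.predictor = nn.Sequential(  # predicts the property from the rationale
            nn.Linear(atom_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, atoms: torch.Tensor):
        # atoms: (num_atoms, atom_dim) feature vectors for one molecule
        scores = self.selector(atoms).squeeze(-1)      # (num_atoms,)
        k = min(self.k, atoms.size(0))
        topk = scores.topk(k).indices                  # hard top-k selection
        mask = torch.zeros_like(scores).scatter(0, topk, 1.0)
        # Straight-through estimator: hard mask in the forward pass,
        # soft sigmoid gradient in the backward pass.
        soft = torch.sigmoid(scores)
        mask = mask + soft - soft.detach()
        rationale = atoms * mask.unsqueeze(-1)         # zero out unselected atoms
        pooled = rationale.sum(0) / k                  # mean over selected atoms
        return self.predictor(pooled), mask


# Usage: a random "molecule" with 12 atoms and 16-dim atom features.
model = RationalePredictor(atom_dim=16)
pred, mask = model(torch.randn(12, 16))
print(pred.item(), mask.detach().round())  # prediction and the atom rationale
```

The returned mask indicates which atoms the model treats as its rationale, so the explanation is read directly off the selection rather than computed post hoc; training such a selector end to end typically requires a relaxation like the straight-through trick used above.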
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020. Cataloged from the official PDF of thesis. Includes bibliographical references (pages 47-48).
Date issued
2020
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.