Interpreting neural network judgments via minimal, stable, and symbolic corrections
Author(s)
Solar Lezama, Armando; Singh, Rishabh; Zhang, Xin
Terms of use
Publisher Policy: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
© 2018 Curran Associates Inc. All rights reserved. We present a new algorithm to generate minimal, stable, and symbolic corrections to an input that will cause a neural network with ReLU activations to change its output. We argue that such a correction is a useful way to provide feedback to a user when the network's output is different from a desired output. Our algorithm generates such a correction by solving a series of linear constraint satisfaction problems. The technique is evaluated on three neural network models: one predicting whether an applicant will pay a mortgage, one predicting whether a first-order theorem can be proved efficiently by a solver using certain heuristics, and the final one judging whether a drawing is an accurate rendition of a canonical drawing of a cat.
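As a rough illustration only (not the authors' implementation), the core observation can be sketched as follows: once the ReLU activation pattern is fixed at the current input, the network is affine on a polyhedral region of input space, so searching within that region for a small change ("correction") that flips the output reduces to a linear program. The toy one-hidden-layer network, the weights, the margin, and the scipy-based LP encoding below are all illustrative assumptions, not the paper's models or code.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    # Toy one-hidden-layer ReLU classifier: score(x) = w2 @ relu(W1 @ x + b1) + b2.
    d, h = 4, 6
    W1, b1 = rng.normal(size=(h, d)), rng.normal(size=h)
    w2, b2 = rng.normal(size=h), rng.normal()

    def score(x):
        return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

    x0 = rng.normal(size=d)            # the input whose judgment we want to flip
    s0 = score(x0)
    flip = -1.0 if s0 >= 0 else 1.0    # sign the corrected score should take
    margin = 0.1

    # Fix the ReLU activation pattern at x0; inside this region the network is
    # affine, score(x) = a @ x + const, and the region itself is a polyhedron.
    act = (W1 @ x0 + b1 > 0).astype(float)
    a = (w2 * act) @ W1

    # LP variables z = [delta (d), t (d)]; minimize sum(t) subject to |delta_i| <= t_i.
    cost = np.concatenate([np.zeros(d), np.ones(d)])
    A_ub, b_ub = [], []

    # Output-flip constraint: flip * score(x0 + delta) >= margin, rewritten as
    # -flip * (a @ delta) <= flip * s0 - margin.
    A_ub.append(np.concatenate([-flip * a, np.zeros(d)]))
    b_ub.append(flip * s0 - margin)

    # Stay inside the same activation region: each hidden unit keeps its sign.
    for j in range(h):
        s = 1.0 if act[j] else -1.0
        A_ub.append(np.concatenate([-s * W1[j], np.zeros(d)]))
        b_ub.append(s * (W1[j] @ x0 + b1[j]))

    # Encode |delta_i| <= t_i as two linear inequalities.
    for i in range(d):
        row = np.zeros(2 * d); row[i], row[d + i] = 1.0, -1.0
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(2 * d); row[i], row[d + i] = -1.0, -1.0
        A_ub.append(row); b_ub.append(0.0)

    res = linprog(cost, A_ub=np.vstack(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * d + [(0, None)] * d, method="highs")
    if res.success:
        delta = res.x[:d]
        print("original score:", round(s0, 3))
        print("correction:", np.round(delta, 3))
        print("corrected score:", round(score(x0 + delta), 3))
    else:
        print("no correction exists inside this activation region")

The paper's algorithm goes further than this sketch, for example by producing symbolic (region-valued) corrections that remain stable under perturbation and by searching across activation regions, but the reduction of each step to a linear constraint satisfaction problem is the reason the approach scales to the mortgage, theorem-proving, and drawing models mentioned above.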
Date issued
2018
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Citation
Solar Lezama, Armando, Singh, Rishabh and Zhang, Xin. 2018. "Interpreting neural network judgments via minimal, stable, and symbolic corrections."
Version: Final published version