Rationalizing Neural Predictions
Author(s)
Lei, Tao; Barzilay, Regina; Jaakkola, Tommi S
Abstract
Prediction without justification has limited applicability. As a remedy, we learn to extract pieces of the input text as justifications – rationales – that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, a generator and an encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales, and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by desiderata for rationales. We evaluate the approach on multi-aspect sentiment analysis against manually annotated test cases. Our approach outperforms an attention-based baseline by a significant margin. We also successfully illustrate the method on a question retrieval task.
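To make the abstract's setup concrete, below is a minimal PyTorch sketch of a generator-encoder pair with the two rationale regularizers (shortness and coherence). All module choices, layer sizes, and the lambda weights here are illustrative assumptions, not the authors' implementation; the paper trains the generator with a sampled-gradient (REINFORCE-style) estimator because the rationale mask is discrete, which the sketch mirrors.

```python
# Hypothetical sketch of the generator-encoder framework from the abstract.
# Layer choices and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Assigns each input token a probability of being selected as rationale."""
    def __init__(self, vocab_size, emb_dim=100, hid_dim=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hid_dim, 1)

    def forward(self, tokens):                          # tokens: (batch, seq)
        h, _ = self.rnn(self.emb(tokens))               # (batch, seq, 2*hid)
        return torch.sigmoid(self.out(h)).squeeze(-1)   # selection probabilities

class Encoder(nn.Module):
    """Predicts the target (e.g. an aspect sentiment score) from selected words."""
    def __init__(self, vocab_size, emb_dim=100, hid_dim=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, 1)

    def forward(self, tokens, mask):                    # mask: (batch, seq) in {0,1}
        x = self.emb(tokens) * mask.unsqueeze(-1)       # zero out unselected words
        h, _ = self.rnn(x)
        return self.out(h[:, -1]).squeeze(-1)

def loss_fn(pred, target, mask, probs, lam1=0.01, lam2=0.01):
    # Squared prediction error plus the two rationale desiderata:
    # lam1 penalizes rationale length, lam2 penalizes non-contiguous selections.
    mse = (pred - target) ** 2
    sparsity = mask.sum(dim=1)
    coherence = (mask[:, 1:] - mask[:, :-1]).abs().sum(dim=1)
    cost = mse + lam1 * sparsity + lam2 * coherence
    # The mask is sampled, so the generator receives a REINFORCE-style
    # gradient: the (detached) cost weights the mask's log-probability.
    logp = (mask * torch.log(probs + 1e-8)
            + (1 - mask) * torch.log(1 - probs + 1e-8)).sum(dim=1)
    return (cost.detach() * logp + cost).mean()

# Illustrative usage: sample a binary rationale mask from the generator,
# then score only the selected words with the encoder.
#   probs = gen(tokens); mask = torch.bernoulli(probs)
#   pred = enc(tokens, mask); loss = loss_fn(pred, target, mask, probs)
```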
Date issued
2016-11
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Publisher
Association for Computational Linguistics (ACL)
Citation
Lei, Tao et al. "Rationalizing Neural Predictions." Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, November 2016. Association for Computational Linguistics. © 2016 The Association for Computational Linguistics
Version: Author's final manuscript
ISBN
978-1-945626-25-8