Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control
Author(s)
Yu, Mo; Chang, Shiyu; Zhang, Yang; Jaakkola, Tommi S
Terms of use
Open Access Policy; Creative Commons Attribution-Noncommercial-Share Alike
Abstract
Selective rationalization has become a common mechanism to ensure that predictive models reveal how they use any available features. The selection may be soft or hard, and identifies a subset of input features relevant for prediction. The setup can be viewed as a cooperative game between the selector (aka rationale generator) and the predictor making use of only the selected features. The cooperative setting may, however, be compromised for two reasons. First, the generator typically has no direct access to the outcome it aims to justify, resulting in poor performance. Second, there is typically no control exerted on the information left outside the selection. We revise the overall cooperative framework to address these challenges. We introduce an introspective model which explicitly predicts and incorporates the outcome into the selection process. Moreover, we explicitly control the rationale complement via an adversary so as not to leave any useful information out of the selection. We show that the two complementary mechanisms both maintain high predictive accuracy and lead to comprehensive rationales.
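The following is a minimal sketch, in PyTorch, of the three-player setup the abstract describes: an introspective generator that conditions its selection on its own label guess, a predictor that reads only the selected tokens, and a complement predictor (adversary) that tries to classify from the unselected tokens. It is not the authors' released code; module names, sizes, the straight-through mask estimator, and the loss weights are illustrative assumptions.

# Hypothetical sketch of the introspective generator / predictor / complement-
# predictor setup described in the abstract. Names and hyperparameters are
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class IntrospectiveGenerator(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Introspection head: a label guess that conditions the selection.
        self.label_head = nn.Linear(2 * hidden_dim, num_classes)
        # Selection head sees token states plus an embedding of the guessed label.
        self.label_embed = nn.Embedding(num_classes, 2 * hidden_dim)
        self.select_head = nn.Linear(4 * hidden_dim, 1)

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))            # (B, T, 2H)
        label_logits = self.label_head(h.mean(dim=1))      # introspective label guess
        guess = label_logits.argmax(dim=-1)
        guess_emb = self.label_embed(guess).unsqueeze(1).expand_as(h)
        scores = self.select_head(torch.cat([h, guess_emb], dim=-1)).squeeze(-1)
        # Hard {0,1} rationale mask with a straight-through estimator for gradients.
        probs = torch.sigmoid(scores)
        mask = (probs > 0.5).float() + probs - probs.detach()
        return mask, label_logits

class MaskedPredictor(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens, mask):
        x = self.embed(tokens) * mask.unsqueeze(-1)        # zero out unselected tokens
        h, _ = self.encoder(x)
        return self.out(h.mean(dim=1))

def three_player_losses(tokens, labels, gen, pred, comp_pred, ce=nn.CrossEntropyLoss()):
    mask, guess_logits = gen(tokens)
    rationale_logits = pred(tokens, mask)              # predictor sees the rationale
    complement_logits = comp_pred(tokens, 1 - mask)    # adversary sees the complement
    loss_pred = ce(rationale_logits, labels)
    loss_guess = ce(guess_logits, labels)
    loss_comp = ce(complement_logits, labels)
    sparsity = mask.mean()
    # Generator is rewarded when the complement alone cannot predict the label,
    # while keeping the rationale predictive, the guess accurate, and the mask sparse.
    gen_loss = loss_pred + loss_guess - loss_comp + 0.1 * sparsity
    return gen_loss, loss_pred, loss_comp

In training, the complement predictor would be updated to minimize loss_comp while the generator and predictor minimize gen_loss and loss_pred, typically by alternating updates; the paper's own optimization and gradient-estimation choices may differ from this sketch.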
Date issued
2019-11
Department
MIT-IBM Watson AI Lab; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing
Publisher
Association for Computational Linguistics
Citation
Yu, Mo et al. "Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control." 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, November 2019, Hong Kong, China, Association for Computational Linguistics, 2019. © 2019 Association for Computational Linguistics
Version: Author's final manuscript