Towards Interpretable Explanations for Transfer Learning in Sequential Tasks
Author(s)
Ramakrishnan, Ramya; Shah, Julie A
Download
AAAI-SSS16_FinalPaper.pdf (195.5 KB)
Terms of use
Open Access Policy: Creative Commons Attribution-Noncommercial-Share Alike
Abstract
People increasingly rely on machine learning (ML) to make intelligent decisions. However, ML results are often difficult to interpret, and the algorithms do not support interaction through which a user can solicit clarification or explanation. In this paper, we highlight an emerging research area: interpretable explanations for transfer learning in sequential tasks, in which an agent must explain how it learns a new task given prior, common knowledge. The goal is to enhance a user's ability to trust and use system output and to enable iterative feedback for improving the system. We review prior work in probabilistic systems, sequential decision-making, interpretable explanations, transfer learning, and interactive machine learning, and identify an intersection that deserves further research focus. We believe that developing adaptive, transparent learning models will lay the foundation for better human-machine systems in applications such as elder care, education, and health care.
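To make the setting concrete, below is a minimal illustrative sketch, not taken from the paper, of transfer in a sequential task: tabular Q-learning on a hypothetical 1-D chain MDP, where a Q-table learned on a short source chain warm-starts learning on a longer target chain. The `q_learning` function, the chain environment, and the 0.05 reuse threshold are all assumptions made for illustration; the final report of which source estimates survived unchanged stands in for the kind of inspectable, human-readable account of transfer the paper argues for.

```python
import random
from collections import defaultdict

def q_learning(n_states, goal, q=None, episodes=200,
               alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a chain MDP with actions {-1, +1}.
    `q` may be warm-started from a source task (transfer)."""
    q = q if q is not None else defaultdict(float)
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy action selection over the two moves.
            if random.random() < eps:
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)  # clip to chain ends
            r = 1.0 if s2 == goal else -0.01       # step cost, goal bonus
            # Standard Q-learning update toward the bootstrapped target.
            best_next = max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

# Learn a short source chain, then reuse its Q-table as the prior
# for a longer target chain.
source_q = q_learning(n_states=5, goal=4)
prior = dict(source_q)
target_q = q_learning(n_states=8, goal=7, q=defaultdict(float, prior))

# A crude "explanation" of transfer: report which prior estimates
# the target task kept essentially unchanged versus revised.
kept = [k for k in prior if abs(target_q[k] - prior[k]) < 0.05]
print(f"Reused {len(kept)} of {len(prior)} source estimates unchanged.")
```

Tracking which transferred values persist is only one possible explanation signal; richer accounts (e.g., which source trajectories shaped the target policy) would follow the same pattern of exposing the provenance of learned behavior to the user.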
Date issued
2016-03
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
AAAI 2016 Spring Symposium
Publisher
Association for the Advancement of Artificial Intelligence
Citation
Ramakrishnan, Ramya, and Julie A. Shah. "Towards Interpretable Explanations for Transfer Learning in Sequential Tasks." AAAI 2016 Spring Symposium, March 21-23, 2016, Palo Alto, CA.
Version: Author's final manuscript