Explainable AI foundations to support human-robot teaching and learning
Author(s): Booth, Serena Lynn.
Alternative title: Explainable artificial intelligence foundations to support human-robot teaching and learning
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Advisor: Julie A. Shah
Abstract:
In human-to-human communication, the dual interactions of teaching and learning are indispensable to knowledge transfer. Explanations are core to these interactions: in humans, explanations support model comparison and reconciliation. In this thesis, we advocate for building an intuitive 'language of explanations' to enable knowledge transfer between humans and robots. Explanations can take many forms: logical statements, counterfactual realities, saliency maps, diagrams, and visualizations. While all of these explanation forms are potential constituents of a language of explanations, we focus on two candidate forms: propositional logic and level-set examples.

Propositional logic is often assumed to be a shared language between human and machine, and is therefore often proposed as an explanation medium. For propositional logic to meet this expectation, people must be able to interpret and interact with these explanations. We divide the space of propositional theories according to the knowledge compilation map, and we assess whether each form enables human interpretation. We find that humans are remarkably robust when interacting with varied logical forms. However, human interpretation of propositional logic is challenged by negation, both of individual predicates and of logical connectives. Further, while machine computations on logical formulas are invariant to the domain of discourse, human interpretation may not be.

While propositional logic is an expressive medium, it is often insufficient for high-dimensional data. To complement logic in a language of explanations, we propose the visual communication medium of level-set examples: the set of inputs to a computational model which invoke a specified response.
We develop a Markov chain Monte Carlo inference technique for finding examples on the level set, and we show how these examples can be used to gain insight into model decision-making. This transparency-by-example technique can be used to find adversarial examples, to assess domain adaptation, and to understand model extrapolation behaviors.
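The core idea of sampling a level set with MCMC can be illustrated with a minimal sketch. This is not the thesis implementation: the toy model `f`, the target level `c`, and all hyperparameters (`step`, `tol`, starting point) are illustrative assumptions. A Metropolis-Hastings chain targets a density that concentrates on inputs where the model's output is close to the specified response.

```python
import math
import random

def f(x):
    # Toy "model" under explanation: a quadratic response surface.
    return x[0] ** 2 + x[1] ** 2

def level_set_samples(f, c, n_steps=5000, step=0.25, tol=0.05, seed=0):
    """Metropolis-Hastings with unnormalized target exp(-(f(x)-c)^2 / (2*tol^2)),
    which concentrates probability mass near the level set {x : f(x) = c}."""
    rng = random.Random(seed)
    x = [1.0, 0.0]  # illustrative starting point on the level set f(x) = 1
    log_p = -((f(x) - c) ** 2) / (2 * tol ** 2)
    samples = []
    for _ in range(n_steps):
        # Gaussian random-walk proposal around the current input.
        prop = [xi + rng.gauss(0.0, step) for xi in x]
        log_p_prop = -((f(prop) - c) ** 2) / (2 * tol ** 2)
        # Accept with probability min(1, p(prop)/p(x)).
        if math.log(rng.random() + 1e-300) < log_p_prop - log_p:
            x, log_p = prop, log_p_prop
        samples.append(list(x))
    return samples

samples = level_set_samples(f, c=1.0)
# Fraction of chain states whose model output lies near the target level.
near = sum(abs(f(s) - 1.0) < 0.2 for s in samples) / len(samples)
```

Retained states cluster around the circle f(x) = 1, so inspecting them reveals the full set of inputs that provoke the same model response, rather than a single example.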
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020. Cataloged from the official PDF of the thesis. Includes bibliographical references (pages 71-78).