DSpace@MIT


Explainable AI foundations to support human-robot teaching and learning

Author(s)
Booth, Serena Lynn.
Download: 1192462745-MIT.pdf (16.03 MB)
Alternative title
Explainable artificial intelligence foundations to support human-robot teaching and learning
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Julie A. Shah.
Terms of use
MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, available at http://dspace.mit.edu/handle/1721.1/7582.
Abstract
In human-to-human communication, the dual interactions of teaching and learning are indispensable to knowledge transfer. Explanations are core to these interactions: in humans, explanations support model comparison and reconciliation. In this thesis, we advocate for building an intuitive 'language of explanations' to enable knowledge transfer between humans and robots. Explanations can take many forms: logical statements, counterfactual realities, saliency maps, diagrams, and visualizations. While all these explanation forms are potential constituents of a language of explanations, we focus on two candidate forms: propositional logic and level set examples. Propositional logic is often assumed to be a shared language between human and machine, and is therefore frequently proposed as an explanation medium. For propositional logic to meet this expectation, people must be able to interpret and interact with these explanations.
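
As a minimal illustration (not part of the thesis record; a sketch assuming Python with the sympy library and a hypothetical weather-domain theory), the same propositional theory can be compiled into the different normal forms that the knowledge compilation map organizes, with negation landing on individual predicates or on whole connectives depending on the form:

from sympy import symbols
from sympy.logic.boolalg import Implies, Not, to_cnf, to_dnf

# Hypothetical theory: if it rained or the sprinkler ran, the grass is wet.
rain, sprinkler, wet = symbols('rain sprinkler wet')
theory = Implies(rain | sprinkler, wet)

# The same theory in two compiled forms; note where negation appears.
print(to_cnf(theory))       # (wet | ~rain) & (wet | ~sprinkler)
print(to_dnf(theory))       # wet | (~rain & ~sprinkler)

# Negating the whole theory pushes negation onto the consequent.
print(to_cnf(Not(theory)))  # (rain | sprinkler) & ~wet

The compiled forms are logically equivalent, so a machine treats them interchangeably; whether a person can interpret each form equally well is the empirical question the thesis takes up.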
 
We divide the space of propositional theories according to the knowledge compilation map, and we assess whether each form enables human interpretation. We find that humans are remarkably robust when interacting with various logical forms. However, human interpretation of propositional logic is challenged by negation, both of individual predicates and of logical connectives. Further, while machine computations on logical formulas are invariant to domain, human interpretation is not: the domain in which a formula is phrased affects how readily people understand it. While propositional logic is an expressive medium, it is often insufficient for high-dimensional data. To complement logic in a language of explanations, we propose the visual communication medium of level set examples: the set of inputs to a computational model that invoke a specified response. We develop a Markov Chain Monte Carlo inference technique for finding examples on the level set, and we show how these examples can be used to gain insight into model decision-making.
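
As a sketch of the general idea (hypothetical, not the thesis implementation; assuming Python with numpy and a stand-in model f), a random-walk Metropolis-Hastings sampler can target the level set {x : f(x) ≈ c} by treating closeness to the target response c as an unnormalized density:

import numpy as np

def f(x):
    # Stand-in model: any function from inputs to a scalar response works here.
    return np.sum(x ** 2)

def level_set_mcmc(f, c, x0, n_steps=10_000, step=0.1, sigma=0.05):
    """Random-walk Metropolis-Hastings targeting exp(-(f(x)-c)^2 / (2*sigma^2))."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    log_p = -(f(x) - c) ** 2 / (2 * sigma ** 2)
    samples = []
    for _ in range(n_steps):
        proposal = x + step * rng.standard_normal(x.shape)
        log_p_prop = -(f(proposal) - c) ** 2 / (2 * sigma ** 2)
        # Symmetric proposal, so the acceptance ratio is just the density ratio.
        if np.log(rng.random()) < log_p_prop - log_p:
            x, log_p = proposal, log_p_prop
        samples.append(x.copy())
    return np.array(samples)

# Sample inputs on the level set {x : f(x) ≈ 1}, here the unit circle.
samples = level_set_mcmc(f, c=1.0, x0=[1.0, 0.0])
print(samples[-3:])

Pointed at, say, a classifier's confidence in a chosen label rather than this toy f, the same recipe surfaces the diverse inputs that provoke a specified response, which is what makes level set examples informative about model decision-making.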
 
We show how this transparency-by-example technique can be used to find adversarial examples, to assess domain adaptation, and to understand model extrapolation behaviors.
 
Description
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020
 
Cataloged from the official PDF of thesis.
 
Includes bibliographical references (pages 71-78).
 
Date issued
2020
URI
https://hdl.handle.net/1721.1/127335
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses
