Show simple item record

dc.contributor.advisor  Julie A. Shah.  en_US
dc.contributor.author  Booth, Serena Lynn.  en_US
dc.contributor.other  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.  en_US
dc.date.accessioned  2020-09-15T21:52:48Z
dc.date.available  2020-09-15T21:52:48Z
dc.date.copyright  2020  en_US
dc.date.issued  2020  en_US
dc.identifier.uri  https://hdl.handle.net/1721.1/127335
dc.description  Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020  en_US
dc.description  Cataloged from the official PDF of thesis.  en_US
dc.description  Includes bibliographical references (pages 71-78).  en_US
dc.description.abstract  In human-to-human communication, the dual interactions of teaching and learning are indispensable to knowledge transfer. Explanations are core to these interactions: in humans, explanations support model comparison and reconciliation. In this thesis, we advocate for building an intuitive 'language of explanations' to enable knowledge transfer between humans and robots. Explanations can take many forms: logical statements, counterfactual realities, saliency maps, diagrams, and visualizations. While all these explanation forms are potential constituents of a language of explanations, we focus on two candidate forms: propositional logic and level set examples. Propositional logic is often assumed to be a shared language between human and machine, and is therefore often proposed as an explanation medium. For propositional logic to meet this expectation, people must be able to interpret and interact with these explanations.  en_US
dc.description.abstract  We divide the space of propositional theories according to the knowledge compilation map, and we assess whether each form enables human interpretation. We find that humans are remarkably robust to interacting with various logical forms. However, human interpretation of propositional logic is challenged by negation of individual predicates and of logical connectives. Further, while machine computations on logical formulas are invariant to domain, human interpretation may be challenged. While propositional logic is an expressive medium, it is often insufficient for high dimensional data. To complement logic in a language of explanations, we propose the visual communication medium of level set examples: the set of inputs to a computational model which invoke a specified response. We develop a Markov Chain Monte Carlo inference technique for finding examples on the level set, and we show how these examples can be used to gain insight into model decision-making.  en_US
dc.description.abstract  We show how this transparency-by-example technique can be used to find adversarial examples, to assess domain adaptation, and to understand model extrapolation behaviors.  en_US
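The abstract's core idea of level set examples — inputs to a model that invoke a specified response, found via Markov Chain Monte Carlo — can be illustrated with a generic Metropolis-Hastings chain. The quadratic stand-in model, target level, step size, and tolerance below are all illustrative assumptions, not the thesis's actual formulation:

```python
import math
import random

def f(x):
    # Hypothetical stand-in model; the thesis targets arbitrary
    # computational models. Its level set {x : f(x) = 1} is a circle.
    return x[0] ** 2 + x[1] ** 2

def level_set_mcmc(f, target, n_samples=2000, step=0.1, sigma=0.05, seed=0):
    """Metropolis-Hastings sampling of inputs x with f(x) close to target.

    The unnormalized density exp(-(f(x) - target)^2 / (2 sigma^2))
    concentrates probability mass near the level set {x : f(x) = target}.
    """
    rng = random.Random(seed)
    x = [rng.uniform(-2, 2), rng.uniform(-2, 2)]
    log_p = -(f(x) - target) ** 2 / (2 * sigma ** 2)
    samples = []
    for _ in range(n_samples):
        # Gaussian random-walk proposal around the current input.
        prop = [xi + rng.gauss(0, step) for xi in x]
        log_p_prop = -(f(prop) - target) ** 2 / (2 * sigma ** 2)
        # Accept with probability min(1, p(prop) / p(x)).
        if math.log(rng.random()) < log_p_prop - log_p:
            x, log_p = prop, log_p_prop
        samples.append(list(x))
    return samples

samples = level_set_mcmc(f, target=1.0)
# After burn-in, samples should cluster near the level set f(x) = 1.
tail = samples[-500:]
mean_err = sum(abs(f(s) - 1.0) for s in tail) / len(tail)
```

Inspecting such samples (e.g., plotting them, or comparing them against training data) is the sense in which the abstract's "transparency-by-example" gives insight into model decision-making: the level set makes visible which inputs the model treats as equivalent.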
dc.description.statementofresponsibility  by Serena Lynn Booth.  en_US
dc.format.extent  78 pages  en_US
dc.language.iso  eng  en_US
dc.publisher  Massachusetts Institute of Technology  en_US
dc.rights  MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided.  en_US
dc.rights.uri  http://dspace.mit.edu/handle/1721.1/7582  en_US
dc.subject  Electrical Engineering and Computer Science.  en_US
dc.title  Explainable AI foundations to support human-robot teaching and learning  en_US
dc.title.alternative  Explainable artificial intelligence foundations to support human-robot teaching and learning  en_US
dc.type  Thesis  en_US
dc.description.degree  S.M.  en_US
dc.contributor.department  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  en_US
dc.identifier.oclc  1192462745  en_US
dc.description.collection  S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science  en_US
dspace.imported  2020-09-15T21:52:47Z  en_US
mit.thesis.degree  Master  en_US
mit.thesis.department  EECS  en_US

