
Anomaly detection through explanations

Author(s)
Gilpin, Leilani Hendrina.
Download: 1227518564-MIT.pdf (8.996 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Gerald Jay Sussman and Lalana Kagal.
Terms of use
MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, available at http://dspace.mit.edu/handle/1721.1/7582.
Abstract
Under most conditions, complex machines are imperfect. When errors occur, as they inevitably will, these machines need to be able to (1) localize the error and (2) take appropriate action to mitigate the repercussions of a possible failure. My thesis contributes a system architecture that reconciles local errors and inconsistencies amongst parts. I represent a complex machine as a hierarchical model of introspective subsystems working together towards a common goal. The subsystems communicate in a common symbolic language. In the process of this investigation, I constructed a set of reasonableness monitors to diagnose and explain local errors, and a system-wide architecture, Anomaly Detection through Explanations (ADE), which reconciles system-wide failures. The ADE architecture contributes an explanation synthesizer that produces an argument tree, which in turn can be backtracked and queried for support and counterfactual explanations. I have applied my results to explain incorrect labels in semi-autonomous vehicle data. A series of test simulations shows the accuracy and performance of this architecture based on real-world, anomalous driving scenarios. My work has opened up the new area of explanatory anomaly detection, towards a vision in which complex machines will be articulate by design; dynamic, internal explanations will be part of the design criteria; and system-level explanations will be able to be challenged in an adversarial proceeding.
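To illustrate the kind of structure the abstract describes, the sketch below shows a minimal argument tree that can be backtracked for supporting evidence and queried for a simple counterfactual. It is an assumption-laden illustration, not code from the thesis: the names (ArgumentNode, supporting_evidence, counterfactual) and the toy driving example are hypothetical.

```python
# Hypothetical sketch of an argument tree supporting explanation queries.
# All names and the example scenario are illustrative assumptions,
# not drawn from the ADE implementation described in the thesis.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ArgumentNode:
    claim: str                                # conclusion asserted at this node
    support: List["ArgumentNode"] = field(default_factory=list)  # sub-arguments backing it
    source: Optional[str] = None              # subsystem that produced the claim

    def supporting_evidence(self) -> List[str]:
        """Backtrack through the tree and collect every leaf-level claim."""
        if not self.support:
            return [self.claim]
        evidence: List[str] = []
        for child in self.support:
            evidence.extend(child.supporting_evidence())
        return evidence

    def counterfactual(self, rejected_claim: str) -> List[str]:
        """Return the supporting claims that remain if one claim is rejected."""
        return [c for c in self.supporting_evidence() if c != rejected_claim]


# Toy example: arguing that a perception label is anomalous.
root = ArgumentNode(
    claim="label 'pedestrian' is anomalous",
    support=[
        ArgumentNode(claim="tracked object speed exceeds 20 m/s", source="tracker"),
        ArgumentNode(claim="pedestrians rarely exceed 4 m/s", source="commonsense monitor"),
    ],
)
print(root.supporting_evidence())
print(root.counterfactual("tracked object speed exceeds 20 m/s"))
```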
Description
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September 2020
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 211-230).
Date issued
2020
URI
https://hdl.handle.net/1721.1/129250
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Doctoral Theses
