
Overcoming Blind Spots in the Real World: Leveraging Complementary Abilities for Joint Execution

Author(s)
Ramakrishnan, Ramya; Kamar, Ece; Nushi, Besmira; Dey, Debadeepta; Shah, Julie A.; Horvitz, Eric
Download
Accepted version (1002 KB)
Open Access Policy

Terms of use
Creative Commons Attribution-NonCommercial-ShareAlike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Abstract
© 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Simulators are increasingly used to train agents before deploying them in real-world environments. While training in simulation provides a cost-effective way to learn, poorly modeled aspects of the simulator can lead to costly mistakes, or blind spots. While humans can help guide an agent toward identifying these error regions, humans themselves have blind spots and noise in execution. We study how learning about the blind spots of both can be used to manage hand-off decisions when humans and agents act jointly in the real world, in which neither has been fully trained or evaluated. The formulation assumes that agent blind spots result from representational limitations of the simulation world, which lead the agent to ignore important features that are relevant for acting in the open world. Our approach to blind spot discovery combines experiences collected in simulation with limited human demonstrations. The first step applies imitation learning to the demonstration data to identify important features that the human is using but the agent is missing. The second step uses noisy labels, extracted from action mismatches between the agent and the human across simulation and demonstration data, to train blind spot models. Through experiments on two domains, we show that our approach learns a succinct representation that accurately captures blind spot regions and avoids dangerous errors in the real world by transferring control between the agent and the human.
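
The abstract describes a two-step pipeline: imitation learning on human demonstrations to surface features that the agent's simulation-limited representation is missing, followed by a blind spot model trained on noisy labels derived from agent/human action mismatches, which then drives hand-off decisions during joint execution. The snippet below is a minimal sketch of that pipeline on synthetic data, assuming scikit-learn classifiers as stand-ins for the paper's learners; the data, feature-importance threshold, and hand-off rule are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic open-world states: columns 0-1 exist in the simulator's representation;
# column 2 stands in for a feature the simulator omits (e.g., a hazard indicator)
# that the human demonstrator reacts to.
states = rng.normal(size=(500, 3))
agent_actions = (states[:, 0] > 0).astype(int)                  # simulation-trained policy
human_actions = np.where(states[:, 2] > 1.0, 1, agent_actions)  # demonstrations override near the hazard

# Step 1: imitation learning on the demonstrations; features important to the
# imitator but absent from the agent's simulation representation are candidates
# for what the agent is missing.
imitator = DecisionTreeClassifier(max_depth=3, random_state=0).fit(states, human_actions)
simulated_features = {0, 1}
missing = [f for f in np.flatnonzero(imitator.feature_importances_ > 0.05)
           if f not in simulated_features]
print("candidate missing features:", missing)

# Step 2: noisy blind-spot labels from agent/human action mismatches, used to
# train a blind spot model over the augmented (simulation + missing) features.
mismatch = (agent_actions != human_actions).astype(int)
blind_spot_model = LogisticRegression(class_weight="balanced").fit(states, mismatch)

# Joint execution: hand control to the human when the predicted blind-spot
# probability of the current state crosses a (hypothetical) threshold.
def choose_actor(state, threshold=0.5):
    p = blind_spot_model.predict_proba(state.reshape(1, -1))[0, 1]
    return "human" if p > threshold else "agent"

print(choose_actor(np.array([-1.5, 0.2, 2.5])))  # query with the hazard feature active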
Date issued
2019-07
URI
https://hdl.handle.net/1721.1/137315
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
Proceedings of the AAAI Conference on Artificial Intelligence
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Citation
Ramakrishnan, Ramya, Kamar, Ece, Nushi, Besmira, Dey, Debadeepta, Shah, Julie A. et al. 2019. "Overcoming Blind Spots in the Real World: Leveraging Complementary Abilities for Joint Execution." Proceedings of the AAAI Conference on Artificial Intelligence, 33.
Version: Author's final manuscript

Collections
  • MIT Open Access Articles
