Improving Robot Controller Transparency Through Autonomous Policy Explanation
Author(s)
Hayes, Bradley H; Shah, Julie A
Download: hri17.pdf (1.277 MB)
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Metadata
Abstract
Shared expectations and mutual understanding are critical facets of teamwork. Achieving these in human-robot collaborative contexts can be especially challenging, as humans and robots are unlikely to share a common language to convey intentions, plans, or justifications. Even in cases where human co-workers can inspect a robot's control code, and particularly when statistical methods are used to encode control policies, there is no guarantee that meaningful insights into a robot's behavior can be derived or that a human will be able to efficiently isolate the behaviors relevant to the interaction. We present a series of algorithms and an accompanying system that enables robots to autonomously synthesize policy descriptions and respond to both general and targeted queries by human collaborators. We demonstrate applicability to a variety of robot controller types including those that utilize conditional logic, tabular reinforcement learning, and deep reinforcement learning, synthesizing informative policy descriptions for collaborators and facilitating fault diagnosis by non-experts.
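As an illustration of the kind of policy summarization the abstract describes, the sketch below shows one minimal (and entirely hypothetical) approach for a tabular reinforcement learning controller: group states by their greedy action, then describe each group by the state attributes its members share. This is not the authors' algorithm, only a toy example of turning a learned policy into a human-readable description; the Q-table, states, and attributes are invented for demonstration.

```python
# Illustrative sketch (not the paper's method): summarize a tabular RL
# policy by grouping states under their greedy action and reporting the
# attributes common to every state in each group.

from collections import defaultdict


def greedy_policy(q_table):
    """Map each state to its highest-valued action."""
    return {s: max(actions, key=actions.get) for s, actions in q_table.items()}


def describe_policy(q_table, state_attrs):
    """Group states by greedy action, then return the attributes shared
    by all states in each group as a crude policy 'explanation'."""
    groups = defaultdict(list)
    for state, action in greedy_policy(q_table).items():
        groups[action].append(state)
    descriptions = {}
    for action, states in groups.items():
        shared = set.intersection(*(set(state_attrs[s]) for s in states))
        descriptions[action] = sorted(shared)
    return descriptions


# Hypothetical toy domain: the robot should wait while the human is busy
# and hand over a part once the human is idle.
q = {
    "s0": {"wait": 1.0, "handover": 0.2},
    "s1": {"wait": 0.9, "handover": 0.1},
    "s2": {"wait": 0.1, "handover": 1.0},
}
attrs = {
    "s0": ["human_busy", "part_ready"],
    "s1": ["human_busy"],
    "s2": ["human_idle", "part_ready"],
}

print(describe_policy(q, attrs))
# "wait" covers s0 and s1, whose only shared attribute is "human_busy",
# yielding a description like: "I wait when the human is busy."
```

A system along the lines described in the abstract would go further, e.g. answering targeted queries ("when do you hand over the part?") by restricting the summary to the states selecting that action.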
Date issued
2017-03
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Journal
Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction - HRI '17
Publisher
Association for Computing Machinery (ACM)
Citation
Hayes, Bradley, and Julie A. Shah. “Improving Robot Controller Transparency Through Autonomous Policy Explanation.” Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction - HRI ’17 (2017).
Version: Author's final manuscript
ISBN
9781450343367