Leveraging Humans to Detect and Fix Representation Misalignment

Author(s)
Peng, Andi
Download
Thesis PDF (2.987 MB)
Advisor
Shah, Julie
Agrawal, Pulkit
Terms of use
In Copyright - Educational Use Permitted. Copyright MIT. http://rightsstatements.org/page/InC-EDU/1.0/
Abstract
As robots are increasingly deployed in real-world environments, a key question becomes how to best teach them to accomplish tasks that end users want. A critical problem suffered by current robot reward and imitation learning approaches is that of representation misalignment, where the robot's learned task representation does not fully capture the end user's true task representation. In this work, we contend that because human users will be the ultimate evaluators of system performance in the world, it is crucial that we explicitly focus our efforts on leveraging them to detect and fix representation misalignment prior to attempting to learn their desired task. We advocate that current representation learning approaches can be studied under a single unifying formalism: the representation alignment problem. We mathematically operationalize this problem, define its desiderata, and situate current robot learning methods within this formalism. We then explore the feasibility of applying this formalism to robots trained end-to-end on visual input, where deployment failures can be caused by two types of error: errors due to an inability to infer the user's true reward vs. errors due to not knowing how to take correct actions in the desired state. We develop a human-in-the-loop framework, DFA (Diagnosis, Feedback, Adaptation), that queries for user feedback to perform efficient policy adaptation. In experiments with real human users in both discrete and continuous control domains, we show that our framework can help users diagnose the underlying source of representation misalignment more accurately than from robot behaviour alone. To conclude, we show how to leverage this feedback to improve model performance while minimizing human effort and discuss open challenges of using humans to detect and fix representation misalignment.
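
The thesis's formal definitions are not reproduced in this record, but as a rough sketch (all notation below is an assumption, not the thesis's), the representation alignment problem named in the abstract can be read as asking whether the user's true reward is expressible through the robot's learned features:

% Hedged sketch of representation (mis)alignment; notation assumed, not
% taken from the thesis. The user's reward r_H factors through a human
% feature map \phi_H, the robot learns its own map \phi_R, and alignment
% asks whether some readout g over \phi_R recovers r_H on the task
% distribution \mathcal{D}.
r_H(s,a) = f\big(\phi_H(s,a)\big), \qquad
\mathrm{misalignment}(\phi_R) = \min_{g}\;
\mathbb{E}_{(s,a)\sim\mathcal{D}}
\Big[\big(f(\phi_H(s,a)) - g(\phi_R(s,a))\big)^{2}\Big].

On this reading, \phi_R is aligned when the misalignment term can be driven to zero; the two deployment error types in the abstract then roughly correspond to a wrong readout (a misinferred reward) versus a policy that acts incorrectly even given the right reward.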
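The abstract describes DFA only at a high level; the toy loop below shows the general shape of a diagnosis-feedback-adaptation cycle. The domain, the user_approves oracle, and every function name are illustrative assumptions, not the thesis's method or code; in particular, the actual framework works with visual policies and attributes each failure to reward inference vs. action execution, which this sketch skips.

# Minimal, self-contained sketch of a Diagnosis-Feedback-Adaptation
# (DFA) style human-in-the-loop loop. All names and the toy domain are
# hypothetical illustrations based on the abstract.

STATES = list(range(5))

def user_true_action(s):
    # Hypothetical stand-in for the end user's desired behaviour.
    return s % 2

def user_approves(s, a):
    # User oracle: would the user accept action a in state s?
    return a == user_true_action(s)

def dfa_round(policy):
    """One round: diagnose failures, gather feedback, adapt the policy."""
    # Roll out the current policy.
    traj = [(s, policy(s)) for s in STATES]
    # Diagnosis: states the user flags as misaligned behaviour.
    failures = [s for s, a in traj if not user_approves(s, a)]
    if not failures:
        return policy, True
    # Feedback: query the user for the correct action in flagged states.
    corrections = {s: user_true_action(s) for s in failures}
    # Adaptation: patch the policy with the user's corrections.
    return (lambda s: corrections.get(s, policy(s))), False

policy = lambda s: 0  # a deliberately misaligned initial policy
for _ in range(3):
    policy, converged = dfa_round(policy)
    if converged:
        break

print([policy(s) for s in STATES])  # -> [0, 1, 0, 1, 0]

The point of the sketch is the division of labour: the human only judges and corrects behaviour, while the system decides what to show and how to fold the feedback back into the policy, which is where the efficiency claims in the abstract live.
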
Date issued
2023-02
URI
https://hdl.handle.net/1721.1/150218
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses
