| dc.contributor.advisor | Shah, Julie | |
| dc.contributor.advisor | Agrawal, Pulkit | |
| dc.contributor.author | Peng, Andi | |
| dc.date.accessioned | 2023-03-31T14:40:22Z | |
| dc.date.available | 2023-03-31T14:40:22Z | |
| dc.date.issued | 2023-02 | |
| dc.date.submitted | 2023-02-28T14:35:59.763Z | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/150218 | |
| dc.description.abstract | As robots are increasingly deployed in real-world environments, a key question becomes how to best teach them to accomplish tasks that end users want. A critical problem suffered by current robot reward and imitation learning approaches is that of representation misalignment, where the robot’s learned task representation does not fully capture the end user’s true task representation. In this work, we contend that because human users will be the ultimate evaluators of system performance in the world, it is crucial that we explicitly focus our efforts on leveraging them to detect and fix representation misalignment prior to attempting to learn their desired task. We advocate that current representation learning approaches can be studied under a single unifying formalism: the representation alignment problem. We mathematically operationalize this problem, define its desiderata, and situate current robot learning methods within this formalism. We then explore the feasibility of applying this formalism to robots trained end-to-end on visual input, where deployment failures can be caused by two types of error: errors due to an inability to infer the user’s true reward vs. errors due to not knowing how to take correct actions in the desired state. We develop a human-in-the-loop framework—DFA (Diagnosis, Feedback, Adaptation)—to query for user feedback to perform efficient policy adaptation. In experiments with real human users in both discrete and continuous control domains, we show that our framework can help users diagnose the underlying source of representation misalignment more accurately than from robot behaviour alone. To conclude, we show how to leverage this feedback to improve model performance while minimizing human effort, and discuss open challenges of using humans to detect and fix representation misalignment. | |
| dc.publisher | Massachusetts Institute of Technology | |
| dc.rights | In Copyright - Educational Use Permitted | |
| dc.rights | Copyright MIT | |
| dc.rights.uri | http://rightsstatements.org/page/InC-EDU/1.0/ | |
| dc.title | Leveraging Humans to Detect and Fix Representation Misalignment | |
| dc.type | Thesis | |
| dc.description.degree | S.M. | |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| dc.identifier.orcid | 0000-0001-8136-6175 | |
| mit.thesis.degree | Master | |
| thesis.degree.name | Master of Science in Electrical Engineering and Computer Science | |