Simple item record

dc.contributor.author: Shah, Julie A
dc.contributor.author: Nikolaidis, Stefanos
dc.contributor.author: Ramakrishnan, Ramya
dc.contributor.author: Gu, Keren
dc.date.accessioned: 2017-04-05T20:03:20Z
dc.date.available: 2017-04-05T20:03:20Z
dc.date.issued: 2015-03
dc.identifier.isbn: 9781450328838
dc.identifier.uri: http://hdl.handle.net/1721.1/107887
dc.description.abstract: We present a framework for automatically learning human user models from joint-action demonstrations that enables a robot to compute a robust policy for a collaborative task with a human. First, the demonstrated action sequences are clustered into different human types using an unsupervised learning algorithm. A reward function is then learned for each type through the employment of an inverse reinforcement learning algorithm. The learned model is then incorporated into a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer online the human type of a new user that was not included in the training set, and can compute a policy for the robot that will be aligned to the preference of this user. In a human subject experiment (n=30), participants agreed more strongly that the robot anticipated their actions when working with a robot incorporating the proposed framework (p<0.01), compared to manually annotating robot actions. In trials where participants faced difficulty annotating the robot actions to complete the task, the proposed framework significantly improved team efficiency (p<0.01). The robot incorporating the framework was also found to be more responsive to human actions compared to policies computed using a hand-coded reward function by a domain expert (p<0.01). These results indicate that learning human user models from joint-action demonstrations and encoding them in a MOMDP formalism can support effective teaming in human-robot collaborative tasks. [en_US]
dc.language.iso: en_US
dc.publisher: Association for Computing Machinery (ACM) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1145/2696454.2696455 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: MIT web domain [en_US]
dc.title: Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Nikolaidis, Stefanos, Ramya Ramakrishnan, Keren Gu, and Julie Shah. “Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks.” Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '15). ACM Press, 2015. 189–196. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.contributor.mitauthor: Shah, Julie A
dc.contributor.mitauthor: Nikolaidis, Stefanos
dc.contributor.mitauthor: Ramakrishnan, Ramya
dc.contributor.mitauthor: Gu, Keren
dc.relation.journal: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction - HRI '15 [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dspace.orderedauthors: Nikolaidis, Stefanos; Ramakrishnan, Ramya; Gu, Keren; Shah, Julie [en_US]
dspace.embargo.terms: N [en_US]
dc.identifier.orcid: https://orcid.org/0000-0003-1338-8107
dc.identifier.orcid: https://orcid.org/0000-0001-8239-5963
mit.license: OPEN_ACCESS_POLICY [en_US]
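
The abstract above outlines a three-stage pipeline: cluster joint-action demonstrations into human types, learn a per-type reward function via inverse reinforcement learning, and fold the result into a MOMDP whose partially observable variable is the human type. The short Python sketch below illustrates only the shape of that pipeline, under toy assumptions: k-means stands in for the paper's unsupervised clustering step, mean-feature matching stands in for IRL, and a softmax action likelihood drives the belief update over the hidden type. Every function name and the action-count feature encoding are illustrative, not the authors' implementation.

    # Toy sketch of the pipeline described in the abstract (illustrative
    # assumptions throughout; not the paper's implementation).
    import numpy as np
    from sklearn.cluster import KMeans

    N_ACTIONS = 4  # size of a toy joint-action vocabulary

    def demo_to_features(demo):
        """Encode a demonstrated action sequence as normalized action counts."""
        counts = np.bincount(demo, minlength=N_ACTIONS).astype(float)
        return counts / counts.sum()

    def cluster_types(demos, n_types=2, seed=0):
        """Step 1: unsupervised clustering of demonstrations into human types."""
        X = np.array([demo_to_features(d) for d in demos])
        labels = KMeans(n_clusters=n_types, n_init=10, random_state=seed).fit_predict(X)
        return X, labels

    def learn_rewards(X, labels, n_types):
        """Step 2 (stand-in for IRL): one reward weight vector per type.

        Matching mean demonstration features is the first-order condition of
        maximum-entropy IRL, so cluster means serve as a crude reward proxy.
        """
        return np.array([X[labels == t].mean(axis=0) for t in range(n_types)])

    def update_type_belief(belief, action, rewards, temp=0.1):
        """Step 3: Bayesian belief update over the partially observable type.

        Each type predicts actions via a softmax over its reward weights;
        observing the user's action re-weights the belief, as the MOMDP's
        observation model would.
        """
        likelihoods = np.array([
            np.exp(r / temp)[action] / np.exp(r / temp).sum() for r in rewards
        ])
        posterior = belief * likelihoods
        return posterior / posterior.sum()

    # Toy usage: two styles of demonstrator, then online inference for a new user.
    rng = np.random.default_rng(0)
    demos = [rng.choice(N_ACTIONS, size=20, p=[.6, .2, .1, .1]) for _ in range(5)] + \
            [rng.choice(N_ACTIONS, size=20, p=[.1, .1, .2, .6]) for _ in range(5)]
    X, labels = cluster_types(demos, n_types=2)
    rewards = learn_rewards(X, labels, n_types=2)
    belief = np.ones(2) / 2
    for a in [3, 3, 2, 3]:  # a new user who favors the second style
        belief = update_type_belief(belief, a, rewards)
    print("posterior over human type:", belief)

In the paper's formulation the belief update comes from the MOMDP's observation model and the robot's policy from solving the MOMDP itself; the sketch shows only how per-type rewards plus a type belief support online adaptation to a user not seen in training.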