Representing, learning, and controlling complex object interactions
Author(s)
Zhou, Yilun; Burchfiel, Benjamin; Konidaris, George
Download: 10514_2018_Article_9740.pdf (1.884 MB)
Terms of use
Publisher with Creative Commons License: Creative Commons Attribution
Abstract
We present a framework for representing scenarios with complex object interactions, where a robot cannot directly interact with the object it wishes to control and must instead influence it via intermediate objects. For instance, a robot learning to drive a car can only change the car's pose indirectly via the steering wheel, and must represent and reason about the relationship between its own grippers and the steering wheel, and the relationship between the steering wheel and the car. We formalize these interactions as chains and graphs of Markov decision processes (MDPs) and show how such models can be learned from data. We also consider how they can be controlled given known or learned dynamics. We show that our complex model can be collapsed into a single MDP and solved to find an optimal policy for the combined system. Since the resulting MDP may be very large, we also introduce a planning algorithm that efficiently produces a potentially suboptimal policy. We apply these models to two systems in which a robot uses learning from demonstration to achieve indirect control: playing a computer game using a joystick, and using a hot water dispenser to heat a cup of water.
Keywords
Robotics, Task representation, Task learning, Markov decision process
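The idea of collapsing a chain of MDPs into a single combined MDP can be illustrated with a toy sketch (this is not the paper's implementation; all names, state spaces, and dynamics below are illustrative assumptions). The robot acts directly on the first MDP, and the first MDP's state serves as the "action" driving the second:

```python
# Hypothetical sketch: two deterministic MDPs in a chain, collapsed into one
# product MDP. The agent's action moves the wheel; the wheel's new state
# drives the car, echoing the steering-wheel example from the abstract.
from itertools import product

WHEEL_STATES = (-1, 0, 1)          # first MDP: wheel angle
ACTIONS = ("left", "hold", "right")
CAR_STATES = (0, 1, 2, 3)          # second MDP: car heading (quadrants)

def wheel_step(angle, action):
    # Direct dynamics: the agent turns the wheel, clamped to [-1, 1].
    delta = {"left": -1, "hold": 0, "right": 1}[action]
    return max(-1, min(1, angle + delta))

def car_step(heading, wheel_angle):
    # Indirect dynamics: the wheel state, not the agent, drives the car.
    return (heading + wheel_angle) % 4

def combined_step(state, action):
    # Collapsed MDP: state is the pair (wheel, car); one agent action
    # propagates through the chain in a single transition.
    wheel, car = state
    wheel2 = wheel_step(wheel, action)
    car2 = car_step(car, wheel2)
    return (wheel2, car2)

# Flat transition table of the collapsed MDP, ready for any standard solver.
flat = {
    (s, a): combined_step(s, a)
    for s, a in product(product(WHEEL_STATES, CAR_STATES), ACTIONS)
}
```

Note that the collapsed state space is the product of the component state spaces (here 3 × 4 states), which grows quickly with chain length; this blow-up is what motivates the paper's efficient, potentially suboptimal planner.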
Date issued
2018-04
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Autonomous Robots
Publisher
Springer US
Citation
Zhou, Yilun, et al. “Representing, Learning, and Controlling Complex Object Interactions.” Autonomous Robots, Apr. 2018. © 2018 The Authors
Version: Final published version
ISSN
0929-5593
1573-7527