Robots as Minions, Sidekicks, and Apprentices: Using Wearable Muscle, Brain, and Motion Sensors for Plug-and-Play Human-Robot Interaction
Author(s)
DelPreto, Joseph
Thesis PDF (24.72 MB)
Advisor
Rus, Daniela
Abstract
This thesis presents algorithms and systems that use unobtrusive wearable sensors for muscle, brain, and motion activity to enable more plug-and-play human-robot interactions. Detecting discrete commands and continuous motions creates a communication vocabulary for remote control or collaboration, and learning frameworks allow robots to generalize from these interactions. Each of these building blocks lowers the barrier for casual users to benefit from robots by reducing the training data, calibration data, and sensing hardware needed. This thesis thus takes a step towards more ubiquitous robot assistants that could extend humans’ capabilities and improve quality of life.
Classification and motion estimation algorithms create a plug-and-play vocabulary for robot control and teaching. Supervised learning pipelines detect directional gestures from muscle signals via electromyography (EMG), and unsupervised learning pipelines expand the vocabulary without requiring additional data collection. Classifiers also detect error judgments in brain signals via electroencephalography (EEG). Continuous motions are estimated in two ways. Arm or walking trajectories are estimated from an inertial measurement unit (IMU) by leveraging in-task EMG-based gestures that demarcate stationary waypoints; the paths are then refined in an apprenticeship phase using further gestures. Hand heights during lifting tasks are also estimated using EMG.
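To make the gesture-detection idea concrete, the sketch below shows a minimal supervised pipeline in the spirit described above: windowed EMG signals are reduced to root-mean-square amplitude features (a standard EMG feature) and classified by nearest centroid. The two-channel data, the `NearestCentroidGestureClassifier` class, and the "left"/"right" labels are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def rms_features(emg_window):
    # Root-mean-square amplitude per channel over a time window,
    # a common coarse measure of muscle activation level.
    return np.sqrt(np.mean(emg_window ** 2, axis=0))

class NearestCentroidGestureClassifier:
    """Toy supervised gesture classifier: one RMS centroid per gesture class."""

    def fit(self, windows, labels):
        feats = np.array([rms_features(w) for w in windows])
        labels = np.array(labels)
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array(
            [feats[labels == c].mean(axis=0) for c in self.labels_]
        )
        return self

    def predict(self, window):
        # Assign the window to the class whose centroid is nearest in feature space.
        dists = np.linalg.norm(self.centroids_ - rms_features(window), axis=1)
        return self.labels_[int(np.argmin(dists))]

# Synthetic demo: "left" gestures activate channel 0, "right" gestures channel 1.
rng = np.random.default_rng(0)
left = [rng.normal(0.0, [1.0, 0.1], size=(200, 2)) for _ in range(10)]
right = [rng.normal(0.0, [0.1, 1.0], size=(200, 2)) for _ in range(10)]
clf = NearestCentroidGestureClassifier().fit(left + right, ["left"] * 10 + ["right"] * 10)
print(clf.predict(rng.normal(0.0, [1.0, 0.1], size=(200, 2))))  # → left
```

The same windowed-feature structure would extend to richer features and classifiers; nearest centroid is used here only to keep the sketch self-contained.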
Two frameworks for learning by demonstration build on these foundations. A generalization algorithm uses a single example trajectory and a constraint library to synthesize trajectories with similar behaviors in new task configurations. Alternatively, for tasks where the robot can autonomously explore behaviors, an apprenticeship framework augments self-supervision with intermittent demonstrations.
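As a minimal illustration of generalizing a single example trajectory to a new task configuration, the sketch below re-anchors one demonstrated path to a new start and goal while preserving its shape. This is a simple linear warp written for this page, not the thesis's constraint-library method; the function name and the 2-D arc demo are assumptions.

```python
import numpy as np

def generalize_trajectory(demo, new_start, new_goal):
    """Warp a single demonstrated trajectory so it connects a new start
    and goal, keeping each waypoint's progress and residual offset."""
    demo = np.asarray(demo, dtype=float)
    new_start = np.asarray(new_start, dtype=float)
    new_goal = np.asarray(new_goal, dtype=float)
    d0, d1 = demo[0], demo[-1]
    span_old = d1 - d0                      # demo's start-to-goal axis
    span_new = new_goal - new_start         # new start-to-goal axis
    # Progress of each waypoint along the old axis (0 at start, 1 at goal)...
    alphas = (demo - d0) @ span_old / np.dot(span_old, span_old)
    # ...and its residual offset from that axis, which encodes the shape.
    residual = demo - (d0 + np.outer(alphas, span_old))
    return new_start + np.outer(alphas, span_new) + residual

# A short arced demo from (0, 0) to (1, 0), replayed between (2, 2) and (4, 2).
demo = np.array([[0.0, 0.0], [0.5, 0.4], [1.0, 0.0]])
new = generalize_trajectory(demo, [2.0, 2.0], [4.0, 2.0])
print(new)  # endpoints land exactly on the new start and goal
```

The endpoints map exactly to the new configuration while the arc's bulge is carried over as a residual, which is the essence of shape-preserving one-shot generalization.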
Systems deploy and evaluate these algorithms across three interaction paradigms. Subjects supervise and teleoperate robot minions that perform object selection or navigation in mock safety-critical or inaccessible settings. Robot sidekicks collaborate with users to jointly lift objects and perform assemblies. Finally, robot apprentices generalize cable-routing trajectories or grasping orientations from a few human demonstrations. Experiments with each system evaluate classification or motion estimation performance and user interface efficacy.
This thesis thus aims to enhance and simplify human-robot interaction in a variety of settings. Allowing more people to explore novel uses for robots could take a step towards ubiquitous robot assistants that have captured imaginations for decades.
Date issued
2021-09
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology