
dc.contributor.advisor: Leia A. Stirling (en_US)
dc.contributor.author: Siu, Ho Chit (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics (en_US)
dc.date.accessioned: 2018-11-28T15:41:42Z
dc.date.available: 2018-11-28T15:41:42Z
dc.date.copyright: 2018 (en_US)
dc.date.issued: 2018 (en_US)
dc.identifier.uri: http://hdl.handle.net/1721.1/119291
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018. (en_US)
dc.description: Cataloged from PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 129-142). (en_US)
dc.description.abstract: The operation of a powered exoskeleton is a type of human-robot interaction with extremely tight human-robot coupling. As exoskeletons become increasingly intelligent, it becomes appropriate to think of them not simply as tools, but rather as semi-autonomous teammates. This thesis explores the implementation, operation, and consequences of intelligent exoskeletons - teammates that move and adapt to the human to which they are physically coupled. Exoskeletons have potential applications in several domains, including strength augmentation, injury reduction, and rehabilitation. Appropriately mapping human intent to exoskeleton action is crucial. Generating this mapping can be difficult, as operator movements are constrained by the exoskeletons they are trying to control. This problem is particularly significant with upper-body exoskeletons, where high degrees of freedom allow for much less predictable motion than in the lower body. Surface electromyography (sEMG) - reading electrical signals from muscles - is one way to estimate human intent. sEMG contains anticipatory information that precedes the associated limb movement, allowing for better human-exoskeleton coordination than reactive control methods. However, sEMG is very sensitive to individual physiologies and sensor placement. We use machine learning from demonstration (LfD) to create personalized, robust sEMG mappings for exoskeleton control. We demonstrate classification of transient dynamic grasping gestures with data in which sEMG sensors on the forearm have been shifted from a nominal configuration. Next, sEMG-based gesture recognition is applied to exoskeleton control, where sEMG mappings are learned as the exoskeleton is controlled with pressure-based inputs. Finally, we analyze human-exoskeleton team performance, fluency, and adaptation using a pressure-based controller, a static sEMG mapping, and a dynamic sEMG mapping. We show that LfD allows us to use anticipatory signaling to reduce human-exoskeleton interaction pressure. Subjects were able to adapt to all three controllers, but team performance and fluency were affected by the controller type and order of exposure. These results have implications for future exoskeleton controller design and for exoskeleton operator training. They also open up new avenues of research in relation to adaptation to exoskeletons, intent classification algorithms, and the application of metrics from the human-robot interaction literature to the field of human-exoskeleton research. (en_US)
dc.description.statementofresponsibility: by Ho Chit Siu (en_US)
dc.format.extent: 185 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Aeronautics and Astronautics (en_US)
dc.title: Moving and adapting with a learning exoskeleton (en_US)
dc.type: Thesis (en_US)
dc.description.degree: Ph. D. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics (en_US)
dc.identifier.oclc: 1061558186 (en_US)
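
The sEMG-based gesture classification described in the abstract above can be sketched in a few lines of code. The sketch below is a minimal illustration, not the pipeline used in the thesis: it assumes windowed multi-channel forearm sEMG, simple mean-absolute-value and root-mean-square amplitude features, synthetic stand-in data, and a generic scikit-learn classifier standing in for the personalized mapping learned from demonstration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def emg_features(window):
    """Per-channel features for one sEMG window of shape (samples, channels).

    Mean absolute value and root-mean-square are common, simple sEMG
    amplitude features; the thesis may use a different feature set.
    """
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return np.concatenate([mav, rms])

# Synthetic stand-in data: 8 forearm channels, 200-sample windows,
# three hypothetical grasp gestures labeled 0, 1, 2.
rng = np.random.default_rng(0)
n_windows, n_samples, n_channels, n_gestures = 300, 200, 8, 3
labels = rng.integers(0, n_gestures, size=n_windows)
windows = rng.normal(0.0, 1.0, size=(n_windows, n_samples, n_channels))
# Give each gesture a slightly different amplitude so the classifier
# has something to learn from in this toy example.
windows *= (1.0 + 0.3 * labels)[:, None, None]

X = np.array([emg_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels)

# A generic classifier stands in for the learning-from-demonstration
# mapping described in the thesis.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("gesture accuracy:", accuracy_score(y_test, clf.predict(X_test)))

In practice, robustness to shifted sensor placement and per-subject personalization would require training on demonstrations collected from the individual operator, as the thesis investigates; the example above only shows the general window-feature-classifier structure of such a pipeline.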

