Show simple item record

dc.contributor.advisor	Leia Stirling.	en_US
dc.contributor.author	Hall, Sherrie Alyssa	en_US
dc.contributor.other	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.	en_US
dc.date.accessioned	2017-12-05T19:13:32Z
dc.date.available	2017-12-05T19:13:32Z
dc.date.copyright	2017	en_US
dc.date.issued	2017	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/112454
dc.description	Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017.	en_US
dc.description	Cataloged from PDF version of thesis.	en_US
dc.description	Includes bibliographical references (pages 109-115).	en_US
dc.description.abstract	In exploration, scenarios can include a human working alongside, or attached to, a robot. Examples include concepts of Mars human-robot exploration teams, or extravehicular activity (EVA) on the International Space Station (ISS) with an astronaut fixed to the end of a large robot arm for stability. Robots in these scenarios must be able to be directed in real time to react to environmental unknowns. In this work, a fully wearable gesture system was proposed to give the human in the field control of the robot. A wearable gesture interface would preserve user mobility in the field, would allow the user full arm range of motion when not in use, and could be built into the user's clothing to avoid requiring additional equipment for robot control. This work used the Canadarm2 as a case study for exploring implementations and input mappings for robot operations with a gesture interface in complex environments. Manual control of the Canadarm2 is difficult, involving a complex twin-joystick interface. Although astronauts on EVA often stand fixed at the end of the robot arm for stability, EVA astronauts cannot control the arm themselves, instead relying on teleoperation by a second astronaut inside the ISS. A study was conducted with a simulated Canadarm2, comparing three different gesture implementations to the traditional joystick input method. To test gesture control mappings for this case, a gesture interface was needed for operation. The wearable gesture system selected used integrated surface electromyography sensors and inertial measurement units to detect arm and hand gestures. Two gesture mappings permitted multiple simultaneous inputs (multi-input), while the third was a single-input method. One multi-input method was inspired by and aligned with natural human reach, while the other divided controls between different segments of the human arm kinematic chain.
The single-input method exhibited higher workload and reduced efficiency compared with the joystick control group. The gesture mapping inspired by human motor control showed potential for performance equivalent to traditional joystick controls after training. The multi-input mapping less aligned with natural motor control showed a reduced completion rate for certain tasks and higher overall workload compared with the joystick interface. Unlike the joystick controls, the gesture interface was limited to one rotational input at a time. To investigate potential performance effects of such limits on controller degrees of freedom (DOF), a second study was conducted that locked different DOF in the joystick interface. Four joystick interfaces were compared: full multi-axis (with the nominal six DOF), rotation limited (one rotation at a time), translation limited (one translation at a time), and without simultaneous translation/rotation, or "non-bimanual." This study found no statistically significant differences in performance or workload between traditional controls and reduced rotational DOF, which was comparable to the gesture interface mapping. For the non-bimanual condition, there was an increase in task time combined with decreased multi-rotation, suggesting that non-bimanual operation may have potential in training for rotation efficiency. Two different strategies were observed during translation limiting to overcome the inability to visually track, align with, and move toward the target simultaneously. This work highlights the importance of multi-input control for complex robotic teleoperation and provides recommendations for the development of input mappings and implementations of gesture control interfaces, as well as any interface with reduced DOF relative to the operational environment or system being controlled.	en_US
dc.description.statementofresponsibility	by Sherrie Alyssa Hall.	en_US
dc.format.extent	151 pages	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582	en_US
dc.subject	Aeronautics and Astronautics.	en_US
dc.title	Effect of control interface implementation on operation of a multi degree of freedom telerobotic arm	en_US
dc.type	Thesis	en_US
dc.description.degree	Ph. D.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.identifier.oclc	1010807152	en_US

