Show simple item record

dc.contributor.advisor: Leia Stirling
dc.contributor.author: Gibson, Alison Eve
dc.contributor.other: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.date.accessioned: 2018-02-16T20:04:09Z
dc.date.available: 2018-02-16T20:04:09Z
dc.date.copyright: 2017
dc.date.issued: 2017
dc.identifier.uri: http://hdl.handle.net/1721.1/113746
dc.description: Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017.
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 149-158).
dc.description.abstract: The future of human space exploration will involve extra-vehicular activities (EVA) on foreign planetary surfaces (e.g., Mars), an activity with significantly different characteristics from exploration scenarios on Earth. These activities become challenging due to restricted vision and the limitations placed on sensory feedback by altered gravity and the space suit. The bulky, pressurized EVA suit perceptually disconnects human explorers from the hostile environment, increasing navigation workload and the risk of collision while traversing unfamiliar terrain. Given the hazardous nature of this work, there is a critical need to design interfaces that optimize task performance and minimize risk; in particular, an information presentation device that can aid in obstacle avoidance during surface exploration and way-finding. Multi-modal displays are being considered because cues to multiple sensory modalities enhance cognitive processing by drawing on multiple sensory resources, and are believed to communicate risk more efficiently than unimodal cues. This thesis presents a wearable multi-modal interface system to examine human performance when visual, vibratory, and visual-vibratory cues are provided to aid in ground obstacle avoidance. The wearable system applies vibrotactile cues to the feet and visual cues through augmented reality glasses to convey obstacle location and proximity. Obstacle avoidance performance with the multi-modal device was analyzed with human subjects in a motion capture space. Metrics included completion time, subjective workload, head-down time, collisions, and gait parameters. The primary measures of performance were collision frequency and head-down time, as both must be minimized in an operational environment. Results indicate that information displays enhance task performance, with the visual-only display producing the least head-down time of the visual-only, tactile-only, and visual-tactile displays; head-down time was highest in trials without a display. These results have implications for presenting information during physically active tasks such as suited obstacle avoidance.
dc.description.statementofresponsibility: by Alison Eve Gibson.
dc.format.extent: 158 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Aeronautics and Astronautics
dc.title: The design, development, and analysis of a wearable, multi-modal information presentation device to aid astronauts in obstacle avoidance during surface exploration
dc.type: Thesis
dc.description.degree: S.M.
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.identifier.oclc: 1021853520

