Motion perception with conflicting or congruent visual and vestibular cues
Author(s): Rader, Andrew Alan
Massachusetts Institute of Technology. Dept. of Aeronautics and Astronautics.
Advisor(s): Charles M. Oman and Daniel M. Merfeld.
Introduction: On a daily basis we must estimate our position and motion in space by centrally combining noisy, incomplete, and potentially conflicting or ambiguous information from sensory sources (e.g. the vestibular organs, vision, proprioception) and non-sensory sources (e.g. efference copy, cognition). This "spatial orientation" is normally subconscious, and information from multiple sense organs is automatically fused into a unified perception. As late as the early nineteenth century, very little was known about the underlying mechanisms, and some critical questions, such as how the brain resolves the tilt-translation ambiguity, are only now beginning to be understood. The otolith organs function like a three-axis linear accelerometer, responding to the vector difference between gravity and linear acceleration (GIF = g - a). How does the brain separate gravity from linear acceleration? How does the brain combine cues from disparate sensors to derive an overall perception of motion? What happens if these sensors provide conflicting information?

Humans routinely perform balance tasks, sometimes in the absence of visual cues. The inherent complexity of these tasks is evidenced by the wide range of balance pathologies and locomotor difficulties experienced by people with vestibular disorders. Maintaining balance involves stabilizing the body's inverted-pendulum dynamics, in which the center of rotation (at the ankles) is below the center of mass and the vestibular sensors are above the center of rotation (for example, when swaying during standing or walking). This type of swing motion is also encountered in most fixed-wing aircraft and flight simulators, where the pilot sits above the center of roll. Swing motions in which the center of mass and the sensors are below the center of rotation are encountered on a child's swing, and in some high-wing aircraft and helicopters.
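The tilt-translation ambiguity follows directly from GIF = g - a: because the otoliths sense only this vector difference, a static head tilt and a horizontal translation can produce the same lateral signal. A minimal numerical sketch (not from the thesis; a simplified 2-D head frame with illustrative values) makes this concrete:

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def otolith_signal(tilt_rad, accel):
    """Gravito-inertial force (GIF = g - a) sensed in a 2-D head frame.

    tilt_rad: static roll tilt of the head (rad)
    accel:    lateral linear acceleration of the head (m/s^2)
    Returns the (lateral, vertical) specific-force components
    transduced by the otoliths.
    """
    # Gravity expressed in the tilted head frame
    grav = np.array([g * np.sin(tilt_rad), g * np.cos(tilt_rad)])
    # Inertial component due to linear acceleration (lateral axis)
    inertial = np.array([accel, 0.0])
    return grav - inertial

theta = np.radians(10)           # a static 10-degree tilt...
a_equiv = g * np.sin(theta)      # ...and the acceleration that mimics it

tilted = otolith_signal(theta, 0.0)        # pure tilt, no translation
translating = otolith_signal(0.0, -a_equiv)  # pure translation, upright
```

For small tilts the two stimuli are nearly indistinguishable: the lateral GIF components are identical, and the vertical components differ only at second order (g·(1 - cos θ)), which is why additional cues or internal models are needed to resolve the ambiguity.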
Spatial orientation tasks requiring central integration of sensory information are ubiquitous in aerospace. Spatial disorientation, often triggered by unusual visual or flight conditions, is implicated in around 10% of aviation accidents, and many of these are fatal. Simulator training is a key factor in establishing the primacy of instrument-derived flight information over vestibular and other human sensory cues in the absence of reliable visual information. It is therefore important that simulators re-create motion perceptions as accurately as possible. What cues can safely be ignored or replaced with analogous cues? How realistic and consistent must a visual scene be to maintain perceptual fidelity? Spatial orientation is also a critical human factor in spaceflight. Orientation and navigation are impaired by the lack of confirming gravitational cues in microgravity, as sensory cues are misinterpreted and generate incorrect motion perceptions. These errors persist at least until the vestibular and central nervous system pathways adapt to the altered gravity environment; however, human navigation never fully adapts to the three-dimensional frame. There is a wealth of data describing the difficulties with balance, gait, gaze control, and spatial orientation on return to Earth. Post-flight ataxia (gross incoordination of motor movements) is a serious concern for all returning space travelers for at least ten days. This would be an even more serious concern for newly arrived astronauts conducting operations in extraterrestrial environments after a long spaceflight. What motion profiles in a lunar landing simulator on Earth would best prepare astronauts for the real task in an altered gravity environment? Far from being a problem restricted to the human operator, aerospace systems themselves face the same challenge of integrating sensory information for navigation.
Modeling how the brain performs multi-sensory integration has analogies to how aircraft and spacecraft perform this task, and in fact modelers have employed similar techniques. Thus, developments in modeling multi-sensory integration improve our understanding of both the operator and the vehicle. Specifically, this research is concerned with how human motion perception is affected during swing motion when vestibular information is incomplete or ambiguous, or when conflicting visual information is provided.
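The analogy to vehicle navigation can be illustrated with the simplest shared technique: inverse-variance (Kalman-style) weighting of independent noisy cues, the same rule a navigation filter applies to its sensors. This sketch is illustrative only; the noise levels and the single-state setup are assumptions, not the thesis's model:

```python
import numpy as np

rng = np.random.default_rng(0)

true_velocity = 1.0                        # hypothetical self-motion state
sigma_vestibular, sigma_visual = 0.5, 0.2  # assumed cue noise levels

# One noisy measurement from each modality
z_vest = true_velocity + rng.normal(0, sigma_vestibular)
z_vis = true_velocity + rng.normal(0, sigma_visual)

# Minimum-variance fusion: weight each cue by its inverse variance,
# exactly as a navigation filter weights disparate sensors.
w_vest = 1 / sigma_vestibular**2
w_vis = 1 / sigma_visual**2
estimate = (w_vest * z_vest + w_vis * z_vis) / (w_vest + w_vis)
fused_sigma = np.sqrt(1 / (w_vest + w_vis))
```

The fused estimate's uncertainty is always smaller than that of either cue alone, which is one reason both brains and flight computers benefit from integrating multiple sensors, and why a reliability-weighted scheme predicts that conflicting cues shift, rather than simply override, the resulting percept.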
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2009. "August 2009." Cataloged from PDF version of thesis. Includes bibliographical references (p. 93-104).