A multisensory observer model for human spatial orientation perception
Author(s)
Newman, Michael C. (Michael Charles)
Other Contributors
Massachusetts Institute of Technology. Dept. of Aeronautics and Astronautics.
Advisor
Charles M. Oman.
Abstract
Quantitative "observer" models for spatial orientation and eye movements have been developed based on 1-G data from humans and animals (e.g. Oman 1982, 1991; Merfeld et al. 1993, 2002; Haslwanter 2000; Vingerhoets 2006). These models assume that the CNS estimates "down", head angular velocity, and linear acceleration utilizing an internal model of gravity and sense-organ dynamics, continuously updated by sensory-conflict signals. CNS function is thus analogous to a Luenberger state observer in engineering systems. Using a relatively small set of free parameters, observer orientation models capture the main features of experimental data for a variety of different motion stimuli. We developed a Matlab/Simulink-based Observer model, including Excel spreadsheet input capability and a GUI, to make the model accessible to less expert Matlab users. Orientation and motion predictions can be plotted in 2D or visualized in 3D using virtual avatars. Our Observer's internal model now computes azimuth and pseudointegrates linear motion in an allocentric reference frame (perceived north-east-down). The model mimics the large perceptual errors for vertical motion observed experimentally. It retains the well-validated "vestibular core" of the Merfeld perceptual model and predicts responses to angular velocity and linear acceleration steps, dumping, fixed-radius centrifugation, roll tilt, and OVAR. The model was further extended to include static and dynamic visual sensory information from four independent visual sensors (visual velocity, position, angular velocity, and gravity/"down"). The visual additions were validated against the Borah et al. (1978) Kalman filter simulation results and validation data sets (Earth-vertical constant-velocity rotation in the light, somatogravic illusion in the light, and linear and circular vection).
The model predicts that circular vection should have two dynamic components, and reproduces the recent finding of Tokumaru et al. (1998) that visual cues influence the somatogravic illusion in ways not accounted for by the Borah model. The model also correctly predicts both the direction of the Coriolis illusion and the magnitude of the resulting tilt illusion. It further predicts that the direction and mechanism of the pseudo-Coriolis illusion are fundamentally different from Coriolis, a prediction verified by means of a pilot experiment. Finally, the model accounts for the dynamics of astronaut post-flight tilt-gain and OTTR vertigos in ways not explained by previous static analyses (e.g. Merfeld, 2003). Supported by the National Space Biomedical Research Institute through NASA NCC 9-58.
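The abstract's central analogy is that the CNS acts like a Luenberger state observer: an internal model of the body and sensors is driven by a "sensory conflict" signal (measured minus predicted sensor output) scaled by a gain. The sketch below is a minimal, generic illustration of that estimator structure, not the thesis's vestibular model; the double-integrator plant, gain values, and time step are hypothetical choices for demonstration.

```python
import numpy as np

# Plant: x' = A x + B u, measurement y = C x (forward-Euler, step dt).
# A double integrator stands in for "true" self-motion state
# (position, velocity); only position is directly sensed.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0], [1.0]])   # observer gain; (A - L C) has stable eigenvalues
dt = 0.01

x = np.array([[1.0], [0.0]])   # true state (unknown to the observer)
xh = np.zeros((2, 1))          # observer's internal estimate

for _ in range(2000):
    u = np.array([[0.0]])
    y = C @ x                  # sensor measurement
    conflict = y - C @ xh      # "sensory conflict": measured minus expected
    # Internal model integrated forward, corrected by the conflict signal
    xh = xh + dt * (A @ xh + B @ u + L @ conflict)
    x = x + dt * (A @ x + B @ u)

err = float(np.linalg.norm(x - xh))
print(err)                     # estimation error after 20 s of simulated time
```

With this gain the error dynamics are governed by (A − LC), whose eigenvalues are both −1, so the estimate converges to the true state regardless of the initial mismatch; the perceptual models discussed in the abstract use the same corrective structure but with vestibular and visual sensor dynamics in the loop.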
Description
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2009.
Includes bibliographical references (p. 37-41).
Date issued
2009
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Publisher
Massachusetts Institute of Technology
Massachusetts Institute of Technology
Keywords
Aeronautics and Astronautics.