DSpace@MIT

Tracking of Eye Movement Features for Individualized Assessment of Neurocognitive State Using Mobile Devices

Author(s)
Lai, Hsin-Yu
Download
Thesis PDF (6.668 MB)
Advisor
Sze, Vivienne
Heldt, Thomas
Terms of use
In Copyright - Educational Use Permitted Copyright MIT http://rightsstatements.org/page/InC-EDU/1.0/
Metadata
Show full item record
Abstract
The ability to objectively track neurocognitive state is important in a wide variety of settings and conditions. For example, with current clinical techniques, it is difficult to assess the state of a patient's neurodegenerative disease (e.g., Alzheimer's) accurately and frequently. The most widely used tests are qualitative, variable, and performed only intermittently, exposing the need for quantitative, accurate, and non-obtrusive metrics to track disease progression. Clinical studies have shown that saccade latency (an eye-movement measure of reaction time) and error rate (the proportion of eye movements in the wrong direction) are significantly affected by neurocognitive state.

We propose a novel system that measures and tracks these features outside the clinical environment using videos recorded with a mobile device. Attaining this goal is challenging, given variable environments and the absence of infrared illumination, high-speed cameras, and chinrests. We take several steps to overcome these challenges and thereby enable tracking of eye-movement features in large cohorts of subjects. We designed an app that guides subjects to record their eye movements at a proper distance in a well-lit environment. Through this large-scale data collection, we gathered over 6,800 videos from 80 subjects across the adult age spectrum, roughly two orders of magnitude more than in most previous studies. To measure eye-movement features from these recordings, we used a deep convolutional neural network for gaze estimation and model-based methods to measure saccade latency and error rate. With frequent measurements of these features, we then designed an individualized longitudinal model using a Gaussian process that learns individual characteristics and the correlations across these eye-movement features.
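To make the saccade-latency measurement concrete, here is a minimal sketch of one common approach: detecting saccade onset as the first time the gaze velocity exceeds a threshold after the stimulus appears. The frame rate, velocity threshold, and units below are illustrative assumptions; the thesis uses a more sophisticated model-based method, not this detector.

```python
import numpy as np

def saccade_latency(gaze_x, fs, stim_onset_s, vel_thresh=30.0):
    """Estimate saccade latency from a horizontal gaze trace.

    gaze_x: gaze angle per frame (degrees); fs: frame rate (Hz);
    stim_onset_s: time (s) at which the visual stimulus appeared;
    vel_thresh: velocity threshold (deg/s) marking saccade onset.
    Returns latency in ms after stimulus onset, or None if no
    saccade is detected.
    """
    velocity = np.gradient(gaze_x) * fs            # deg/s per frame
    onset_idx = int(stim_onset_s * fs)             # first post-stimulus frame
    post = np.abs(velocity[onset_idx:])
    above = np.nonzero(post > vel_thresh)[0]       # frames exceeding threshold
    if above.size == 0:
        return None
    return above[0] / fs * 1000.0                  # frames -> milliseconds
```

For example, with a trace sampled at 100 Hz where the eye starts moving 0.2 s after the stimulus, the function returns a latency near 200 ms; the achievable resolution is limited by the camera's frame rate, one of the challenges of working without high-speed cameras.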
With a system that measures eye-movement features on a much finer timescale and in a broader population than previously possible, our research opens the possibility of determining whether these features can help track neurocognitive state more frequently and accurately.
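The individualized longitudinal model mentioned above rests on standard Gaussian-process regression: noisy per-session measurements of a feature (e.g., saccade latency) are smoothed over time, with uncertainty estimates at unobserved days. The sketch below is a generic single-feature GP with an RBF kernel and assumed hyperparameters; it is not the thesis's multi-feature model, which additionally learns correlations across eye-movement features.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=10.0, variance=1.0):
    """Squared-exponential kernel over 1-D inputs (e.g., days)."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(t_train, y_train, t_test, noise_var=25.0):
    """GP regression of one feature (e.g., saccade latency in ms) over time.

    t_train, y_train: observation times (days) and measured values;
    t_test: times at which to predict; noise_var: measurement noise variance.
    Returns (posterior mean, posterior std) at t_test.
    """
    K = rbf_kernel(t_train, t_train) + noise_var * np.eye(len(t_train))
    K_s = rbf_kernel(t_train, t_test)
    K_ss = rbf_kernel(t_test, t_test)
    mu_y = y_train.mean()                          # constant prior mean
    alpha = np.linalg.solve(K, y_train - mu_y)
    mean = mu_y + K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The posterior standard deviation is what makes such a model useful for tracking: it shrinks near days with many recordings and grows during gaps, so a clinician could distinguish a genuine shift in latency from measurement noise.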
Date issued
2021-09
URI
https://hdl.handle.net/1721.1/139923
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
