Learning driver gaze
Author(s)
Li, Anying, M. Eng. Massachusetts Institute of Technology
Full printable version (21.44 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Wojciech Matusik and Antonio Torralba.
Abstract
Driving is a singularly complex task that humans manage to perform successfully day in and day out, guided only by what their eyes can see. Given how prevalent, complex, and dangerous driving is, it is surprising that we do not really understand how drivers actually use vision to drive. The release of DrEyeVe [1], a large-scale driving dataset with eye-tracking data, makes analyzing the role of vision feasible. In this thesis, we 1) study the impact of various external features on driver attention, and 2) present a two-path deep-learning model that exploits both static and dynamic information to model driver gaze. Our model shows promising results against state-of-the-art saliency models, especially on sequences in which the driver is not simply looking straight ahead at the road. This model enables us to estimate important regions of which the driver should be aware, and could potentially allow an automatic driving assistant to alert drivers to road hazards they have not yet seen.
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 65-69).
Date issued
2017
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.