Learning driver gaze
Author(s): Li, Anying, M. Eng., Massachusetts Institute of Technology
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor(s): Wojciech Matusik and Antonio Torralba.
Driving is a singularly complex task that humans manage to perform successfully day in and day out, guided only by what their eyes can see. Given how prevalent, complex, and dangerous driving is, it is surprising how little we understand about how drivers actually use vision to drive. The release of DrEyeVe, a large-scale driving dataset with eye-tracking data, makes analyzing the role of vision feasible. In this thesis, we 1) study the impact of various external features on driver attention, and 2) present a two-path deep-learning model that exploits both static and dynamic information to model driver gaze. Our model shows promising results against state-of-the-art saliency models, especially on sequences in which the driver is not simply looking straight ahead at the road. The model enables us to estimate important regions that the driver should be aware of, and could allow an automatic driving assistant to alert drivers to road hazards they have not yet seen.
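The thesis itself is not reproduced in this record, but the abstract's two-path idea — a static branch that looks at a single frame and a dynamic branch that looks at temporal change, fused into one attention map — can be illustrated with a toy sketch. Everything below (function names, the hand-crafted cues, the linear fusion rule `alpha`) is an illustrative assumption, not the thesis's actual deep-learning architecture:

```python
import numpy as np

def static_path(frame):
    # Placeholder for the static branch: crude "saliency" as
    # absolute deviation from the frame's mean intensity.
    return np.abs(frame - frame.mean())

def dynamic_path(prev_frame, frame):
    # Placeholder for the dynamic branch: crude motion cue as
    # absolute temporal difference between consecutive frames.
    return np.abs(frame - prev_frame)

def two_path_gaze_map(prev_frame, frame, alpha=0.5):
    # Fuse the two cues linearly and normalize to a probability-like
    # attention map over the image (alpha is an assumed mixing weight).
    combined = alpha * static_path(frame) + (1 - alpha) * dynamic_path(prev_frame, frame)
    total = combined.sum()
    return combined / total if total > 0 else combined

# Toy usage on random "frames" standing in for dashcam video.
rng = np.random.default_rng(0)
prev_frame = rng.random((4, 4))
frame = rng.random((4, 4))
gaze = two_path_gaze_map(prev_frame, frame)
```

In the thesis the two branches are learned networks rather than hand-crafted contrast and difference maps; the sketch only shows the shape of the fusion, not the model.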
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 65-69).