Large scale video action understanding
Author(s): Yan, Tom, M.Eng., Massachusetts Institute of Technology
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
The goal of this project is to build a large-scale video dataset, called Moments, and to train existing and novel models for action recognition. To help automate video collection and annotation selection, I trained convolutional neural network models to estimate the likelihood that a desired action appears in a video clip. Selecting clips that are highly likely to contain the target action for annotation makes the overall process more efficient and yields more usable labels. Once a sizable dataset had been amassed, I investigated new multi-modal models that exploit the different signals (spatial, temporal, auditory) present in a video. I also conducted preliminary experiments in several promising directions that Moments opens up, including multi-label training. Lastly, I trained baseline models on Moments to calibrate the performance of existing techniques. After training, I diagnosed the models' shortcomings and visualized the videos they found particularly difficult. The difficulty largely arises from the great variety in quality, perspective, and subjects found in Moments videos, which highlights the challenging nature of the dataset and its value to the research community.
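The clip-selection step described above can be illustrated with a minimal sketch. The function names, scores, and threshold here are hypothetical, not taken from the thesis; the thesis's actual CNN produces per-action scores, and this sketch only shows the downstream step of keeping clips whose estimated probability of containing the desired action is high enough to be worth sending to annotators.

```python
import math

def softmax(logits):
    # Numerically stable softmax over per-class scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def select_clips(clip_scores, action_idx, threshold=0.5):
    """Keep clips whose estimated probability of showing the desired
    action exceeds the threshold, so annotators mostly review
    relevant candidates (higher yield per annotation hour).

    clip_scores: list of (clip_id, per-class logits) pairs
    action_idx:  index of the desired action class
    """
    selected = []
    for clip_id, logits in clip_scores:
        prob = softmax(logits)[action_idx]
        if prob >= threshold:
            selected.append((clip_id, prob))
    # Present the highest-confidence clips first.
    selected.sort(key=lambda pair: -pair[1])
    return selected

# Hypothetical usage: two clips scored against two action classes.
candidates = [("clip_a", [2.0, 0.1]), ("clip_b", [0.0, 1.5])]
print(select_clips(candidates, action_idx=0, threshold=0.5))
```

Raising the threshold trades annotation volume for precision: fewer clips reach annotators, but a larger fraction of them contain the wanted action.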
Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 37-39).