Show simple item record

dc.contributor.advisor    Oliva, Aude
dc.contributor.author    Pan, Bowen
dc.date.accessioned    2023-07-31T19:56:01Z
dc.date.available    2023-07-31T19:56:01Z
dc.date.issued    2023-06
dc.date.submitted    2023-07-13T14:26:22.379Z
dc.identifier.uri    https://hdl.handle.net/1721.1/151649
dc.description.abstract    Recognizing real-world videos is a challenging task that typically relies on deep learning models, which in turn demand extensive computational resources to achieve robust recognition. One of the main challenges with real-world videos is the high correlation of information across frames, which produces redundancy in the temporal or channel feature maps of the model, or both. The amount of redundancy largely depends on the dynamics and events captured in the video: static videos typically exhibit more temporal redundancy, while videos focusing on objects tend to exhibit more channel redundancy. To address this challenge, we propose a novel approach that reduces redundancy by using an input-dependent policy to determine which features are necessary along both the temporal and channel dimensions. By doing so, we identify the most relevant information for each frame and reduce the overall computational load. After computing the necessary features, we reconstruct the remaining redundant features from them using cheap linear operations. This not only reduces the computational cost of the model but also keeps the capacity of the original model intact. Moreover, the proposed approach has the potential to improve the accuracy of real-world video recognition by reducing the overfitting caused by redundant information across frames. By focusing on the most relevant information, our model can better capture the unique characteristics of each video, resulting in more accurate predictions. Overall, our approach represents a significant step forward in efficient real-world video recognition and can enable the development of more efficient and accurate deep learning models for this task.
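
The abstract describes an input-dependent policy that selects which features are necessary and reconstructs the skipped, redundant ones with cheap linear operations. The PyTorch sketch below illustrates that idea for the channel dimension only; it is not the thesis's actual architecture, and all names (ChannelGate, RedundancyReducedConv, keep_ratio) are illustrative assumptions. For clarity it zeroes out the skipped channels with a straight-through top-k mask rather than actually skipping their computation; a real implementation would avoid computing them at all.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Input-dependent policy: scores the output channels per sample
    and keeps the top-k (hypothetical name, for illustration only)."""
    def __init__(self, in_channels, out_channels, keep_ratio=0.5):
        super().__init__()
        self.keep = max(1, int(out_channels * keep_ratio))
        self.fc = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        # Global-average-pool each frame, then score every output channel.
        scores = self.fc(x.mean(dim=(-2, -1)))               # (N, C_out)
        topk = scores.topk(self.keep, dim=1).indices
        hard = torch.zeros_like(scores).scatter_(1, topk, 1.0)
        soft = torch.sigmoid(scores)
        # Straight-through estimator: hard 0/1 mask in the forward pass,
        # gradients flow through the soft sigmoid in the backward pass.
        return hard + (soft - soft.detach())

class RedundancyReducedConv(nn.Module):
    """Computes only the policy-selected ("necessary") channels with a full
    3x3 conv and reconstructs the skipped ones via a cheap 1x1 linear map."""
    def __init__(self, in_channels, out_channels, keep_ratio=0.5):
        super().__init__()
        self.gate = ChannelGate(in_channels, out_channels, keep_ratio)
        self.full = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.cheap = nn.Conv2d(out_channels, out_channels, 1, bias=False)

    def forward(self, x):
        mask = self.gate(x)[:, :, None, None]                # (N, C_out, 1, 1)
        necessary = self.full(x) * mask                      # skipped channels zeroed
        reconstructed = self.cheap(necessary) * (1 - mask)   # filled back in cheaply
        return necessary + reconstructed

# Toy usage on a batch of video frames flattened to (N*T, C, H, W).
frames = torch.randn(8, 16, 32, 32)
layer = RedundancyReducedConv(in_channels=16, out_channels=32, keep_ratio=0.5)
print(layer(frames).shape)  # torch.Size([8, 32, 32, 32])
```

A temporal policy would work the same way, scoring frames instead of channels. Because the cheap reconstruction path fills every skipped position, the layer's output dimensionality, and hence the model's capacity, is unchanged, matching the claim in the abstract.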
dc.publisher    Massachusetts Institute of Technology
dc.rights    In Copyright - Educational Use Permitted
dc.rights    Copyright retained by author(s)
dc.rights.uri    https://rightsstatements.org/page/InC-EDU/1.0/
dc.title    Dynamic Neural Network for Efficient Video Recognition
dc.type    Thesis
dc.description.degree    S.M.
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree    Master
thesis.degree.name    Master of Science in Electrical Engineering and Computer Science

