Anticipating Visual Representations from Unlabeled Video
Author(s)
Vondrick, Carl; Pirsiavash, Hamed; Torralba, Antonio
Download
Torralba_Anticipating visual.pdf (3.554 MB)
Open Access Policy
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet can be computed automatically. We then apply recognition algorithms to our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.
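The core idea above (regress the future frame's visual representation from the current frame, then run a recognition classifier on the prediction) can be illustrated with a minimal PyTorch-style sketch. This is not the authors' released code: the feature dimension, network sizes, and the plain MSE regression loss are illustrative assumptions (the paper itself uses a pretrained CNN representation such as fc7 and a multi-modal variant of the regression).

```python
# Minimal sketch (illustrative, not the authors' implementation):
# learn a predictor from the representation of the current frame to the
# representation of a frame one second in the future, using unlabeled video.
import torch
import torch.nn as nn

FEAT_DIM = 4096  # fc7-like feature size; an assumption for this sketch

# Stand-in feature extractor phi(frame) -> representation.
# In the paper this role is played by a fixed pretrained CNN.
feature_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=7, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, FEAT_DIM),
)

# Predictor g: current representation -> anticipated future representation.
predictor = nn.Sequential(
    nn.Linear(FEAT_DIM, 2048), nn.ReLU(),
    nn.Linear(2048, FEAT_DIM),
)

optimizer = torch.optim.SGD(predictor.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # single-mode regression; a simplification of the paper's loss

# Dummy batch of (current frame, frame one second later) pairs from unlabeled video.
frames_now = torch.randn(8, 3, 224, 224)
frames_future = torch.randn(8, 3, 224, 224)

with torch.no_grad():                       # targets come from the fixed feature network
    target_feat = feature_net(frames_future)

pred_feat = predictor(feature_net(frames_now).detach())
loss = loss_fn(pred_feat, target_feat)
loss.backward()
optimizer.step()

# At test time, a recognition classifier trained on labeled representations is
# applied to the predicted future representation to anticipate actions or objects.
```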
Date issued
2016-12
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Vondrick, Carl, Hamed Pirsiavash, and Antonio Torralba. “Anticipating Visual Representations from Unlabeled Video.” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 27-30 June 2016, Las Vegas, Nevada, IEEE, 2016, pp. 98-106.
Version: Author's final manuscript
ISBN
978-1-4673-8851-1