Generating videos with scene dynamics
Author(s)
Vondrick, Carl; Pirsiavash, Hamed; Torralba, Antonio
Abstract
We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.
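As a concrete illustration of the foreground/background untangling the abstract describes, the following is a minimal PyTorch sketch, not the published architecture: the layer sizes, the 16-frame 32x32 output resolution, and the TwoStreamGenerator name are illustrative assumptions. A 3D-convolutional foreground stream generates a video and a soft mask, a 2D-convolutional background stream generates one static frame replicated across time, and the output video is composed as mask * foreground + (1 - mask) * background.

    # Illustrative sketch (not the paper's exact architecture): a two-stream
    # generator that composes a moving foreground over a static background.
    import torch
    import torch.nn as nn

    class TwoStreamGenerator(nn.Module):
        def __init__(self, z_dim=100):
            super().__init__()
            # Foreground stream: transposed 3D convolutions upsample the
            # latent code into a spatio-temporal feature volume.
            self.fg = nn.Sequential(
                nn.ConvTranspose3d(z_dim, 256, kernel_size=(2, 4, 4)),
                nn.BatchNorm3d(256), nn.ReLU(),
                nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),
                nn.BatchNorm3d(128), nn.ReLU(),
                nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),
                nn.BatchNorm3d(64), nn.ReLU(),
            )
            self.to_video = nn.ConvTranspose3d(64, 3, 4, stride=2, padding=1)
            self.to_mask = nn.ConvTranspose3d(64, 1, 4, stride=2, padding=1)
            # Background stream: transposed 2D convolutions produce a single
            # static frame that is later replicated across time.
            self.bg = nn.Sequential(
                nn.ConvTranspose2d(z_dim, 256, kernel_size=4),
                nn.BatchNorm2d(256), nn.ReLU(),
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
                nn.BatchNorm2d(128), nn.ReLU(),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
                nn.BatchNorm2d(64), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
                nn.Tanh(),
            )

        def forward(self, z):
            h = self.fg(z.view(z.size(0), -1, 1, 1, 1))
            fg = torch.tanh(self.to_video(h))          # (B, 3, T, H, W) video
            mask = torch.sigmoid(self.to_mask(h))      # (B, 1, T, H, W) in [0, 1]
            bg = self.bg(z.view(z.size(0), -1, 1, 1))  # (B, 3, H, W) static frame
            bg = bg.unsqueeze(2).expand_as(fg)         # replicate frame over time
            return mask * fg + (1 - mask) * bg         # per-pixel composition

    z = torch.randn(2, 100)
    print(TwoStreamGenerator()(z).shape)  # torch.Size([2, 3, 16, 32, 32])

The mask lets static regions be explained by the 2D background stream, so the 3D stream only has to model the moving parts; training this generator adversarially against a spatio-temporal convolutional discriminator yields the video model the abstract evaluates.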
Presented at a poster session of the Conference on Neural Information Processing Systems (NIPS 2016), December 5-10, 2016, Barcelona, Spain
Date issued
2016
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Advances in Neural Information Processing Systems
Citation
Vondrick, Carl, Hamed Pirsiavash, and Antonio Torralba. "Generating videos with scene dynamics." Advances in Neural Information Processing Systems 29 (2016). https://papers.nips.cc/paper/6194-generating-videos-with-scene-dynamics ©2016 Author(s)
Version: Final published version