Show simple item record

dc.contributor.author: Kim, Seung Wook
dc.contributor.author: Zhou, Yuhao
dc.contributor.author: Philion, Jonah
dc.contributor.author: Torralba, Antonio
dc.contributor.author: Fidler, Sanja
dc.date.accessioned: 2021-11-05T19:31:21Z
dc.date.available: 2021-11-05T19:31:21Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/137598
dc.description.abstract: © 2020 IEEE. Simulation is a crucial component of any robotic system. In order to simulate correctly, we need to write complex rules of the environment: how dynamic agents behave, and how the actions of each agent affect the behavior of others. In this paper, we aim to learn a simulator by simply watching an agent interact with an environment. We focus on graphics games as a proxy for the real environment. We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training. Given a key pressed by the agent, GameGAN 'renders' the next screen using a carefully designed generative adversarial network. Our approach offers key advantages over existing work: we design a memory module that builds an internal map of the environment, allowing the agent to return to previously visited locations with high visual consistency. In addition, GameGAN is able to disentangle the static and dynamic components within an image, making the behavior of the model more interpretable and relevant for downstream tasks that require explicit reasoning over dynamic elements. This enables many interesting applications, such as swapping different components of the game to build new games that do not exist. We will release the code and trained model, enabling human players to play generated games and their variations with our GameGAN. [en_US]
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: 10.1109/CVPR42600.2020.00131 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: Learning to Simulate Dynamic Environments With GameGAN [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Kim, Seung Wook, Zhou, Yuhao, Philion, Jonah, Torralba, Antonio and Fidler, Sanja. 2020. "Learning to Simulate Dynamic Environments With GameGAN." Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.relation.journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [en_US]
dc.eprint.version: Original manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2021-01-28T15:48:06Z
dspace.orderedauthors: Kim, SW; Zhou, Y; Philion, J; Torralba, A; Fidler, S [en_US]
dspace.date.submission: 2021-01-28T15:48:10Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
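The abstract describes a simulation loop in which a learned dynamics engine updates an internal state from the agent's key press and a rendering engine decodes that state into the next screen. As a rough illustration only (not the authors' code or API; all names, shapes, and the linear/tanh update are assumptions, and the memory module and GAN training are omitted), that loop can be sketched as:

```python
import numpy as np

# Hypothetical sketch of an action-conditioned next-frame loop:
# hidden state -> updated by key press -> decoded into a frame.
rng = np.random.default_rng(0)
H, A, F = 16, 4, 64            # hidden size, number of keys, flat frame size

W_h = rng.normal(scale=0.1, size=(H, H))   # state-transition weights
W_a = rng.normal(scale=0.1, size=(H, A))   # action-conditioning weights
W_r = rng.normal(scale=0.1, size=(F, H))   # "rendering" projection

def dynamics_engine(h, a):
    """Next hidden state from the previous state and a one-hot key press."""
    return np.tanh(W_h @ h + W_a @ a)

def rendering_engine(h):
    """Decode the hidden state into a flattened frame with values in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(W_r @ h)))

h = np.zeros(H)
frames = []
for t in range(3):
    a = np.eye(A)[t % A]       # the key pressed at step t
    h = dynamics_engine(h, a)
    frames.append(rendering_engine(h))
```

In the paper's actual architecture the state update is trained adversarially against real gameplay, and an external memory lets the model re-render previously visited locations consistently; this toy loop only mirrors the interface, not the learning.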

