dc.contributor.author | Kim, Seung Wook | |
dc.contributor.author | Zhou, Yuhao | |
dc.contributor.author | Philion, Jonah | |
dc.contributor.author | Torralba, Antonio | |
dc.contributor.author | Fidler, Sanja | |
dc.date.accessioned | 2021-11-05T19:31:21Z | |
dc.date.available | 2021-11-05T19:31:21Z | |
dc.date.issued | 2020 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/137598 | |
dc.description.abstract | © 2020 IEEE. Simulation is a crucial component of any robotic system. In order to simulate correctly, we need to write complex rules of the environment: how dynamic agents behave, and how the actions of each of the agents affect the behavior of others. In this paper, we aim to learn a simulator by simply watching an agent interact with an environment. We focus on graphics games as a proxy of the real environment. We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training. Given a key pressed by the agent, GameGAN 'renders' the next screen using a carefully designed generative adversarial network. Our approach offers key advantages over existing work: we design a memory module that builds an internal map of the environment, allowing the agent to return to previously visited locations with high visual consistency. In addition, GameGAN is able to disentangle static and dynamic components within an image, making the behavior of the model more interpretable and relevant for downstream tasks that require explicit reasoning over dynamic elements. This enables many interesting applications, such as swapping different components of the game to build new games that do not exist. We will release the code and trained model, enabling human players to play generated games and their variations with our GameGAN. | en_US
dc.language.iso | en | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.relation.isversionof | 10.1109/CVPR42600.2020.00131 | en_US |
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
dc.source | arXiv | en_US |
dc.title | Learning to Simulate Dynamic Environments With GameGAN | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Kim, Seung Wook, Zhou, Yuhao, Philion, Jonah, Torralba, Antonio and Fidler, Sanja. 2020. "Learning to Simulate Dynamic Environments With GameGAN." Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. | |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | |
dc.relation.journal | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | en_US |
dc.eprint.version | Original manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dc.date.updated | 2021-01-28T15:48:06Z | |
dspace.orderedauthors | Kim, SW; Zhou, Y; Philion, J; Torralba, A; Fidler, S | en_US |
dspace.date.submission | 2021-01-28T15:48:10Z | |
mit.license | OPEN_ACCESS_POLICY | |
mit.metadata.status | Authority Work and Publication Information Needed | en_US |