dc.contributor.author | Wang, Yanwei | |
dc.contributor.author | Shah, Julie | |
dc.date.accessioned | 2022-06-15T14:42:32Z | |
dc.date.available | 2022-06-15T14:42:32Z | |
dc.date.issued | 2022-06-15 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/143430 | |
dc.description.abstract | Foundation models, which are large neural networks trained on massive datasets, have shown impressive generalization in both the language and vision domains. While fine-tuning foundation models for new tasks at test time is impractical due to the billions of parameters in those models, prompts have been employed to re-purpose such models for test-time tasks on the fly. In this report, we ideate an equivalent foundation model for motion generation and the corresponding prompt formats that can condition such a model. The central goal is to learn a behavior prior for motion generation that can be re-used in a novel scene. | en_US |
dc.description.sponsorship | CSAIL NSF MI project – 6939398 | en_US |
dc.language.iso | en_US | en_US |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | * |
dc.subject | Robot Learning, Large Language Models, Motion Generation | en_US |
dc.title | Universal Motion Generator: Trajectory Autocompletion by Motion Prompts | en_US |
dc.type | Working Paper | en_US |