
dc.contributor.author: Wang, Yanwei
dc.contributor.author: Shah, Julie
dc.date.accessioned: 2022-06-15T14:42:32Z
dc.date.available: 2022-06-15T14:42:32Z
dc.date.issued: 2022-06-15
dc.identifier.uri: https://hdl.handle.net/1721.1/143430
dc.description.abstract: Foundation models, which are large neural networks trained on massive datasets, have shown impressive generalization in both the language and vision domains. While fine-tuning foundation models for new tasks at test time is impractical due to the billions of parameters in these models, prompts have been employed to re-purpose models for test-time tasks on the fly. In this report, we ideate an equivalent foundation model for motion generation and the corresponding prompt formats that can condition such a model. The central goal is to learn a behavior prior for motion generation that can be re-used in a novel scene.
dc.description.sponsorship: CSAIL NSF MI project – 6939398
dc.language.iso: en_US
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.subject: Robot Learning, Large Language Models, Motion Generation
dc.title: Universal Motion Generator: Trajectory Autocompletion by Motion Prompts
dc.type: Working Paper

