| Field | Value |
| --- | --- |
| dc.contributor.advisor | Kaelbling, Leslie P. |
| dc.contributor.advisor | Lozano-Pérez, Tomás |
| dc.contributor.author | Shen, William |
| dc.date.accessioned | 2023-11-02T20:14:56Z |
| dc.date.available | 2023-11-02T20:14:56Z |
| dc.date.issued | 2023-09 |
| dc.date.submitted | 2023-09-21T14:26:20.897Z |
| dc.identifier.uri | https://hdl.handle.net/1721.1/152770 |
| dc.description.abstract | Self-supervised and language-supervised image models contain rich knowledge of the world that is important for generalization. Many robotic tasks, however, require a detailed understanding of 3D geometry, which is often lacking in 2D image features. This work bridges this 2D-to-3D gap for robotic manipulation by leveraging distilled feature fields to combine accurate 3D geometry with rich semantics from 2D foundation models. We present a few-shot learning method for 6-DOF grasping and placing that harnesses these strong spatial and semantic priors to achieve in-the-wild generalization to unseen objects. Using features distilled from a vision-language model, CLIP, we present a way to designate novel objects for manipulation via free-text natural language, and demonstrate its ability to generalize to unseen expressions and novel categories of objects. |
| dc.publisher | Massachusetts Institute of Technology |
| dc.rights | In Copyright - Educational Use Permitted |
| dc.rights | Copyright retained by author(s) |
| dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ |
| dc.title | Neural Feature Fields for Language-Guided Robot Manipulation |
| dc.type | Thesis |
| dc.description.degree | S.M. |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
| dc.identifier.orcid | https://orcid.org/0009-0004-0227-1071 |
| mit.thesis.degree | Master |
| thesis.degree.name | Master of Science in Electrical Engineering and Computer Science |
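The abstract above describes designating objects for manipulation via free-text language using CLIP features distilled into a 3D feature field. As a rough illustration of that kind of language query, and not the thesis's actual implementation, the sketch below ranks 3D points by cosine similarity between their distilled features and a CLIP text embedding. The function `query_feature_field`, its arguments, and the precomputed `point_features` tensor are all assumptions for this sketch; only the OpenAI `clip` package calls are real API.

```python
# Minimal sketch: selecting 3D points in a distilled feature field with a
# free-text CLIP query. The feature field itself (per-point CLIP features
# distilled from posed 2D views) is assumed to be precomputed.
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)  # text encoder, D = 512

def query_feature_field(points, point_features, text_query, top_k=100):
    """Rank sampled 3D points by similarity to a natural-language query.

    points:         (N, 3) xyz coordinates sampled from the field (assumed).
    point_features: (N, 512) CLIP features distilled into 3D (assumed).
    """
    tokens = clip.tokenize([text_query]).to(device)
    with torch.no_grad():
        text_emb = model.encode_text(tokens).float()           # (1, 512)
    # Cosine similarity: normalize both sides, then take dot products.
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    feats = point_features / point_features.norm(dim=-1, keepdim=True)
    sims = (feats @ text_emb.T).squeeze(-1)                    # (N,)
    idx = sims.topk(top_k).indices
    return points[idx], sims[idx]
```

A caller could use the highest-similarity points to localize the queried object, e.g. `query_feature_field(pts, feats, "a blue mug")`; the few-shot 6-DOF grasping and placing method summarized in the abstract builds on such semantic queries but is not reproduced in this sketch.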