| dc.contributor.advisor | Daniela Rus. | en_US |
| dc.contributor.author | Bradford, Eric Mahathvan. | en_US |
| dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
| dc.date.accessioned | 2021-05-24T19:40:15Z | |
| dc.date.available | 2021-05-24T19:40:15Z | |
| dc.date.copyright | 2021 | en_US |
| dc.date.issued | 2021 | en_US |
| dc.identifier.uri | https://hdl.handle.net/1721.1/130681 | |
| dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021 | en_US |
| dc.description | Cataloged from the official PDF of thesis. | en_US |
| dc.description | Includes bibliographical references (pages 67-68). | en_US |
| dc.description.abstract | The demand for automation and robots is increasing rapidly across many domains, which implicitly requires a simple and robust design process. The current workflow usually consists of designing and simulating robots on a computer and then attempting to recreate them in the real world. However, this method abstracts away important attributes of the physical terrain, which can produce inaccurate simulations. The workflow also creates a linear bottleneck, as designers must iterate between their digital simulations and the real-world environment many times, an inefficient use of time, money, and resources. We propose a new workflow, and our contributions toward a mixed-reality tool, to design, simulate, and eventually produce a class of link-based robots with ground locomotion. Users first scan the real-world environment where the desired robot will be deployed. This process is executed through a mixed-reality headset, and the layout is quickly registered on the computer. After the landscape is established, users can begin augmented robot creation with their own hands, as the system is trained to track gestural control. Intuitive motions allow for fluid assembly and modification. Throughout the design process, our work provides the capability of connecting to software that optimizes robots for the given terrain. Once users are satisfied with the final design, they can fabricate and assemble the final robot. This workflow is interactive, user-friendly, and ensures the robot is properly equipped for the terrain in which it is deployed. | en_US |
| dc.description.statementofresponsibility | by Eric Mahathvan Bradford. | en_US |
| dc.format.extent | 68 pages | en_US |
| dc.language.iso | eng | en_US |
| dc.publisher | Massachusetts Institute of Technology | en_US |
| dc.rights | MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. | en_US |
| dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
| dc.subject | Electrical Engineering and Computer Science. | en_US |
| dc.title | Interactively designing robots in mixed reality using gestural control | en_US |
| dc.type | Thesis | en_US |
| dc.description.degree | M. Eng. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
| dc.identifier.oclc | 1251774077 | en_US |
| dc.description.collection | M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
| dspace.imported | 2021-05-24T19:40:15Z | en_US |
| mit.thesis.degree | Master | en_US |
| mit.thesis.department | EECS | en_US |