Learning to collaborate: robots building together
Author(s)
Hajash, Kathleen Sofia
Download: Full printable version (3.090 MB)
Alternative title
Robots building together
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Skylar Tibbits and Patrick Winston.
Abstract
Since robots were first invented, robotic assembly has been an important area of research in both academic institutions and industry settings. The standard industry approach to robotic assembly lines utilizes fixed robotic arms and prioritizes speed and precision over customization. With a recent shift towards mobile multi-robot teams, researchers have developed a variety of approaches ranging from planning under uncertainty to swarm robotics. However, existing approaches to robotic assembly are either too rigid, relying on deterministic planning, or fail to take advantage of the opportunities that multiple robots afford. If we are to push the boundaries of robotic assembly, we need collaborative robots that can work together, without human intervention, to plan and build large structures that they could not complete alone. Teams of robots that work together to plan and build large structures could aid in disaster relief, enable construction in remote locations, and protect the health of construction workers in hazardous environments. In this thesis, I take a first step towards this vision by developing a simple collaborative task in which agents learn to work together to move rectilinear blocks. I define robotic collaboration as an emergent process that evolves as multiple agents, simulated or physical, learn to work together to achieve a common goal that they could not achieve in isolation. Rather than taking an explicit planning approach, I employ reinforcement learning, an area of artificial intelligence in which agents learn an optimal behavior for achieving a specific goal by receiving rewards or penalties for good and bad behavior, respectively. In this thesis, I define a framework for training the agents and a goal for them to accomplish; design, program, and build two iterations of physical robots; develop numerous variations of simulation environments for both single and multiple agents; evaluate reinforcement learning algorithms and select an approach; and establish a method for transferring a trained policy to physical robots.
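As a rough illustration of the reinforcement learning setup described in the abstract (not the thesis's actual environment, algorithm, or code), the sketch below shows two independent tabular Q-learners that earn a shared reward only when they push a simulated block together, so cooperative behavior must emerge from the reward signal. Every name, parameter value, and environment detail in it is an assumption made for illustration.

    # A minimal sketch, assuming a 1-D block-pushing toy task: the block advances
    # only when both agents choose "push" at the same time, so the shared reward
    # can only be earned jointly. Not the thesis's implementation.
    import random

    N_POSITIONS = 5            # block positions 0..4; the last cell is the goal
    GOAL = N_POSITIONS - 1
    ACTIONS = [0, 1]           # 0 = wait, 1 = push
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    # One Q-table per agent, indexed by (block_position, action).
    q_tables = [{(s, a): 0.0 for s in range(N_POSITIONS) for a in ACTIONS}
                for _ in range(2)]

    def choose(q, s):
        """Epsilon-greedy action selection for one agent."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q[(s, a)])

    for episode in range(2000):
        state = 0                                # block starts at position 0
        for _ in range(20):                      # step limit per episode
            actions = [choose(q, state) for q in q_tables]
            # The block advances only when both agents push together.
            next_state = min(state + 1, GOAL) if all(actions) else state
            reward = 1.0 if next_state == GOAL else -0.01   # shared team reward
            for q, a in zip(q_tables, actions):
                best_next = max(q[(next_state, b)] for b in ACTIONS)
                q[(state, a)] += ALPHA * (reward + GAMMA * best_next - q[(state, a)])
            state = next_state
            if state == GOAL:
                break

    # After training, the greedy policy of agent 0 should be "push" in every state.
    print([max(ACTIONS, key=lambda a: q_tables[0][(s, a)]) for s in range(N_POSITIONS)])

Transferring such a policy to physical robots, as the thesis describes, would additionally require mapping the simulated state and action spaces onto real sensing and actuation; the sketch stops at the simulated training loop.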
Description
Thesis: S.M., Massachusetts Institute of Technology, Department of Architecture, 2018; and S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 69-72).
Date issued
2018
Department
Massachusetts Institute of Technology. Department of Architecture; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Architecture; Electrical Engineering and Computer Science