
dc.contributor.advisor    Jonathan P. How.    en_US
dc.contributor.author    Omidshafiei, Shayegan    en_US
dc.contributor.other    Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.    en_US
dc.date.accessioned    2016-03-03T20:29:09Z
dc.date.available    2016-03-03T20:29:09Z
dc.date.copyright    2015    en_US
dc.date.issued    2015    en_US
dc.identifier.uri    http://hdl.handle.net/1721.1/101447
dc.description    Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015.    en_US
dc.description    This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.    en_US
dc.description    Cataloged from student-submitted PDF version of thesis.    en_US
dc.description    Includes bibliographical references (pages 129-139).    en_US
dc.description.abstract    Planning, control, perception, and learning for multi-robot systems present significant challenges. Transition dynamics of the robots may be stochastic, making it difficult to select the best action each robot should take at a given time. The observation model, a function of the robots' sensors, may be noisy or partial, meaning that deterministic knowledge of the team's state is often impossible to attain. Robots designed for real-world applications require careful consideration of such sources of uncertainty. This thesis contributes a framework for multi-robot planning in continuous spaces with partial observability. Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) are general models for multi-robot coordination problems. However, representing and solving Dec-POMDPs is often intractable for large problems. This thesis extends the Dec-POMDP framework to the Decentralized Partially Observable Semi-Markov Decision Process (Dec-POSMDP), taking advantage of high-level representations that are natural for multi-robot problems. Dec-POSMDPs allow asynchronous decision-making, which is crucial in multi-robot domains. This thesis also presents algorithms for solving Dec-POSMDPs, which are more scalable than previous methods due to the use of closed-loop macro-actions in planning. The proposed framework's performance is evaluated in a constrained multi-robot package delivery domain, showing its ability to provide high-quality solutions for large problems. Due to the probabilistic nature of state transitions and observations, robots operate in belief space, the space of probability distributions over all of their possible states. This thesis also contributes a hardware platform called Measurable Augmented Reality for Prototyping Cyber-Physical Systems (MAR-CPS). MAR-CPS allows real-time visualization of the belief space in laboratory settings.    en_US
dc.description.statementofresponsibility    by Shayegan Omidshafiei.    en_US
dc.format.extent    139 pages    en_US
dc.language.iso    eng    en_US
dc.publisher    Massachusetts Institute of Technology    en_US
dc.rights    M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.    en_US
dc.rights.uri    http://dspace.mit.edu/handle/1721.1/7582    en_US
dc.subject    Aeronautics and Astronautics.    en_US
dc.title    Decentralized control of multi-robot systems using partially observable Markov Decision Processes and belief space macro-actions    en_US
dc.type    Thesis    en_US
dc.description.degree    S.M.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.identifier.oclc    939663644    en_US
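
The abstract above describes robots operating in belief space, the space of probability distributions over all of their possible states. As a rough illustration only, and not material from the thesis, the following Python sketch shows a single discrete Bayes-filter belief update for a hypothetical three-state robot; the transition and observation models are made-up placeholders.

import numpy as np

def belief_update(belief, T, O, action, observation):
    """One Bayes-filter step over a discrete state space.

    belief      : (n,) prior distribution over states
    T           : (n_actions, n, n) transition model, T[a, s, s2] = P(s2 | s, a)
    O           : (n_obs, n) observation model, O[z, s2] = P(z | s2)
    action      : index of the action taken
    observation : index of the observation received
    """
    predicted = T[action].T @ belief           # predict: sum_s P(s2 | s, a) * b(s)
    unnormalized = O[observation] * predicted  # correct: weight by P(z | s2)
    return unnormalized / unnormalized.sum()   # renormalize to a distribution

# Toy models: 3 states, 1 action, 2 observations (purely illustrative).
T = np.array([[[0.8, 0.2, 0.0],
               [0.0, 0.8, 0.2],
               [0.0, 0.0, 1.0]]])
O = np.array([[0.9, 0.5, 0.1],
              [0.1, 0.5, 0.9]])

b = np.array([1.0, 0.0, 0.0])  # initially certain the robot is in state 0
b = belief_update(b, T, O, action=0, observation=1)
print(b)                       # updated belief over the 3 states

After one noisy observation the belief spreads probability mass across states, which is the kind of distribution the thesis's MAR-CPS platform visualizes in real time.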

