dc.contributor.advisor | Jonathan P. How. | en_US |
dc.contributor.author | Üre, Nazim Kemal | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics. | en_US |
dc.date.accessioned | 2015-06-10T19:13:30Z | |
dc.date.available | 2015-06-10T19:13:30Z | |
dc.date.copyright | 2015 | en_US |
dc.date.issued | 2015 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/97359 | |
dc.description | Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015. | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 129-139). | en_US |
dc.description.abstract | Multiagent planning problems are ubiquitous in engineering. Applications range from control of robotic missions and manufacturing processes to resource allocation and traffic monitoring problems. A common theme in all of these missions is the existence of stochastic dynamics that stem from uncertainty in the environment and agent dynamics. The combinatorial nature of the problem and the exponential dependency of the planning space on the number of agents render many of the existing algorithms practically infeasible for real-life applications. A standard approach to improving the scalability of planning algorithms is to take advantage of domain knowledge, such as decomposing the problem into a group of sub-problems and exploiting decouplings among the agents, but such domain knowledge is not always available. In addition, many existing multiagent planning algorithms rely on the existence of a model, but in many real-life situations models are approximate, incorrect, or simply unavailable. The convergence rate of the multiagent learning process can be improved by sharing the learned models across the agents. However, many realistic applications involve heterogeneous teams, in which the agents have dissimilar transition dynamics. Developing multiagent learning algorithms for such heterogeneous teams is significantly harder, since the learned models cannot be naively transferred across agents. This thesis develops scalable multiagent planning and learning algorithms for heterogeneous teams by using embedded optimization processes to automate the search for decouplings among agents, thus decreasing the dependency on domain knowledge. 
Motivated by the low computational complexity and theoretical guarantees of the Bayesian Optimization Algorithm (BOA) as a meta-optimization method for tuning machine learning applications, the developed multiagent planning algorithm, Randomized Coordination Discovery (RCD), extends the BOA to automate the search for coordination structures among the agents in Multiagent Markov Decision Processes. The resulting planning algorithm infers how the problem can be decomposed among agents based on trajectories sampled from the model, without needing any prior domain knowledge or heuristics. In addition, the algorithm is guaranteed to converge under mild assumptions and outperforms competing multiagent planning methods across different large-scale multiagent planning problems. The multiagent learning algorithms developed in this thesis use adaptive representations and collaborative filtering methods to develop strategies for learning heterogeneous models. The goal of the multiagent learning algorithm is to accelerate the learning process by discovering the similar parts of the agents' transition models and enabling the sharing of these learned models across the team. The proposed multiagent learning algorithms, Decentralized Incremental Feature Dependency Discovery (Dec-iFDD) and its extension Collaborative Filtering Dec-iFDD (CF-Dec-iFDD), provide improved scalability and rapid learning for heterogeneous teams without having to rely on domain knowledge and extensive parameter tuning. Each agent learns a linear function approximation of the actual model, and the number of features is increased incrementally to automatically adjust the model complexity based on the observed data. These features are compact representations of the key characteristics of the environment dynamics, so it is these features that are shared between agents, rather than the models themselves. 
The agents obtain feedback from other agents on the model error reduction associated with the communicated features. Although this process increases the communication cost of exchanging features, it greatly improves the quality and utility of what is being exchanged, leading to an improved convergence rate. Finally, the developed planning and learning algorithms are implemented on a variety of hardware flight missions, such as persistent multi-UAV health monitoring and forest fire management scenarios. The experimental results demonstrate the applicability of the proposed algorithms to complex multiagent planning and learning problems. | en_US |
dc.description.statementofresponsibility | by Nazim Kemal Ure. | en_US |
dc.format.extent | 139 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Aeronautics and Astronautics. | en_US |
dc.title | Multiagent planning and learning using random decompositions and adaptive representations | en_US |
dc.type | Thesis | en_US |
dc.description.degree | Ph. D. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics | |
dc.identifier.oclc | 910632816 | en_US |