| dc.contributor.advisor | Leslie Pack Kaelbling | en_US |
| dc.contributor.author | Chang, Yu-Han, Ph. D., Massachusetts Institute of Technology | en_US |
| dc.contributor.other | Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. | en_US |
| dc.date.accessioned | 2006-08-25T18:57:54Z | |
| dc.date.available | 2006-08-25T18:57:54Z | |
| dc.date.copyright | 2005 | en_US |
| dc.date.issued | 2005 | en_US |
| dc.identifier.uri | http://hdl.handle.net/1721.1/33932 | |
| dc.description | Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. | en_US |
| dc.description | Includes bibliographical references (leaves 165-171). | en_US |
| dc.description.abstract | Systems involving multiple autonomous entities are becoming increasingly prominent. Sensor networks, teams of robotic vehicles, and software agents are just a few examples. To design these systems, we need methods that allow our agents to learn autonomously and adapt to the changing environments in which they find themselves. This thesis explores ideas from game theory, online prediction, and reinforcement learning, tying them together to address problems in multi-agent learning. We begin with the most basic framework for studying multi-agent learning: repeated matrix games. We quickly see that there is no opponent-independent, globally optimal learning algorithm: some assumptions about the opponent are necessary when designing multi-agent learning algorithms. We first show that we can exploit opponents that satisfy certain assumptions, and in a later chapter, we show how we can avoid being exploited ourselves. From this beginning, we branch out to study more complex sequential decision-making problems in multi-agent systems, or stochastic games. We study environments with large numbers of agents, in which the environmental state may be only partially observable. | en_US |
| dc.description.abstract | (cont.) In fully cooperative situations, where all agents receive a single global reward signal for training, we devise a filtering method that allows each individual agent to learn using a personal training signal recovered from this global reward. For non-cooperative situations, we introduce the concept of hedged learning, a combination of regret-minimizing algorithms with learning techniques, which provides a more flexible and robust way to behave in competitive settings. We show performance bounds that our hedged learning algorithm guarantees, thus preventing our agent from being exploited by its adversary. Finally, we apply some of these methods to problems involving routing and node movement in a mobilized ad-hoc networking domain. | en_US |
| dc.description.statementofresponsibility | by Yu-Han Chang. | en_US |
| dc.format.extent | 171 leaves | en_US |
| dc.format.extent | 9090627 bytes | |
| dc.format.extent | 9097798 bytes | |
| dc.format.mimetype | application/pdf | |
| dc.format.mimetype | application/pdf | |
| dc.language.iso | eng | en_US |
| dc.publisher | Massachusetts Institute of Technology | en_US |
| dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
| dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | |
| dc.subject | Electrical Engineering and Computer Science. | en_US |
| dc.title | Approaches to multi-agent learning | en_US |
| dc.type | Thesis | en_US |
| dc.description.degree | Ph.D. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| dc.identifier.oclc | 67547691 | en_US |
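
The hedged learning approach described in the abstract couples a regret-minimizing expert algorithm with a pool of learning strategies, so that the agent's cumulative performance is never much worse than that of its best candidate strategy. As a purely illustrative sketch of the regret-minimizing component only, the following Python implements the classic Hedge (multiplicative-weights) update over a set of candidate strategies; the strategy pool, opponent, loss function, and learning rate here are invented for the example and are not taken from the thesis itself.

```python
import math
import random


def hedge(strategies, loss_fn, rounds, eta=0.5):
    """Full-information Hedge over a pool of candidate strategies.

    strategies: list of callables mapping round t -> action
    loss_fn:    callable mapping (t, action) -> loss in [0, 1]
    Returns (final mixture weights, learner's cumulative loss).
    """
    n = len(strategies)
    weights = [1.0] * n
    learner_loss = 0.0
    for t in range(rounds):
        actions = [s(t) for s in strategies]  # each expert proposes an action
        total = sum(weights)
        probs = [w / total for w in weights]
        # Act by sampling one expert in proportion to its current weight.
        played = random.choices(actions, weights=probs)[0]
        learner_loss += loss_fn(t, played)
        # Hedge update: exponentially downweight each expert by its loss.
        for i in range(n):
            weights[i] *= math.exp(-eta * loss_fn(t, actions[i]))
    total = sum(weights)
    return [w / total for w in weights], learner_loss


if __name__ == "__main__":
    # Toy repeated matching pennies against an opponent that always plays
    # heads; the learner wins (loss 0) by matching. The "always H" expert
    # should come to dominate the mixture.
    strategies = [lambda t: "H", lambda t: "T", lambda t: ("H", "T")[t % 2]]
    opponent = lambda t: "H"
    loss = lambda t, a: 0.0 if a == opponent(t) else 1.0
    final_weights, total_loss = hedge(strategies, loss, rounds=200)
    print("mixture:", [round(w, 3) for w in final_weights])
    print("cumulative loss:", total_loss)
```

The exponential update is what yields the no-regret guarantee: the mixture's cumulative loss exceeds that of the best single strategy by at most O(sqrt(T log N)) for T rounds and N experts, which is the flavor of performance bound the abstract refers to.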