Approaches to multi-agent learning

Author(s)
Chang, Yu-Han, Ph. D., Massachusetts Institute of Technology
Other Contributors
Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.
Advisor
Leslie Pack Kaelbling.
Terms of use
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Systems involving multiple autonomous entities are becoming more and more prominent. Sensor networks, teams of robotic vehicles, and software agents are just a few examples. In order to design these systems, we need methods that allow our agents to autonomously learn and adapt to the changing environments they find themselves in. This thesis explores ideas from game theory, online prediction, and reinforcement learning, tying them together to work on problems in multi-agent learning. We begin with the most basic framework for studying multi-agent learning: repeated matrix games. We quickly realize that there is no such thing as an opponent-independent, globally optimal learning algorithm. Some form of assumption about the opponent is therefore necessary when designing multi-agent learning algorithms. We first show that we can exploit opponents that satisfy certain assumptions, and in a later chapter, we show how we can avoid being exploited ourselves. From this beginning, we branch out to study more complex sequential decision-making problems in multi-agent systems, or stochastic games. We study environments in which there are large numbers of agents, and where environmental state may only be partially observable.
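
To make the idea of exploiting an opponent assumption concrete, the following is a minimal sketch, not taken from the thesis, of a learner in a repeated 2x2 matrix game that assumes its opponent plays a fixed mixed strategy: it estimates that strategy from the opponent's empirical action frequencies and best-responds to the estimate. The payoff matrix, variable names, and the stationarity assumption are all illustrative.

import numpy as np

# Illustrative payoff matrix for the row player in a 2x2 matrix game
# (rows: our actions, columns: opponent actions). The numbers are made up.
PAYOFF = np.array([[4.0, 0.0],
                   [3.0, 3.0]])

def best_response_to_stationary_opponent(opponent_actions, payoff=PAYOFF):
    # Estimate the opponent's mixed strategy from the empirical frequency of
    # its past actions, then pick the action maximizing our expected payoff.
    # This is only sensible under the assumption that the opponent is
    # (near-)stationary; an adaptive opponent could exploit it in turn.
    counts = np.bincount(opponent_actions, minlength=payoff.shape[1])
    opponent_mix = counts / counts.sum()
    expected_payoffs = payoff @ opponent_mix
    return int(np.argmax(expected_payoffs))

# Example: an opponent that has played column 0 nine times and column 1 once
# is estimated as the mixture (0.9, 0.1), and we best-respond to that.
history = np.array([0]*9 + [1]*1)
print(best_response_to_stationary_opponent(history))   # prints 0

Against a drifting or adaptive opponent, such an estimate-and-best-respond scheme can itself be exploited, which is what motivates the hedged approach described in the continuation of the abstract below.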
 
In fully cooperative situations, where all the agents receive a single global reward signal for training, we devise a filtering method that allows each individual agent to learn using a personal training signal recovered from this global reward. For non-cooperative situations, we introduce the concept of hedged learning, a combination of regret-minimizing algorithms with learning techniques, which allows a more flexible and robust approach for behaving in competitive situations. We show various performance bounds that can be guaranteed with our hedged learning algorithm, thus preventing our agent from being exploited by its adversary. Finally, we apply some of these methods to problems involving routing and node movement in a mobilized ad-hoc networking domain.
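
The abstract does not spell out the hedged learning algorithm itself. As a rough illustration of its regret-minimizing ingredient, here is a textbook exponential-weights (Hedge) mixture over a pool of candidate strategies; maintaining such a mixture is what yields guarantees of the form "not much worse than the best candidate in hindsight". The class and parameter names, the learning rate, and the toy losses are illustrative, not the thesis's actual construction.

import numpy as np

class HedgeOverStrategies:
    # Textbook exponential-weights (Hedge) mixture over a fixed pool of
    # candidate strategies ("experts"). Its cumulative loss is provably not
    # much worse than that of the best single candidate in hindsight, which
    # is the style of bound the abstract refers to. A sketch only, not the
    # thesis's exact algorithm.

    def __init__(self, n_experts, eta=0.1):
        self.weights = np.ones(n_experts)
        self.eta = eta                      # learning rate; value is illustrative

    def distribution(self):
        return self.weights / self.weights.sum()

    def select(self, rng=np.random):
        # Sample which candidate strategy's recommendation to follow this round.
        return int(rng.choice(len(self.weights), p=self.distribution()))

    def update(self, losses):
        # losses[i] in [0, 1]: the loss each candidate would have incurred
        # this round. Better-performing candidates lose weight more slowly.
        self.weights = self.weights * np.exp(-self.eta * np.asarray(losses))

# Usage: with three candidate strategies and repeated feedback, the mixture
# concentrates on whichever candidate is performing best.
hedge = HedgeOverStrategies(n_experts=3)
for _ in range(100):
    hedge.update([0.9, 0.2, 0.5])           # stand-in losses, for illustration only
print(hedge.distribution())                  # most mass on candidate 1

Combining such a mixture with richer learning strategies is, as the abstract describes it, what lets the agent behave flexibly in competitive situations while the regret bound limits how badly an adversary can exploit it.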
 
Description
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
 
Includes bibliographical references (leaves 165-171).
 
Date issued
2005
URI
http://hdl.handle.net/1721.1/33932
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Doctoral Theses
