Reconfiguration control in adaptive networks
Author(s)
Sigurd, Karin.
Other Contributors
Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.
Advisor
Sanjoy K. Mitter and Jonathan P. How.
Abstract
Distributed control systems are emerging as more robust and flexible alternatives to traditional control systems in several mechatronic fields such as satellite control and robotics. Instead of relying on one large unit with a centralized control architecture, one uses a parallel structure composed of many simple controllers collectively capable of performing the same task as the large unit. Reconfiguration control involves cooperation, coordination and mutual adaptation, and is relevant in a number of engineering problems such as formation control, multiagent learning and role allocation. In addition to being a key issue for using the distributed control paradigm to its full potential, reconfiguration control also offers a well-delimited framework for addressing a number of interesting theoretical questions in distributed control, such as those related to the overlapping notions of cooperation and coordination. We propose a unified game-theoretic approach to the problem of reconfiguration control which interprets node positions as strategies, identifies each configuration with the unique equilibrium of a game, and sees reconfigurations as switches of games.
Our approach is implemented in two different settings, both related to trajectory planning, and illustrated with simulation results. In the first setting, we propose replicator learning, a multiagent learning algorithm that generalizes the replicator dynamics, and show convergence in any finite dimension l of the average strategy to any desired strategy as a function of the chosen game matrix. We show how this result can be linked to collective motion in a subspace of R^(l-1), resulting in the successive visiting of a set of waypoints. In the second setting, we propose a novel total-field collision avoidance algorithm of magnetic nature which permits a set of vehicles to reconfigure successively without knowing each other's positions; strategic sensor positioning ensures that the vehicles do not sense their own fields.
Contributions of our research are: a multiagent learning algorithm; a unified game-theoretic framework for addressing reconfiguration problems; the identification of reconfiguration control as a problem common to several different fields but previously addressed with field-specific methods; a proposed definition of robustness in this context; and, for the two trajectory-planning settings in which our algorithm was implemented, two algorithms for distributed coordination and collision avoidance, respectively.
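The replicator dynamics that the abstract's learning algorithm generalizes can be illustrated with a minimal discrete-time sketch. The payoff matrix, the initial strategy, and the function name below are illustrative assumptions for a standard symmetric game, not details taken from the thesis; each strategy's population share is rescaled by its fitness relative to the average.

```python
# Minimal sketch of discrete-time replicator dynamics (illustrative only;
# the payoff matrix A and starting point are assumptions, not the thesis's).

def replicator_step(x, A):
    """One update: x_i <- x_i * f_i / (average fitness), with f = A x."""
    n = len(x)
    f = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(xi * fi for xi, fi in zip(x, f))
    return [xi * fi / avg for xi, fi in zip(x, f)]

# Assumed coordination game: each pure strategy is a best reply to itself,
# so the dynamics select the equilibrium favored by the initial condition.
A = [[2.0, 0.0],
     [0.0, 1.0]]

x = [0.6, 0.4]  # initial mixed strategy of the population
for _ in range(200):
    x = replicator_step(x, A)

print(x)  # converges toward the pure strategy [1, 0]
```

In the game-theoretic framing of the abstract, a desired configuration corresponds to an equilibrium of the chosen game matrix, and switching the matrix switches which point the dynamics converge to.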
Description
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 159-162). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Date issued
2003
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.