Show simple item record

dc.contributor.advisor: Emilio Frazzoli, Jonathan How and Philip Tokumaru
dc.contributor.author: Root, Philip J.
dc.contributor.other: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.
dc.date.accessioned: 2014-10-08T15:25:29Z
dc.date.available: 2014-10-08T15:25:29Z
dc.date.copyright: 2014
dc.date.issued: 2014
dc.identifier.uri: http://hdl.handle.net/1721.1/90728
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2014.
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 205-217).
dc.description.abstract: The majority of persistent patrolling strategies seek to minimize the time between visits, or "idleness," of any target or location within an environment in an attempt to locate a hidden adversary as quickly as possible. Such strategies generally fail, however, to consider the game-theoretic impact of an adversary seeking to avoid the patroller's detection. The field of patrolling security games that addresses this two-player game is maturing, with several authors posing the patrolling scenario as a leader-follower Stackelberg game in which the adversary chooses a location and time to attack as a best response to the patroller's policy. The state of the art grants the adversary complete global information regarding the patroller's location with which to choose the optimal time and location to attack, and this global information creates a considerable advantage for the adversary. We propose a significant improvement to this state of the art by restricting the adversary's access to local information only. We model the adversary as an agent that collects a sequence of local observations and must use this information to determine the optimal time to attack. This work proposes to find the optimal patrolling policy in different environments given this adversary model. We extensively study this patrolling game set on a perimeter, with extensions to other environments. Teams of patrolling agents following this optimal policy achieve a higher capture probability, and we can determine the marginal improvement for each additional patroller. We pose several novel patrolling techniques inspired by a combination of discrete and continuous random walks, Markov processes, and random walks on Cayley graphs to ultimately model the game equilibrium when the team of patrollers executes so-called "presence patrols." Police and military forces commonly execute this type of patrolling to project their presence across an environment in an effort to deter crime or aggression, and we provide a rigorous analysis of the trade-off between increased patrolling speed and decreased probability of detection.
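The perimeter patrolling game the abstract describes can be illustrated with a minimal Monte Carlo sketch. This is not the thesis's actual model or policy; it is a hypothetical toy in which a single patroller performs an unbiased random walk on a cycle of sites, and an adversary at a fixed site uses only the local observation "the patroller just left my site" as its attack trigger. The site count, attack duration, and trial count are illustrative parameters chosen here, not values from the thesis.

```python
import random

def capture_probability(n_sites=20, attack_duration=4, trials=2000, seed=0):
    """Estimate how often a random-walk patroller on a perimeter of
    n_sites locations interrupts an attack. The adversary sits at site 0,
    observes only its own site, and begins an attack the moment the
    patroller steps away. The attack takes attack_duration steps; it is
    'captured' if the patroller returns to site 0 before it completes."""
    rng = random.Random(seed)
    captures = 0
    for _ in range(trials):
        pos = 0                       # patroller starts at the adversary's site
        while pos == 0:               # adversary waits for a local cue to attack
            pos = (pos + rng.choice((-1, 1))) % n_sites
        for _ in range(attack_duration):   # attack underway
            pos = (pos + rng.choice((-1, 1))) % n_sites
            if pos == 0:              # patroller revisits in time: capture
                captures += 1
                break
    return captures / trials
```

A deterministic (e.g. idleness-minimizing) patrol would let this adversary time its attack perfectly, which is why randomized policies of the kind studied in the thesis matter: the adversary's local observations then reveal only probabilistic information about the patroller's return time.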
dc.description.statementofresponsibility: by Philip J. Root.
dc.format.extent: 217 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Aeronautics and Astronautics.
dc.title: Persistent patrolling in the presence of adversarial observers
dc.type: Thesis
dc.description.degree: Ph. D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.identifier.oclc: 891142940

