Show simple item record

dc.contributor.advisor: James K. Kuchar. [en_US]
dc.contributor.author: Winder, Lee F. (Lee Francis), 1973- [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Aeronautics and Astronautics. [en_US]
dc.date.accessioned: 2005-09-27T18:44:26Z
dc.date.available: 2005-09-27T18:44:26Z
dc.date.copyright: 2004 [en_US]
dc.date.issued: 2004 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/28860
dc.description: Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2004. [en_US]
dc.description: Includes bibliographical references (p. 123-125). [en_US]
dc.description.abstract: This thesis describes an approach to designing hazard avoidance alerting systems based on a Markov decision process (MDP) model of the alerting process, and shows its benefits over standard design methods. One benefit of the MDP method is that it accounts for future decision opportunities when choosing whether or not to alert, or in determining resolution guidance. Another benefit is that it provides a means of modeling uncertain state information, such as unmeasurable mode variables, so that decisions are more informed. A mode variable is an index for distinct types of behavior that a system exhibits at different times. For example, in many situations normal system behavior tends to be safe, but rare deviations from the normal increase the likelihood of a harmful incident. Accurate modeling of mode information is needed to minimize alerting system errors such as unnecessary or late alerts. The benefits of the method are illustrated with two alerting scenarios where a pair of aircraft must avoid collisions when passing one another. The first scenario has a fully observable state and the second includes an uncertain mode describing whether an intruder aircraft levels off safely above the evader or is in a hazardous blunder mode. In MDP theory, outcome preferences are described in terms of utilities of different state trajectories. In keeping with this, alerting system requirements are stated in the form of a reward function. This is then used with probabilistic dynamic and sensor models to compute an alerting logic (policy) that maximizes expected utility. Performance comparisons are made between the MDP-based logics and alternate logics generated with current methods. It is found that in terms of traditional performance measures (incident rate and unnecessary alert rate), the MDP-based logic can meet or exceed the performance of alternate logics. [en_US]
dc.description.statementofresponsibility: by Lee F. Winder. [en_US]
dc.format.extent: 141 p. [en_US]
dc.format.extent: 7132602 bytes
dc.format.extent: 7149737 bytes
dc.format.mimetype: application/pdf
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Aeronautics and Astronautics. [en_US]
dc.title: Hazard avoidance alerting with Markov decision processes [en_US]
dc.type: Thesis [en_US]
dc.description.degree: Ph.D. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.identifier.oclc: 60405943 [en_US]
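The abstract above describes stating alerting requirements as a reward function and then computing a policy that maximizes expected utility. As a purely illustrative sketch of that idea (every state, transition probability, and reward value below is invented for demonstration and is not taken from the thesis), value iteration on a toy alert/no-alert MDP might look like:

```python
# Illustrative sketch only: a toy finite MDP for an alert/no-alert decision,
# solved by value iteration. All numbers below are invented, NOT from the thesis.

GAMMA = 0.95  # discount factor

# States: 0 = safe separation, 1 = closing geometry,
#         2 = incident (terminal), 3 = resolved after a maneuver (terminal).
# Actions: "wait" (no alert) or "alert" (command an evasive maneuver).

# T[action][state] -> list of (next_state, probability)
T = {
    "wait": {
        0: [(0, 0.98), (1, 0.02)],
        1: [(0, 0.30), (1, 0.50), (2, 0.20)],
    },
    "alert": {
        0: [(3, 1.0)],
        1: [(3, 0.95), (2, 0.05)],
    },
}

# Immediate rewards: alerting carries a fixed cost, which penalizes
# unnecessary alerts; an incident is penalized through its terminal value.
R = {("wait", 0): 0.0, ("wait", 1): 0.0,
     ("alert", 0): -2.0, ("alert", 1): -2.0}
TERMINAL_VALUE = {2: -100.0, 3: 0.0}


def value_iteration(tol=1e-9):
    """Return state values and the greedy policy maximizing expected utility."""
    V = {0: 0.0, 1: 0.0, **TERMINAL_VALUE}

    def q(s, a):
        # Expected utility of taking action a in state s, then acting optimally.
        return R[(a, s)] + GAMMA * sum(p * V[s2] for s2, p in T[a][s])

    while True:
        delta = 0.0
        for s in (0, 1):  # sweep the non-terminal states
            best = max(q(s, a) for a in ("wait", "alert"))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break

    policy = {s: max(("wait", "alert"), key=lambda a: q(s, a)) for s in (0, 1)}
    return V, policy


V, policy = value_iteration()
print(policy)  # with these invented numbers: wait in state 0, alert in state 1
```

Because the policy weighs the alert cost against the discounted risk of a future incident, it captures the trade noted in the abstract between unnecessary alerts and late alerts, and it accounts for the future opportunity to alert from state 1. The partially observable case described in the abstract (the uncertain blunder mode) would apply the same machinery to a belief over modes rather than a fully observed state.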

