DSpace@MIT

Hazard Avoidance Alerting With Markov Decision Processes

Author(s)
Winder, Lee F.; Kuchar, James K.

Download
winder_icat_report.pdf (1.238 MB)
Abstract
This thesis describes an approach to designing hazard avoidance alerting systems based on a Markov decision process (MDP) model of the alerting process, and shows its benefits over standard design methods. One benefit of the MDP method is that it accounts for future decision opportunities when choosing whether or not to alert, or in determining resolution guidance. Another benefit is that it provides a means of modeling uncertain state information, such as knowledge about unmeasurable mode variables, so that decisions are more informed. A mode variable is an index for distinct types of behavior that a system exhibits at different times. For example, in many situations normal system behavior is safe, but rare deviations from the normal increase the likelihood of a harmful incident. Accurate modeling of mode information is needed to minimize alerting system errors such as unnecessary or late alerts.

The benefits of the method are illustrated with two alerting scenarios in which a pair of aircraft must avoid collisions when passing one another. The first scenario has a fully observable state; the second includes an uncertain mode describing whether an intruder aircraft levels off safely above the evader or is in a hazardous blunder mode.

In MDP theory, outcome preferences are described in terms of utilities of different state trajectories. In keeping with this, alerting system requirements are stated in the form of a reward function. This is then used with probabilistic dynamic and sensor models to compute an alerting logic (policy) that maximizes expected utility. Performance comparisons are made between the MDP-based logics and alternate logics generated with current methods. In terms of traditional performance measures (incident rate and unnecessary alert rate), the performance of the MDP-based logic is found to meet or exceed that of the alternate logics.
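The policy computation the abstract describes — encoding alerting requirements as a reward function and maximizing expected utility over probabilistic dynamics — can be sketched with standard value iteration on a toy alerting MDP. The states, transition probabilities, and reward values below are illustrative assumptions for exposition, not the thesis's actual models:

```python
# Toy alerting MDP (illustrative only; not the thesis's model).
# States: 0 = nominal flight, 1 = conflict developing,
#         2 = collision (absorbing), 3 = resolved (absorbing).
# Actions: 0 = no alert, 1 = alert.
# P[a][s] = list of (next_state, probability) pairs.
P = {
    0: {0: [(0, 0.95), (1, 0.05)],   # no alert: small chance a conflict develops
        1: [(1, 0.70), (2, 0.30)],   # no alert during conflict: collision risk
        2: [(2, 1.0)],
        3: [(3, 1.0)]},
    1: {0: [(3, 1.0)],               # alerting from nominal: unnecessary alert
        1: [(3, 0.95), (2, 0.05)],   # alerting during conflict: usually resolves
        2: [(2, 1.0)],
        3: [(3, 1.0)]},
}

def reward(s, a, s2):
    """Reward function stating the alerting requirements: a collision is far
    worse than an unnecessary alert (numbers are assumed for illustration)."""
    if s2 == 2 and s != 2:
        return -1000.0               # transitioning into a collision
    if a == 1 and s == 0:
        return -100.0                # unnecessary alert from nominal flight
    return 0.0

def q(s, a, V, gamma):
    """Expected utility of taking action a in state s, then following V."""
    return sum(p * (reward(s, a, s2) + gamma * V[s2]) for s2, p in P[a][s])

def value_iteration(gamma=0.95, tol=1e-9):
    """Compute the expected-utility-maximizing alerting logic (policy)."""
    V = [0.0] * 4
    while True:
        V_new = [max(q(s, a, V, gamma) for a in (0, 1)) for s in range(4)]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            break
        V = V_new
    policy = [max((0, 1), key=lambda a: q(s, a, V, gamma)) for s in range(4)]
    return V, policy

V, policy = value_iteration()
# With these assumed numbers the policy withholds the alert in nominal
# flight (accounting for the future opportunity to alert) and alerts once
# a conflict develops.
```

The second scenario in the thesis, with an unobservable blunder mode, would instead require maintaining a belief over modes (a partially observable MDP); this fully observable sketch corresponds only to the first scenario.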
Date issued
2004-08
URI
http://hdl.handle.net/1721.1/35763
Series/Report no.
ICAT-2004-4
Keywords
hazard avoidance, alerting systems, Markov decision process, air transportation

Collections
  • ICAT - Reports and Papers

Content created by the MIT Libraries, CC BY-NC unless otherwise noted. Notify us about copyright concerns.