Monitoring with Limited Information
Author(s)
Iancu, Dan Andrei; Trichakis, Nikolaos; Yoon, Do Young
Accepted version (789.9Kb)
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Metadata
Abstract
We consider a system with an evolving state that can be stopped at any time by a decision maker (DM), yielding a state-dependent reward. The DM does not observe the state except for a limited number of monitoring times, which he must choose, in conjunction with a suitable stopping policy, to maximize his reward. Dealing with these types of stopping problems, which arise in a variety of applications from healthcare to finance, often requires excessive amounts of data for calibration purposes and prohibitive computational resources. To overcome these challenges, we propose a robust optimization approach, whereby adaptive uncertainty sets capture the information acquired through monitoring. We consider two versions of the problem—static and dynamic—depending on how the monitoring times are chosen. We show that, under certain conditions, the same worst-case reward is achievable under either static or dynamic monitoring. This allows recovering the optimal dynamic monitoring policy by resolving static versions of the problem. We discuss cases when the static problem becomes tractable and highlight conditions when monitoring at equidistant times is optimal. Lastly, we showcase our framework in the context of a healthcare problem (monitoring heart-transplant patients for cardiac allograft vasculopathy), where we design optimal monitoring policies that substantially improve over the status quo recommendations.
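The abstract notes that, under certain conditions, monitoring at equidistant times is optimal. A toy sketch (not the paper's model, and with hypothetical names like `worst_case_gap`) conveys the intuition: if the worst-case reward guarantee degrades with the longest stretch between consecutive observations, then the best static schedule spreads the monitoring times evenly.

```python
# Toy illustration: with horizon T and k monitoring times, assume the
# worst-case guarantee degrades with the largest gap between consecutive
# observations (e.g., the state can drift by a bounded amount per period).
# Then the optimal static schedule minimizes that largest gap, which an
# equidistant schedule achieves.
from itertools import combinations


def worst_case_gap(times, horizon):
    """Largest stretch with no observation, given interior monitoring times."""
    pts = [0] + sorted(times) + [horizon]
    return max(b - a for a, b in zip(pts, pts[1:]))


def best_static_schedule(horizon, k):
    """Exhaustively pick k interior monitoring times minimizing the worst-case gap."""
    best = min(combinations(range(1, horizon), k),
               key=lambda ts: worst_case_gap(ts, horizon))
    return best, worst_case_gap(best, horizon)


schedule, gap = best_static_schedule(horizon=12, k=3)
print(schedule, gap)  # (3, 6, 9) 3 -- the equidistant schedule
```

Exhaustive search is only viable for tiny horizons; it is used here purely to verify that the equidistant schedule attains the minimal worst-case gap.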
Date issued
2020-10
Department
Sloan School of Management
Journal
Management Science
Publisher
Institute for Operations Research and the Management Sciences (INFORMS)
Citation
Iancu, Dan Andrei, Nikolaos Trichakis, and Do Young Yoon. “Monitoring with Limited Information.” Management Science (October 2020). © 2020 The Author(s)
Version: Author's final manuscript
ISSN
0025-1909