dc.description.abstract | Because the physical world is complex, ambiguous, and unpredictable, autonomous agents must be engineered to exhibit a human-level degree of flexibility and generality, far beyond what we can explicitly program. Such autonomous systems must not only solve a particular problem reliably, but also anticipate what could go wrong in order to strategize, adapt, and continuously learn. Achieving such rich and intricate decision making requires rethinking the foundations of intelligence across all stages of the autonomous learning lifecycle.
In this thesis, we develop new learning-based approaches toward dynamic, resilient, and robust decision making for autonomous systems. We advance robust decision making in the wild by addressing critical challenges that arise at every stage, from the data used for training, to the models that learn from this data, to the algorithms needed to adapt reliably to unexpected events during deployment. We start by exploring how to computationally design rich, synthetic environments capable of simulating a continuum of hard-to-collect, out-of-distribution edge cases, suitable for use during both training and evaluation. Building on this rich data foundation, we then create efficient, expressive learning models together with the algorithms necessary to optimize their representations and overcome imbalances in under-represented and challenging data. Finally, with our trained models in hand, we turn to the deployment setting, where we must still anticipate that our systems will face entirely new scenarios never encountered during training. To this end, we develop adaptive, uncertainty-aware algorithms that estimate model uncertainty and exploit it to realize generalizable decision making, even in the presence of unexpected events.