Show simple item record

dc.contributor.advisor: Rus, Daniela L.
dc.contributor.author: Amini, Alexander Andre
dc.date.accessioned: 2022-08-29T16:12:31Z
dc.date.available: 2022-08-29T16:12:31Z
dc.date.issued: 2022-05
dc.date.submitted: 2022-06-21T19:15:28.633Z
dc.identifier.uri: https://hdl.handle.net/1721.1/144800
dc.description.abstract: Because the physical world is complex, ambiguous, and unpredictable, autonomous agents must be engineered to exhibit a human-level degree of flexibility and generality, far beyond what we are capable of explicitly programming. Such realizations of autonomy are capable of not only reliably solving a particular problem, but also anticipating what could go wrong in order to strategize, adapt, and continuously learn. Achieving such rich and intricate decision making requires rethinking the foundations of intelligence across all stages of the autonomous learning lifecycle. In this thesis, we develop new learning-based approaches toward dynamic, resilient, and robust decision making for autonomous systems. We advance robust decision making in the wild by addressing critical challenges that arise at every stage, from the data used for training, to the models that learn from this data, to the algorithms that reliably adapt to unexpected events during deployment. We start by exploring how we can computationally design rich, synthetic environments capable of simulating a continuum of hard-to-collect, out-of-distribution edge cases, suitable for use during both training and evaluation. Building on this rich data foundation, we then create efficient, expressive learning models, together with the algorithms necessary to optimize their representations and overcome imbalances in under-represented and challenging data. Finally, with our trained models, we turn to the deployment setting, where we should still anticipate that our system will be faced with entirely new scenarios that it has never encountered during training. To this end, we develop adaptive, uncertainty-aware algorithms for estimating model uncertainty and exploiting it to realize generalizable decision making, even in the presence of unexpected events.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: End-to-end Learning for Robust Decision Making
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Doctoral
thesis.degree.name: Doctor of Philosophy
