
dc.contributor.advisor: Fan, Chuchu
dc.contributor.author: Dawson, Charles Burke
dc.date.accessioned: 2024-06-27T19:50:33Z
dc.date.available: 2024-06-27T19:50:33Z
dc.date.issued: 2024-05
dc.date.submitted: 2024-05-28T19:37:17.418Z
dc.identifier.uri: https://hdl.handle.net/1721.1/155397
dc.description.abstract: Before autonomous systems can be deployed in safety-critical environments, we must be able to verify that they will perform safely, ideally without the risk and expense of real-world testing. A wide variety of formal methods and simulation-driven techniques have been developed to solve this verification problem, but they typically rely on difficult-to-construct mathematical models or else use sample-inefficient black-box optimization methods. Moreover, existing verification methods provide little guidance on how to optimize the system's design to be more robust to the failures they uncover. In this thesis, I develop a suite of methods that accelerate verification and design automation of robots and other autonomous systems by using program analysis tools such as automatic differentiation and probabilistic programming to automatically construct mathematical models of the system under test. In particular, I make the following contributions. First, I use automatic differentiation to develop a flexible, general-purpose framework for end-to-end design automation and statistical safety verification for autonomous systems. Second, I improve the sample efficiency of end-to-end optimization using adversarial optimization to falsify differentiable formal specifications of desired robot behavior. Third, I provide a novel reformulation of the design and verification problem using Bayesian inference to predict a more diverse set of challenging adversarial failure modes. Finally, I present a data-driven method for root-cause failure diagnosis, allowing system designers to infer what factors may have contributed to failure based on noisy data from real-world deployments. I apply the methods developed in this thesis to a range of challenging problems in robotics and cyber-physical systems. I demonstrate the use of this design and verification framework to optimize spacecraft trajectory and control systems, multi-agent formation and communication strategies, vision-in-the-loop controllers for autonomous vehicles, and robust generation dispatch for electrical power systems, and I apply this failure diagnosis tool to real-world data from scheduling failures in a nationwide air transportation network.
dc.publisher: Massachusetts Institute of Technology
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: Breaking things so you don’t have to: risk assessment and failure prediction for cyber-physical AI
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.identifier.orcid: https://orcid.org/0000-0002-8371-5313
mit.thesis.degree: Doctoral
thesis.degree.name: Doctor of Philosophy

