Convex optimization and machine learning for scalable verification and control
Author(s)
Shen, Shen, Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Russ Tedrake.
Abstract
Having scalable verification and control tools is crucial for the safe operation of highly dynamic systems such as complex robots. Yet most current tools rely on either convex optimization, which enjoys formal guarantees but scales poorly, or black-box learning, which has the opposite characteristics. In this thesis, we address these contrasting challenges, first individually and then via a rapprochement.

First, we present two scale-improving methods for Lyapunov-based system verification via sum-of-squares (SOS) programming. The first method verifies large systems by solving compositional, independent small programs, exploiting system interconnection structures that are natural and weaker than commonly assumed. The second, more general method introduces novel quotient-ring reformulations of SOS programs. These reformulated programs are multiplier-free, and thus smaller yet stronger; further, they can be solved, provably correctly, via numerically superior finite sampling. The achieved scale (a 32-state robot) is the largest we are aware of; in addition, tighter results are computed two to three orders of magnitude faster.

Next, we introduce one of the first verification frameworks for partially observable systems modeled or controlled by LSTM (long short-term memory) recurrent neural networks. Two complementary methods are proposed. One introduces novel integral quadratic constraints to bound the general sigmoid activations in these networks; the other uses an algebraic sigmoid to arrive, without sacrificing network performance, at far simpler verification programs with fewer, and exact, constraints.

Finally, drawing from the previous two parts, we propose SafetyNet, which, via a novel search-space and cost design, jointly learns readily verifiable feedback controllers and rational Lyapunov candidates.
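To make the Lyapunov-based SOS verification concrete, here is a minimal sketch of its simplest special case: for linear dynamics xdot = A x and a quadratic candidate V(x) = x^T Q x, the SOS conditions reduce to matrix (semi)definiteness, and the certificate can even be obtained in closed form from the Lyapunov equation. The system matrix A below is an illustrative choice, not an example from the thesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable linear system xdot = A x (a hypothetical 2-state example).
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])

# For linear dynamics, the degree-2 SOS conditions
#   V(x) = x^T Q x is SOS   and   -Vdot(x) = -x^T (A^T Q + Q A) x is SOS
# reduce to Q being PSD and A^T Q + Q A being NSD. Fixing the right-hand
# side to -I, we solve the Lyapunov equation A^T Q + Q A = -I directly.
Q = solve_continuous_lyapunov(A.T, -np.eye(2))

# Q positive definite certifies global asymptotic stability.
print(np.linalg.eigvalsh(Q))  # all eigenvalues strictly positive
```

For general polynomial dynamics, no closed form exists; the SOS conditions instead become semidefinite programs over Gram matrices, which is where the thesis's compositional and quotient-ring reformulations improve scalability.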
While leveraging stochastic gradient descent and over-parameterization, the theory-guided design ensures that the learned Lyapunov candidates are positive definite with "desirable" derivative landscapes, enabling direct, high-quality downstream verification. Altogether, SafetyNet produces sample-efficient and certified control policies, overcoming two major drawbacks of reinforcement learning, and can verify systems that are provably beyond the reach of purely convex-optimization-based verification.
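A short sketch of why an algebraic sigmoid yields exact constraints, assuming the common form σ(x) = x/√(1 + x²) (the thesis's exact choice may differ): unlike tanh or the logistic function, which are transcendental and can only be bounded (e.g., by sector or integral quadratic constraints), this activation's input-output graph satisfies an exact polynomial equation that a verification program can impose directly.

```python
import numpy as np

def algebraic_sigmoid(x):
    # Assumed algebraic-sigmoid form: x / sqrt(1 + x^2), with range (-1, 1).
    return x / np.sqrt(1.0 + x * x)

# Any input-output pair (x, y) of this activation satisfies the exact
# polynomial equation y^2 * (1 + x^2) = x^2 (with sign(y) = sign(x)),
# so it can enter a verification program as an equality constraint
# rather than a conservative bound.
x = np.linspace(-5.0, 5.0, 101)
y = algebraic_sigmoid(x)
residual = y**2 * (1.0 + x**2) - x**2
print(np.max(np.abs(residual)))  # numerically zero
```

Replacing a transcendental activation with an algebraic one keeps the whole closed-loop description polynomial (after clearing denominators), which is what makes the resulting verification programs far simpler.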
Description
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September 2020. Cataloged from the student-submitted PDF of thesis. Includes bibliographical references (pages 127-135).
Date issued
2020
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.