
dc.contributor.advisor    Rinard, Martin C.
dc.contributor.author    Jia, Kai
dc.date.accessioned    2022-01-14T14:43:20Z
dc.date.available    2022-01-14T14:43:20Z
dc.date.issued    2021-06
dc.date.submitted    2021-06-24T19:22:52.677Z
dc.identifier.uri    https://hdl.handle.net/1721.1/138993
dc.description.abstract    Deep neural networks have achieved great success on many tasks and even surpass human performance in certain settings. Despite this success, neural networks are known to be vulnerable to adversarial inputs, where small and human-imperceptible changes in the input cause large and unexpected changes in the output. This problem motivates the development of neural network verification techniques that aspire to verify that a given neural network produces stable predictions for all inputs in a perturbation space around a given input. However, many existing verifiers target floating point networks but, for efficiency reasons, do not exactly model the floating point computation. As a result, they may produce incorrect results due to floating point error. In this context, Binarized Neural Networks (BNNs) are attractive because they work with quantized inputs and binarized internal activation and weight values and thus support verification free of floating point error. The binarized computation of BNNs directly corresponds to logical reasoning, so BNN verification is typically formulated as a Boolean satisfiability (SAT) problem. This formulation involves numerous reified cardinality constraints, which previous work typically converts to conjunctive normal form to be solved by an off-the-shelf SAT solver. Unfortunately, previous BNN verifiers are significantly slower than floating point network verifiers. Moreover, there is a dearth of prior research on training robust BNNs.

This thesis presents techniques for ensuring neural network robustness against input perturbations and for checking safety properties that require a network to produce certain outputs for a set of inputs. We present four contributions: (i) new techniques that improve BNN verification performance by thousands of times compared to the best previous verifiers for either binarized or floating point neural networks; (ii) the first technique for training robust BNNs; previous robust training techniques are designed to work with floating point networks and do not produce robust BNNs; (iii) a new method that exploits floating point errors to produce witnesses for the unsoundness of verifiers that target floating point networks but do not exactly model floating point arithmetic; and (iv) a new technique for efficient and exact verification of neural networks with low dimensional inputs.

Our first contribution comprises two novel techniques that improve BNN verification performance: (i) extending the SAT solver to handle reified cardinality constraints natively and efficiently; and (ii) novel training strategies that produce BNNs that verify more efficiently. Our second contribution is a new technique for training BNNs that achieve verifiable robustness comparable to floating point networks. We present an algorithm that adaptively tunes the gradient computation in PGD-based BNN adversarial training to improve robustness. We demonstrate the effectiveness of the methods in the first two contributions by presenting the first exact verification results for adversarial robustness of nontrivial convolutional BNNs on the widely used MNIST and CIFAR10 datasets; no previous BNN verifier can handle these tasks. Compared to previous (potentially incorrect) exact verification of floating point networks of the same architectures on the same tasks, our system verifies BNNs hundreds to thousands of times faster and delivers comparable verifiable accuracy in most cases.
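To make the SAT formulation above concrete, here is a minimal sketch (not from the thesis) of the route prior BNN verifiers take: the reified cardinality constraint behind a single binarized neuron is converted to CNF and handed to an off-the-shelf solver. It assumes the open-source PySAT library; the neuron size, weights, and helper names are illustrative. The thesis's first contribution replaces exactly this CNF conversion with native solver support for such constraints.

    # Illustrative sketch only, assuming PySAT (pip install python-sat).
    from pysat.card import CardEnc, EncType
    from pysat.formula import IDPool
    from pysat.solvers import Solver

    def reified_neuron(pool, clauses, in_lits, weights, bias):
        """Add CNF clauses for y <-> (sum_i w_i*x_i + bias >= 0), w_i, x_i in {-1,+1}.

        If a of the n inputs agree in sign with their weights, the
        pre-activation is 2a - n + bias, so the neuron fires iff
        a >= ceil((n - bias) / 2): a reified cardinality constraint.
        """
        n = len(in_lits)
        agree = [l if w > 0 else -l for l, w in zip(in_lits, weights)]
        k = -(-(n - bias) // 2)              # ceil((n - bias) / 2)
        y = pool.id(('y', len(clauses)))     # fresh output literal
        if k <= 0:                           # threshold trivially met
            clauses.append([y]); return y
        if k > n:                            # threshold unreachable
            clauses.append([-y]); return y
        # y -> (agreements >= k): guard every clause of the encoding with -y.
        for c in CardEnc.atleast(agree, bound=k, vpool=pool,
                                 encoding=EncType.seqcounter).clauses:
            clauses.append(c + [-y])
        # -y -> (agreements <= k-1): guard the converse encoding with y.
        for c in CardEnc.atmost(agree, bound=k - 1, vpool=pool,
                                encoding=EncType.seqcounter).clauses:
            clauses.append(c + [y])
        return y

    pool, clauses = IDPool(), []
    xs = [pool.id(('x', i)) for i in range(4)]     # four binarized inputs
    y = reified_neuron(pool, clauses, xs, weights=[1, -1, 1, 1], bias=0)
    with Solver(name='glucose3', bootstrap_with=clauses) as s:
        print(s.solve(assumptions=[y]))            # True: some input fires the neuron

Guarding every clause of the at-least/at-most encodings with the output literal yields the two directions of the reification; handling such constraints natively in the solver, as the thesis does, avoids materializing these auxiliary clauses and variables altogether.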
Our third contribution shows that the failure to take floating point error into account leads to incorrect verification that can be systematically exploited. We present a method that efficiently searches for inputs that witness the incorrectness of robustness claims made by a complete verifier about a pretrained neural network. We also show that it is possible to craft neural network architectures and weights that cause an unsound incomplete verifier to produce incorrect verification results.

Our fourth contribution shows that the idea of quantization also facilitates the verification of floating point networks. Specifically, we consider exactly verifying safety properties of floating point neural networks used in a low dimensional airborne collision avoidance control system. Prior work, which analyzes the internal computations of the network, is inefficient and potentially incorrect because it does not soundly model floating point arithmetic. We instead prepend an input quantization layer to the original network. Our experiments show that our modification delivers similar runtime accuracy while allowing correct and significantly easier and faster verification by input state space enumeration.
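As a rough sketch of the input quantization idea (again illustrative, with a toy two-input network, an assumed uniform grid, and hypothetical helper names rather than the thesis's system): once inputs are snapped to a finite grid, a safety property over an input box can be checked exactly by enumerating concrete executions of the unmodified floating point network, so no sound modeling of its internal float arithmetic is needed.

    # Illustrative sketch only; grid size, network, and property are made up.
    import itertools
    import numpy as np

    def quantize(x, lo, hi, steps):
        """Snap each coordinate of x onto a uniform grid of `steps` points."""
        g = np.round((x - lo) / (hi - lo) * (steps - 1))
        return lo + g * (hi - lo) / (steps - 1)

    def verify_box(net, lo, hi, steps, unsafe):
        """Exact check: does any quantized input in [lo, hi] trigger `unsafe`?

        The quantized input box contains finitely many points, and each check
        runs the real floating point network on a concrete input, so plain
        enumeration is both sound and complete.
        """
        axes = [np.linspace(l, h, steps) for l, h in zip(lo, hi)]
        for point in itertools.product(*axes):
            if unsafe(net(np.array(point))):
                return point                   # concrete counterexample
        return None                            # property holds exactly

    W = np.array([[1.0, -2.0], [0.5, 1.0]])    # toy 2-input "network"
    net = lambda x: W @ x
    lo, hi, steps = np.array([0.0, 0.0]), np.array([1.0, 1.0]), 11
    net_q = lambda x: net(quantize(np.asarray(x, float), lo, hi, steps))  # deployed net
    cex = verify_box(net, lo, hi, steps, unsafe=lambda out: out[0] > 1.5)
    print('counterexample:', cex)              # None: property holds on the grid

Because the deployed network applies the same quantization layer, the enumerated executions cover exactly the inputs the network can ever see, which is what makes the verification both correct and fast for low dimensional input spaces.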
dc.publisher    Massachusetts Institute of Technology
dc.rights    In Copyright - Educational Use Permitted
dc.rights    Copyright MIT
dc.rights.uri    http://rightsstatements.org/page/InC-EDU/1.0/
dc.title    Towards Reliable AI via Efficient Verification of Binarized Neural Networks
dc.type    Thesis
dc.description.degree    S.M.
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.orcid    https://orcid.org/0000-0001-8215-9899
mit.thesis.degree    Master
thesis.degree.name    Master of Science in Electrical Engineering and Computer Science

