Notice

This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/103950.2

Simple item record

dc.contributor.author: Leino, K. Rustan M.
dc.contributor.author: Pit-Claudel, Clement F.
dc.date.accessioned: 2016-08-17T20:11:11Z
dc.date.available: 2016-08-17T20:11:11Z
dc.date.issued: 2016-07
dc.identifier.isbn: 978-3-319-41527-7
dc.identifier.isbn: 978-3-319-41528-4
dc.identifier.issn: 0302-9743
dc.identifier.issn: 1611-3349
dc.identifier.uri: http://hdl.handle.net/1721.1/103950
dc.description.abstract: SMT-based program verifiers often suffer from the so-called butterfly effect, in which minor modifications to the program source cause significant instabilities in verification times, which in turn may lead to spurious verification failures and a degraded user experience. This paper identifies matching loops (ill-behaved quantifiers causing an SMT solver to repeatedly instantiate a small set of quantified formulas) as a significant contributor to these instabilities, and describes some techniques to detect and prevent them. At their core, the contributed techniques move the trigger selection logic away from the SMT solver and into the high-level verifier: this move allows authors of verifiers to annotate, rewrite, and analyze user-written quantifiers to improve the solver’s performance, using information that is easily available at the source level but would be hard to extract from the heavily encoded terms that the solver works with. The paper demonstrates three core techniques (quantifier splitting, trigger sharing, and matching loop detection) by extending the Dafny verifier with its own trigger selection routine, and demonstrates significant predictability and performance gains on both Dafny’s test suite and large verification efforts using Dafny. [en_US]
dc.language.iso: en_US
dc.publisher: Springer International Publishing [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1007/978-3-319-41528-4_20 [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Pit-Claudel [en_US]
dc.title: Trigger Selection Strategies to Stabilize Program Verifiers [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Leino, K. R. M., and Clément Pit-Claudel. “Trigger Selection Strategies to Stabilize Program Verifiers.” Lecture Notes in Computer Science Vol. 9779 (2016): 361–381. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.contributor.mitauthor: Pit-Claudel, Clement F. [en_US]
dc.relation.journal: Computer Aided Verification [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dspace.embargo.terms: N [en_US]
dc.identifier.orcid: https://orcid.org/0000-0002-1900-3901
mit.license: PUBLISHER_POLICY [en_US]
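
The abstract above refers to matching loops and to explicit trigger selection in the Dafny verifier. The sketch below is a minimal, hypothetical illustration of that problem, not an example taken from the paper: the uninterpreted function f, the predicate g, and the lemma are invented here, and the {:trigger} attribute is Dafny's mechanism for attaching an explicit trigger to a user-written quantifier.

    // Uninterpreted function and predicate, declared only for illustration.
    function f(x: int): int
    predicate g(x: int)

    lemma IllustrateMatchingLoop()
    {
      // Ill-behaved quantifier: its only natural trigger is f(x), but every
      // instantiation introduces the new term f(f(x)), which matches the
      // trigger again, so the SMT solver can keep instantiating the formula.
      // This self-feeding pattern is a matching loop.
      assume forall x: int :: f(x) == f(f(x));

      // A better-behaved formulation: the explicit {:trigger} attribute limits
      // instantiation to terms of the form g(x) that already occur in the
      // proof context; instantiating the body produces no new matches, so the
      // loop is broken.
      assume forall x: int {:trigger g(x)} :: g(x) ==> f(x) == f(f(x));
    }

Because these quantifiers are written at the Dafny source level, a trigger selection routine inside the verifier (as described in the abstract) can detect the first shape and warn about it or choose safer triggers, rather than leaving the choice to the heavily encoded terms inside the SMT solver.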

