
dc.contributor.advisor    Michael Ernst    en_US
dc.contributor.author     Saff, David    en_US
dc.contributor.author     Boshernitsan, Marat    en_US
dc.contributor.author     Ernst, Michael D.    en_US
dc.contributor.other      Program Analysis    en_US
dc.date.accessioned       2008-01-15T14:15:58Z
dc.date.available         2008-01-15T14:15:58Z
dc.date.issued            2008-01-14    en_US
dc.identifier.other       MIT-CSAIL-TR-2008-002    en_US
dc.identifier.uri         http://hdl.handle.net/1721.1/40090
dc.description.abstract   Automated testing during development helps ensure that software works according to the test suite. Traditional test suites verify a few well-picked scenarios or example inputs. However, such example-based testing does not uncover errors in legal inputs that the test writer overlooked. We propose theory-based testing as an adjunct to example-based testing. A theory generalizes a (possibly infinite) set of example-based tests. A theory is an assertion that should be true for any data, and it can be exercised by human-chosen data or by automatic data generation. A theory is expressed in an ordinary programming language, it is easy for developers to use (often even easier than example-based testing), and it serves as a lightweight form of specification. Six case studies demonstrate the utility of theories that generalize existing tests to prevent bugs, clarify intentions, and reveal design problems.    en_US
dc.format.extent          10 p.    en_US
dc.relation               Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory    en_US
dc.subject                JUnit, testing, partial specification    en_US
dc.title                  Theories in Practice: Easy-to-Write Specifications that Catch Bugs    en_US
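
The abstract's notion of a theory can be made concrete with a small sketch. Because the record's subject keywords name JUnit, the sketch below uses JUnit 4's experimental Theories runner; the test class name, the data points, and the string-reversal property are illustrative assumptions, not examples taken from the report itself.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assume.assumeTrue;

    import org.junit.experimental.theories.DataPoints;
    import org.junit.experimental.theories.Theories;
    import org.junit.experimental.theories.Theory;
    import org.junit.runner.RunWith;

    // Hypothetical test class: the data points and the reversal property are
    // illustrative, not drawn from the report.
    @RunWith(Theories.class)
    public class ReversalTheoryTest {

        // Human-chosen data points; the runner exercises the theory with each one.
        // Automatic data generation could supply further values.
        @DataPoints
        public static String[] samples = {"", "a", "abc", "racecar"};

        // A theory: an assertion that should hold for any legal input. It generalizes
        // an example-based test such as checking that reversing "abc" twice yields "abc".
        @Theory
        public void reversingTwiceGivesBackTheOriginal(String s) {
            assumeTrue(s != null);  // the assumption restricts the theory to legal inputs
            assertEquals(s, new StringBuilder(s).reverse().reverse().toString());
        }
    }

Each data point that satisfies the assumption becomes one check of the property, so adding data later strengthens the test without writing new assertion code.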

