dc.contributor.advisor | Michael Ernst | en_US |
dc.contributor.author | Saff, David | en_US |
dc.contributor.author | Boshernitsan, Marat | en_US |
dc.contributor.author | Ernst, Michael D. | en_US |
dc.contributor.other | Program Analysis | en_US |
dc.date.accessioned | 2008-01-15T14:15:58Z | |
dc.date.available | 2008-01-15T14:15:58Z | |
dc.date.issued | 2008-01-14 | en_US |
dc.identifier.other | MIT-CSAIL-TR-2008-002 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/40090 | |
dc.description.abstract | Automated testing during development helps ensure that software works according to the test suite. Traditional test suites verify a few well-picked scenarios or example inputs. However, such example-based testing does not uncover errors in legal inputs that the test writer overlooked. We propose theory-based testing as an adjunct to example-based testing. A theory generalizes a (possibly infinite) set of example-based tests. A theory is an assertion that should be true for any data, and it can be exercised by human-chosen data or by automatic data generation. A theory is expressed in an ordinary programming language, it is easy for developers to use (often even easier than example-based testing), and it serves as a lightweight form of specification. Six case studies demonstrate the utility of theories that generalize existing tests to prevent bugs, clarify intentions, and reveal design problems. | en_US |
dc.format.extent | 10 p. | en_US |
dc.relation | Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory | en_US |
dc.subject | JUnit, testing, partial specification | en_US |
dc.title | Theories in Practice: Easy-to-Write Specifications that Catch Bugs | en_US |
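To make the abstract's idea concrete, here is a minimal sketch of a "theory" as the paper describes it: an assertion that should hold for any input, here checked over a hand-chosen set of data points. This is not code from the paper; it uses plain Java (a simple loop and `AssertionError`) rather than JUnit 4's Theories runner, whose `@Theory` and `@DataPoints` annotations automate exactly this pattern. The property, data points, and class name are illustrative assumptions.

```java
// Sketch of theory-based testing: a theory is an assertion that should
// hold for ANY legal input, not just a few hand-picked examples.
// JUnit 4's Theories runner (@RunWith(Theories.class), @Theory,
// @DataPoints) automates the loop below; this plain-Java version is
// self-contained. The property chosen here is illustrative.
public class ReverseTheory {

    // The theory: reversing a string twice yields the original string.
    static void reverseTwiceIsIdentity(String s) {
        String once = new StringBuilder(s).reverse().toString();
        String twice = new StringBuilder(once).reverse().toString();
        if (!twice.equals(s)) {
            throw new AssertionError("theory violated for: " + s);
        }
    }

    public static void main(String[] args) {
        // Human-chosen data points; an automatic generator could supply
        // these instead, as the abstract notes.
        String[] dataPoints = {"", "a", "ab", "racecar", "hello world"};
        for (String s : dataPoints) {
            reverseTwiceIsIdentity(s);
        }
        System.out.println("theory held for " + dataPoints.length + " data points");
    }
}
```

The contrast with example-based testing is that the assertion is written once, independent of any particular input, so adding a new data point (or plugging in a generator) exercises the same generalized check.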