Faster Feedback with AI? A Test Prioritization Study
Author(s)
Mattis, Toni; Böhme, Lukas; Krebs, Eva; Rinard, Martin C.; Hirschfeld, Robert
Terms of use
Creative Commons Attribution
Abstract
Feedback during programming is desirable, but its usefulness depends on immediacy and relevance to the task. Unit and regression testing are practices to ensure programmers can obtain feedback on their changes; however, running a large test suite is rarely fast, and only a few results are relevant.
Identifying tests relevant to a change can help programmers in two ways: upcoming issues can be detected earlier during programming, and relevant tests can serve as examples to help programmers understand the code they are editing.
In this work, we describe an approach to evaluate how well large language models (LLMs) and embedding models can judge the relevance of a test to a change. We construct a dataset by applying faulty variations to real-world code changes and measure whether each model can nominate the failing tests beforehand.
We found that, while embedding models perform best on such a task, even simple information retrieval models are surprisingly competitive. In contrast, pre-trained LLMs are of limited use as they focus on confounding aspects like coding styles.
We argue that the high computational cost of AI models is not always justified, and tool developers should also consider non-AI models for code-related retrieval and recommendation tasks. Lastly, we generalize from unit tests to live examples and outline how our approach can benefit live programming environments.
Description
‹Programming› Companion ’24, March 11–15, 2024, Lund, Sweden
Date issued
2024-03-11
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Publisher
ACM | Companion Proceedings of the 8th International Conference on the Art, Science, and Engineering of Programming
Citation
Mattis, Toni, Böhme, Lukas, Krebs, Eva, Rinard, Martin C., and Hirschfeld, Robert. 2024. "Faster Feedback with AI? A Test Prioritization Study."
Version: Final published version
ISBN
979-8-4007-0634-9