DSpace@MIT
How Adding Metacognitive Requirements in Support of AI Feedback in Practice Exams Transforms Student Learning Behaviors

Author(s)
Ahmad, Mak; Ravi, Prerna; Karger, David; Facciotti, Marc
Download: 3698205.3729542.pdf (1.723 MB)
Publisher with Creative Commons License

Creative Commons Attribution

Terms of use
Creative Commons Attribution: https://creativecommons.org/licenses/by/4.0/
Abstract
Providing personalized, detailed feedback at scale in large undergraduate STEM courses remains a persistent challenge. We present an empirically evaluated practice exam system that integrates AI-generated feedback with targeted textbook references, deployed in a large introductory biology course. The system specifically aims to encourage metacognitive behavior by asking students to explain their answers and declare their confidence; it uses OpenAI's GPT-4o to generate personalized feedback from this information while directing students to relevant textbook sections. Through detailed interaction logs from consenting participants across three midterms (541, 342, and 413 students, respectively), totaling 28,313 question-student interactions across 146 learning objectives, along with 279 post-exam surveys and 23 semi-structured interviews, we examined the system's impact on learning outcomes and student engagement. Across all midterms, the different feedback types produced no statistically significant differences in performance, though some trends suggested potential benefits worth further investigation. The system's most substantial impact emerged through its required confidence ratings and explanations, which students reported transferring to their actual exam strategies. Approximately 40% of students engaged with textbook references when prompted by feedback, a rate significantly higher than traditional reading-compliance rates. Survey data revealed high student satisfaction (M = 4.1/5), with 82.1% reporting increased confidence on midterm topics they had practiced and 73.4% indicating they could recall and apply specific concepts from practice sessions.
Our findings demonstrate how thoughtfully designed AI-enhanced systems can scale formative assessment while promoting sustainable study practices and self-regulated learning behaviors, suggesting that embedding structured reflection requirements may be more impactful than sophisticated feedback mechanisms.
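The abstract describes a flow in which each student submits an answer, an explanation of their reasoning, and a confidence rating, and the system sends this bundle to GPT-4o to produce feedback with textbook pointers. The sketch below is an illustration of that described flow, not the authors' implementation; the `build_feedback_prompt` helper, its prompt wording, and the example biology content are all assumptions.

```python
# Hypothetical sketch of the prompt assembly the abstract describes:
# the student's answer, self-explanation, and confidence rating are
# combined with candidate textbook sections into one feedback request.

def build_feedback_prompt(question: str, answer: str,
                          explanation: str, confidence: int,
                          textbook_sections: list[str]) -> str:
    """Assemble a feedback request from the student's response and
    their metacognitive inputs (explanation + confidence)."""
    refs = "\n".join(f"- {s}" for s in textbook_sections)
    return (
        f"Question: {question}\n"
        f"Student answer: {answer}\n"
        f"Student's explanation of their reasoning: {explanation}\n"
        f"Self-reported confidence (1-5): {confidence}\n\n"
        "Give personalized feedback on the answer and the reasoning, "
        "and point the student to the most relevant of these "
        f"textbook sections:\n{refs}"
    )

prompt = build_feedback_prompt(
    question="What does DNA ligase do during replication?",
    answer="It joins Okazaki fragments on the lagging strand.",
    explanation="Ligase seals nicks between adjacent fragments.",
    confidence=4,
    textbook_sections=["Ch. 14.4 DNA Replication", "Ch. 14.5 DNA Repair"],
)

# A deployment would then send the prompt to the model, e.g. with
# OpenAI's Python SDK (requires an API key, so shown commented out):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}],
# )
```

Requiring the explanation and confidence fields before feedback is generated is what the paper identifies as the system's main lever: the structured reflection, more than the feedback text itself, is what students reported carrying into real exams.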
Description
L@S ’25, Palermo, Italy
Date issued
2025-07-17
URI
https://hdl.handle.net/1721.1/162577
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
ACM | Proceedings of the Twelfth ACM Conference on Learning @ Scale
Citation
Mak Ahmad, Prerna Ravi, David Karger, and Marc Facciotti. 2025. How Adding Metacognitive Requirements in Support of AI Feedback in Practice Exams Transforms Student Learning Behaviors. In Proceedings of the Twelfth ACM Conference on Learning @ Scale (L@S '25). Association for Computing Machinery, New York, NY, USA, 164–175.
Version: Final published version
ISBN
979-8-4007-1291-3

Collections
  • MIT Open Access Articles
