Show simple item record

dc.contributor.author: Ahmad, Mak
dc.contributor.author: Ravi, Prerna
dc.contributor.author: Karger, David
dc.contributor.author: Facciotti, Marc
dc.date.accessioned: 2025-08-28T20:59:14Z
dc.date.available: 2025-08-28T20:59:14Z
dc.date.issued: 2025-07-17
dc.identifier.isbn: 979-8-4007-1291-3
dc.identifier.uri: https://hdl.handle.net/1721.1/162577
dc.description: L@S '25, Palermo, Italy
dc.description.abstract: Providing personalized, detailed feedback at scale in large undergraduate STEM courses remains a persistent challenge. We present an empirically evaluated practice exam system that integrates AI-generated feedback with targeted textbook references, deployed in a large introductory biology course. Our system specifically aims to encourage metacognitive behavior by asking students to explain their answers and declare their confidence. It uses OpenAI's GPT-4o to generate personalized feedback based on this information, while directing students to relevant textbook sections. Through detailed interaction logs from consenting participants across three midterms (541, 342, and 413 students respectively), totaling 28,313 question-student interactions across 146 learning objectives, along with 279 post-exam surveys and 23 semi-structured interviews, we examined the system's impact on learning outcomes and student engagement. Analysis showed that across all midterms, the different feedback types showed no statistically significant differences in performance, though there were some trends suggesting potential benefits worth further investigation. The system's most substantial impact emerged through its required confidence ratings and explanations, which students reported transferring to their actual exam strategies. Approximately 40% of students engaged with textbook references when prompted by feedback—significantly higher than traditional reading compliance rates. Survey data revealed high student satisfaction (M=4.1/5), with 82.1% reporting increased confidence on midterm topics they had practiced, and 73.4% indicating they could recall and apply specific concepts from practice sessions. Our findings demonstrate how thoughtfully designed AI-enhanced systems can scale formative assessment while promoting sustainable study practices and self-regulated learning behaviors, suggesting that embedding structured reflection requirements may be more impactful than sophisticated feedback mechanisms.
dc.publisher: ACM | Proceedings of the Twelfth ACM Conference on Learning @ Scale
dc.relation.isversionof: https://doi.org/10.1145/3698205.3729542
dc.rights: Creative Commons Attribution
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Association for Computing Machinery
dc.title: How Adding Metacognitive Requirements in Support of AI Feedback in Practice Exams Transforms Student Learning Behaviors
dc.type: Article
dc.identifier.citation: Mak Ahmad, Prerna Ravi, David Karger, and Marc Facciotti. 2025. How Adding Metacognitive Requirements in Support of AI Feedback in Practice Exams Transforms Student Learning Behaviors. In Proceedings of the Twelfth ACM Conference on Learning @ Scale (L@S '25). Association for Computing Machinery, New York, NY, USA, 164–175.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2025-08-01T08:01:11Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2025-08-01T08:01:11Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

