Show simple item record

dc.contributor.author: Ahn, Daehwan
dc.contributor.author: Almaatouq, Abdullah
dc.contributor.author: Gulabani, Monisha
dc.contributor.author: Hosanagar, Kartik
dc.date.accessioned: 2024-06-05T15:18:35Z
dc.date.available: 2024-06-05T15:18:35Z
dc.date.issued: 2024-05-11
dc.identifier.isbn: 979-8-4007-0330-0
dc.identifier.uri: https://hdl.handle.net/1721.1/155192
dc.description: CHI '24: Proceedings of the CHI Conference on Human Factors in Computing Systems, May 11–16, 2024, Honolulu, HI, USA
dc.description.abstract: Despite a rich history of investigating smartphone overuse intervention techniques, AI-based just-in-time adaptive intervention (JITAI) methods for overuse reduction are lacking. We develop Time2Stop, an intelligent, adaptive, and explainable JITAI system that leverages machine learning to identify optimal intervention timings, introduces interventions with transparent AI explanations, and collects user feedback to establish a human-AI loop and adapt the intervention model over time. We conducted an 8-week field experiment (N=71) to evaluate the effectiveness of both the adaptation and explanation aspects of Time2Stop. Our results indicate that our adaptive models significantly outperform the baseline methods on intervention accuracy (a relative improvement of >32.8%) and receptivity (>8.0%). In addition, incorporating explanations further enhances effectiveness by 53.8% and 11.4% on accuracy and receptivity, respectively. Moreover, Time2Stop significantly reduces overuse, decreasing app visit frequency by 7.0–8.9%. Our subjective data echoed these quantitative measures: participants preferred the adaptive interventions and rated the system highly on intervention time accuracy, effectiveness, and level of trust. We envision that our work can inspire future research on JITAI systems with a human-AI loop that evolves with users.
dc.publisher: ACM
dc.relation.isversionof: 10.1145/3613904.3642780
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Association for Computing Machinery
dc.title: Impact of Model Interpretability and Outcome Feedback on Trust in AI
dc.type: Article
dc.identifier.citation: Ahn, Daehwan, Almaatouq, Abdullah, Gulabani, Monisha and Hosanagar, Kartik. 2024. "Impact of Model Interpretability and Outcome Feedback on Trust in AI."
dc.contributor.department: Sloan School of Management
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2024-06-01T07:54:01Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2024-06-01T07:54:01Z
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed


