Impact of Model Interpretability and Outcome Feedback on Trust in AI
Author(s)
Ahn, Daehwan; Almaatouq, Abdullah; Gulabani, Monisha; Hosanagar, Kartik
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Despite a rich history of investigating smartphone overuse intervention techniques, AI-based just-in-time adaptive intervention (JITAI) methods for overuse reduction are lacking. We develop Time2Stop, an intelligent, adaptive, and explainable JITAI system that leverages machine learning to identify optimal intervention timings, introduces interventions with transparent AI explanations, and collects user feedback to establish a human-AI loop and adapt the intervention model over time. We conducted an 8-week field experiment (N=71) to evaluate the effectiveness of both the adaptation and explanation aspects of Time2Stop. Our results indicate that our adaptive models significantly outperform the baseline methods on intervention accuracy (>32.8% relative improvement) and receptivity (>8.0%). In addition, incorporating explanations further enhances effectiveness by 53.8% on accuracy and 11.4% on receptivity. Moreover, Time2Stop significantly reduces overuse, decreasing app visit frequency by 7.0–8.9%. Our subjective data echoed these quantitative measures: participants preferred the adaptive interventions and rated the system highly on intervention timing accuracy, effectiveness, and level of trust. We envision that our work can inspire future research on JITAI systems with a human-AI loop that evolve with their users.
Description
CHI '24: Proceedings of the CHI Conference on Human Factors in Computing Systems May 11–16, 2024, Honolulu, HI, USA
Date issued
2024-05-11
Department
Sloan School of Management
Publisher
ACM
Citation
Ahn, Daehwan, Almaatouq, Abdullah, Gulabani, Monisha and Hosanagar, Kartik. 2024. "Impact of Model Interpretability and Outcome Feedback on Trust in AI."
Version: Final published version
ISBN
979-8-4007-0330-0