Situational Cueing for Trust Calibration in Automated Systems
Author(s)
Forsey-Smerek, Alexandra M.
Advisor
Newman, Dava J.
Shah, Julie A.
Abstract
Appropriately calibrated human trust is essential for safe and successful interactions between humans and automation. While undertrust in a system can lead to system disuse and suboptimal task performance, overtrust can result in reduced user situation awareness and susceptibility to the consequences of system failure. In dynamic domains, fluctuations in automation performance demand that user trust adapt appropriately. Recent attention has focused on the presentation of trust cues as an interruptive behavioral intervention to assist users in appropriate trust calibration in domains where system transparency information alone does not suffice. This thesis expands the application space of trust cues through the presentation and experimental evaluation of a novel trust cue method, situational trust cues (STCs). In the STCs framework, cues are presented when a situational update, such as a change in environmental conditions or task type, significantly affects how the user should trust the automated system. Theory behind the presentation, design, and effectiveness of STCs is presented.
STCs were evaluated in an in-person experiment with 64 participants to investigate their effectiveness in mitigating user overtrust and undertrust in automation in a dynamic mission operations environment. In general, participants reported that STCs were helpful but not required. Additional findings highlighted the negative consequences of presenting trust cues too frequently on the perceived utility of cues, suggesting that appropriate presentation of trust cues of any type is critical for cues to retain their impact on behavior. Additionally, a post-hoc analysis of participant strategies for interacting with the automated system uncovered significant effects of participant mental model inaccuracies and individual biases on appropriate trust calibration. While limitations of the experiment administration prevented further conclusions about the effects of STCs on trust calibration, the findings lay a clear path for promising future work on the evaluation of STCs and provide valuable context for the design of trust cues of all types.
Date issued
2022-09
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Publisher
Massachusetts Institute of Technology