DSpace@MIT

Individual Differences in Attributes of Trust in Automation: Measurement and Application to System Design

Author(s)
Sheridan, Thomas B.
Download
Published version (468.9 kB)
Terms of use
Creative Commons Attribution 4.0 International license https://creativecommons.org/licenses/by/4.0/
Abstract
Computer-based automation of sensing, analysis, memory, decision-making, and control in industrial, business, medical, scientific, and military applications is becoming increasingly sophisticated, employing various techniques of artificial intelligence for learning, pattern recognition, and computation. Research has shown that proper use of automation is highly dependent on operator trust. As a result, the topic of trust has become an active subject of research and discussion in the applied disciplines of human factors and human-systems integration. While various papers have pointed to the many factors that influence trust, there is currently no consensus definition of trust. This paper reviews previous studies of trust in automation with emphasis on its meaning and on the factors determining subjective assessment of trust and automation trustworthiness (which is sometimes, but not always, regarded as an objectively measurable property of the automation). The paper asserts that certain attributes normally associated with human morality can usefully be applied to computer-based automation as it becomes more intelligent and more responsive to its human user. The paper goes on to suggest that the automation, based on its own experience with the user, can develop reciprocal attributes that characterize its own trust of the user and adapt accordingly. This situation can be modeled as a formal game in which the automation user and the automation (computer) engage one another according to a payoff matrix of utilities (benefits and costs). While this is a concept paper lacking empirical data, it offers hypotheses by which future researchers can test for individual differences in the detailed attributes of trust in automation, and determine criteria for adjusting automation design to best accommodate these user differences.
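The game-theoretic framing in the abstract can be sketched concretely. The paper gives no numerical payoffs, so the 2×2 strategy labels ("rely" vs. "work manually" for the user, "defer" vs. "act autonomously" for the automation) and the utility values below are hypothetical, used only to illustrate how mutual best responses over a payoff matrix of benefits and costs could be computed.

```python
import numpy as np

# Hypothetical bimatrix trust game between a user and an automation.
# Rows  = user strategy:       0 = rely on automation, 1 = work manually
# Cols  = automation strategy: 0 = defer to user,      1 = act autonomously
# The payoff values are illustrative only, not taken from the paper.
user_payoff = np.array([[3, 4],
                        [2, 1]])
auto_payoff = np.array([[2, 4],
                        [3, 1]])

def user_best_response(col):
    """Row that maximizes the user's payoff given the automation's column."""
    return int(np.argmax(user_payoff[:, col]))

def auto_best_response(row):
    """Column that maximizes the automation's payoff given the user's row."""
    return int(np.argmax(auto_payoff[row, :]))

# Pure-strategy equilibria are cells where both choices are mutual best responses.
equilibria = [(r, c)
              for r in range(2)
              for c in range(2)
              if user_best_response(c) == r and auto_best_response(r) == c]
print("Mutual best responses (user row, automation column):", equilibria)
# -> [(0, 1)]: with these illustrative payoffs, the user relies on the
#    automation and the automation acts autonomously.
```

In the paper's framing, the automation would update such payoffs from its own experience with the user and adapt its behavior accordingly; the matrix above is only a placeholder for that adaptive process.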
Date issued
2019-05
URI
https://hdl.handle.net/1721.1/129761
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering; Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Journal
Frontiers in Psychology
Publisher
Frontiers Media SA
Citation
Sheridan, Thomas B. et al. "Individual Differences in Attributes of Trust in Automation: Measurement and Application to System Design." Frontiers in Psychology (May 2019): 1117 © 2019 Sheridan
Version: Final published version
ISSN
1664-1078

Collections
  • MIT Open Access Articles
