Show simple item record

dc.contributor.author: Sheridan, Thomas B.
dc.date.accessioned: 2021-02-12T22:19:36Z
dc.date.available: 2021-02-12T22:19:36Z
dc.date.issued: 2019-05
dc.date.submitted: 2018-08
dc.identifier.issn: 1664-1078
dc.identifier.uri: https://hdl.handle.net/1721.1/129761
dc.description.abstract: Computer-based automation of sensing, analysis, memory, decision-making, and control in industrial, business, medical, scientific, and military applications is becoming increasingly sophisticated, employing various techniques of artificial intelligence for learning, pattern recognition, and computation. Research has shown that proper use of automation is highly dependent on operator trust. As a result, the topic of trust has become an active subject of research and discussion in the applied disciplines of human factors and human-systems integration. While various papers have pointed to the many factors that influence trust, there currently exists no consensual definition of trust. This paper reviews previous studies of trust in automation with emphasis on its meaning and on the factors determining subjective assessment of trust and of automation trustworthiness (which sometimes, but not always, is regarded as an objectively measurable property of the automation). The paper asserts that certain attributes normally associated with human morality can usefully be applied to computer-based automation as it becomes more intelligent and more responsive to its human user. The paper goes on to suggest that the automation, based on its own experience with the user, can develop reciprocal attributes that characterize its own trust of the user and adapt accordingly. This situation can be modeled as a formal game in which the automation user and the automation (computer) each engage the other according to a payoff matrix of utilities (benefits and costs). While this is a concept paper lacking empirical data, it offers hypotheses by which future researchers can test for individual differences in the detailed attributes of trust in automation, and determine criteria for adjusting automation design to best accommodate these user differences.
dc.language.iso: en
dc.publisher: Frontiers Media SA
dc.relation.isversionof: http://dx.doi.org/10.3389/fpsyg.2019.01117
dc.rights: Creative Commons Attribution 4.0 International license
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Frontiers
dc.title: Individual Differences in Attributes of Trust in Automation: Measurement and Application to System Design
dc.type: Article
dc.identifier.citation: Sheridan, Thomas B. et al. "Individual Differences in Attributes of Trust in Automation: Measurement and Application to System Design." Frontiers in Psychology (May 2019): 1117 © 2019 Sheridan
dc.contributor.department: Massachusetts Institute of Technology. Department of Mechanical Engineering
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.relation.journal: Frontiers in Psychology
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2019-07-18T12:08:06Z
dspace.date.submission: 2019-07-18T12:08:11Z
mit.journal.volume: 10
mit.metadata.status: Complete

