Evaluation Criteria for Human-Automation Performance Metrics
Author(s)
Pina, Patricia Elena; Cummings, M. L.; Donmez, Birsen
Download: Cummings_Evaluation Criteria.pdf (522.6 KB)
Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Previous research has identified broad metric classes for human-automation performance to facilitate metric selection and to aid the understanding and comparison of research results. However, there is still a lack of an objective method for selecting the most efficient set of metrics. This research identifies and presents a list of evaluation criteria that can help determine the quality of a metric in terms of experimental constraints, comprehensive understanding, construct validity, statistical efficiency, and measurement technique efficiency. Future research will build on these evaluation criteria and existing generic metric classes to develop a cost-benefit analysis approach for metric selection.
Date issued
2008-01
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Journal
ACM Workshop on Performance Metrics for Intelligent Systems
Publisher
Association for Computing Machinery
Citation
Donmez, Birsen, Patricia E. Pina, and M. L. Cummings. "Evaluation Criteria for Human-Automation Performance Metrics." Proceedings of the 8th Workshop on Performance Metrics for Intelligent Systems. Gaithersburg, Maryland: ACM, 2008. 77-82. © 2008 Association for Computing Machinery.
Version: Author's final manuscript
ISBN
978-1-60558-293-1
Keywords
Metric Quality, Human Supervisory Control, Validity, Statistics, Experiments