Assessing the performance of human-automation collaborative planning systems
Author(s)
Ryan, Jason C. (Jason Christopher)
Other Contributors
Massachusetts Institute of Technology. Dept. of Aeronautics and Astronautics.
Advisor
Mary L. Cummings.
Abstract
Planning and Resource Allocation (P/RA) Human Supervisory Control (HSC) systems combine the capabilities of human operators and automated planning algorithms to schedule tasks for complex systems. In these systems, the human operator and the algorithm work collaboratively to generate new scheduling plans, each contributing a distinct set of strengths and weaknesses. A systems engineering approach to the design and assessment of P/RA HSC systems requires examining each of these aspects individually, as well as examining the performance of the system as a whole in accomplishing its tasks. One obstacle to this analysis is the lack of a standardized testing protocol and a standardized set of metric classes that define HSC system performance. A further issue is the lack of a comparison point for these revolutionary systems, which must be validated against current operations before implementation.

This research proposes a method for developing test metrics and a testing protocol for P/RA HSC systems. A representative P/RA HSC system, designed to perform high-level task planning for deck operations on United States Navy aircraft carriers, is used in the testing program: human users collaborate with the planning algorithm to generate new schedules for aircraft and crew members engaged in carrier deck operations. A metric class hierarchy is developed and used to create a detailed set of metrics for this system, allowing analysts to detect performance differences between planning configurations and to characterize the performance of a single planner across levels of environment complexity. To validate the system, these metrics are applied in a testing program with three planning conditions, with a focus on validating the performance of the combined Human-Algorithm planning configuration.

Analysis of the experimental results showed that the protocol successfully provided points of comparison among planners within a given scenario and explained the root causes of performance variations between planning conditions. The protocol also described relative performance across complexity levels. The results demonstrate that the combined Human-Algorithm planning condition performed poorly in the simplest and most complex scenarios, due to errors in recognizing a transient state condition and in modeling the effects of certain actions, respectively. They also show that Human planning performance remained relatively consistent as complexity increased, while combined Human-Algorithm planning was effective only at moderate complexity levels. Although the testing protocol was effective for these scenarios and this planning algorithm, several limiting factors remain. Further research must address how the effectiveness of the defined metrics and the test methodology changes as different types of planning algorithms are used and as larger numbers of human test subjects are incorporated.
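The abstract does not specify how the metric class hierarchy is implemented. The following is a minimal, hypothetical Python sketch of how such a hierarchy might be organized to summarize metrics per planning condition; all class names, metric names, and values are illustrative assumptions, not the thesis's actual design.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical sketch of a metric class hierarchy for assessing
# P/RA HSC planners. Names, structure, and values are illustrative
# assumptions, not the thesis's actual implementation.

@dataclass
class Metric:
    """A single leaf measurement, e.g. mean launch delay."""
    name: str
    values: list = field(default_factory=list)

    def record(self, value: float) -> None:
        self.values.append(value)

    def summary(self) -> float:
        return mean(self.values) if self.values else float("nan")

@dataclass
class MetricClass:
    """A node in the hierarchy grouping related metrics."""
    name: str
    metrics: list = field(default_factory=list)

def compare_conditions(results: dict) -> None:
    """Print summary statistics for each planning condition
    (e.g. Human, Algorithm, combined Human-Algorithm)."""
    for condition, classes in results.items():
        for metric_class in classes:
            for metric in metric_class.metrics:
                print(f"{condition:>16} | {metric_class.name} / "
                      f"{metric.name}: {metric.summary():.2f}")

# Usage with made-up scores for one hypothetical metric.
delay = Metric("mean launch delay (s)")
for observed in (12.0, 9.5, 14.2):
    delay.record(observed)

effectiveness = MetricClass("Mission Effectiveness", metrics=[delay])
compare_conditions({"Human-Algorithm": [effectiveness]})
```

Grouping leaf metrics under named classes in this way would let an analyst aggregate results at either level, consistent with the abstract's goal of comparing planners within a scenario and across complexity levels.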
Description
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 215-221).
Date issued
2011
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Publisher
Massachusetts Institute of Technology
Keywords
Aeronautics and Astronautics.