DSpace@MIT

Testing and evaluation of military systems in a high stakes environment

Author(s)
Moyer, Raphael (Raphael E.)
Download: Full printable version (7.025 MB)
Other Contributors
Massachusetts Institute of Technology. Dept. of Mechanical Engineering.
Advisor
Ricardo Valerdi and Warren Seering.
Terms of use
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Testing is a critical element of systems engineering, as it allows engineers to ensure that products meet specifications before they go into production. The testing literature, however, has been largely theoretical and is difficult to apply to the real-world decisions that testers and program managers face daily. Nowhere is this problem more evident than for military systems, where testing is complicated by a variety of factors, including politics and the complexities of military operations. Because military systems are unique, the consequences of failure can be very large and thus require special testing considerations: program managers need to be absolutely sure that the system will not fail. In short, because of the high-stakes consequences associated with the development and use of military systems, testers must adjust their testing strategies to ensure that those consequences are adequately mitigated.

The high-consequence space is broken down into two types of consequences: programmatic and operational. Programmatic consequences occur while a system is under development and result when insufficient testing is conducted on a system, leaving the program manager with inadequate certainty that the system works to specification. When the program comes under inevitable scrutiny, a lack of testing data makes it difficult to defend and can thus result in program termination. To address programmatic consequences, testers must use a broad-based and adaptive test plan that ensures adequate testing across all system attributes, since a failure in any attribute might lead to program termination. To connect programmatic consequences to the realities of system development, the developments of the Division Air Defense System (DIVAD) and the M-1 Abrams main battle tank are examined in comparative perspective, using testing as an explanation for their dramatically different programmatic outcomes. The DIVAD's testing strategy was not adequate, and the program was terminated amid public and Congressional criticism; the M-1's strategy, by contrast, was very rigorous, allowing the system to avoid programmatic consequences despite criticism.

Operational consequences result from failures of specific attributes during military operations, after the system has already been fielded. They are distinguished by their disproportionate impacts at the operational and strategic levels of operations, and they require targeted testing based on analysis of critical system attributes. The procedure for this analysis is established through two case studies. The first examines a sensor network designed to stop SCUD launches in austere areas; the second, designed to analyze one system across several missions, assesses the potential operational consequences of failures in the Predator drone's system attributes. The thesis seeks to better define the consequences of system failure with the understanding that the military world is in many ways unique from the civilian world. Implicit in this thesis is a plea for program managers to think carefully before cutting testing time to reduce program costs and shorten schedules, because less testing means a higher likelihood of disastrous programmatic consequences and less insurance against operational consequences that can dramatically affect the lives of troops in the field.
Description
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010.
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (p. 79-82).
 
Date issued
2010
URI
http://hdl.handle.net/1721.1/59951
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Publisher
Massachusetts Institute of Technology
Keywords
Mechanical Engineering.

Collections
  • Undergraduate Theses

Content created by the MIT Libraries, CC BY-NC unless otherwise noted.