Show simple item record

dc.contributor.advisor: Ricardo Valerdi and Warren Seering
dc.contributor.author: Moyer, Raphael (Raphael E.)
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Mechanical Engineering
dc.date.accessioned: 2010-11-08T17:50:49Z
dc.date.available: 2010-11-08T17:50:49Z
dc.date.copyright: 2010
dc.date.issued: 2010
dc.identifier.uri: http://hdl.handle.net/1721.1/59951
dc.description: Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010.
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (p. 79-82).
dc.description.abstract: Testing is a critical element of systems engineering, as it allows engineers to ensure that products meet specifications before they go into production. The testing literature, however, has been largely theoretical and is difficult to apply to the real-world decisions that testers and program managers face daily. Nowhere is this problem more present than for military systems, where testing is complicated by a variety of factors, such as politics and the complexities of military operations. Because of the uniqueness of military systems, the consequences of failure can be very large and thus require special testing considerations, as program managers need to be absolutely sure that the system will not fail. In short, because of the high-stakes consequences associated with the development and use of military systems, testers must adjust their testing strategies to ensure that those consequences are adequately mitigated. The high-consequence space is broken down into two types of consequences: programmatic and operational. Programmatic consequences occur while a system is under development and result when insufficient testing is conducted on a system, leaving a program manager with inadequate certainty that the system works to specification. When the program comes under inevitable scrutiny, a lack of testing data makes the program difficult to defend and can thus result in program termination. To address programmatic consequences, testers must use a broad-based and adaptive test plan that ensures adequate testing across all system attributes, as a failure in any attribute might lead to program termination. To connect programmatic consequences to the realities of system development, the developments of the Division Air Defense System (DIVAD) and the M-1 Abrams main battle tank are examined in comparative perspective, using testing as an explanation for their dramatically different programmatic outcomes. The DIVAD's testing strategy was not adequate, and the program was terminated amid public and Congressional criticism; the M-1's strategy, by contrast, was very rigorous, allowing the system to avoid programmatic consequences despite criticism. Operational consequences result from failures of specific attributes during military operations, after the system has already been fielded. Operational consequences are distinguished by their disproportionate impacts at the operational and strategic levels, and they require targeted testing based on analysis of critical system attributes. The procedure for this analysis is established through two case studies. The first examines a sensor network designed to stop SCUD launches in austere areas; the second, designed to analyze one system across several missions, examines the potential operational consequences of failures in the Predator drone's system attributes. The following seeks to better define the consequences of system failure with the understanding that the military world is in many ways unlike the civilian world. Implicit in this thesis is a plea for program managers to think carefully before cutting testing time to reduce program costs and shorten schedules, because less testing means a higher likelihood of disastrous programmatic consequences and less insurance against operational consequences that can dramatically affect the lives of troops in the field.
dc.description.statementofresponsibility: by Raphael Moyer
dc.format.extent: 82 p.
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Mechanical Engineering
dc.title: Testing and evaluation of military systems in a high stakes environment
dc.type: Thesis
dc.description.degree: S.B.
dc.contributor.department: Massachusetts Institute of Technology. Department of Mechanical Engineering
dc.identifier.oclc: 676947075

