Uncertainty and sensitivity analysis for long-running computer codes : a critical review
Author(s): Langewisch, Dustin R.
Massachusetts Institute of Technology. Dept. of Nuclear Science and Engineering.
Advisor: George E. Apostolakis.
This thesis presents a critical review of existing methods for performing probabilistic uncertainty and sensitivity analysis for complex, computationally expensive simulation models. Uncertainty analysis (UA) methods reviewed include standard Monte Carlo simulation, Latin Hypercube sampling, importance sampling, line sampling, and subset simulation. Sensitivity analysis (SA) methods include scatter plots, Monte Carlo filtering, regression analysis, variance-based methods (Sobol' sensitivity indices and Sobol' Monte Carlo algorithms), and Fourier amplitude sensitivity tests. In addition, this thesis reviews several existing metamodeling techniques that are intended to provide quick-running approximations to the computer models being studied. Because stochastic simulation-based UA and SA rely on a large number (e.g., several thousand) of simulations, metamodels are recognized as a necessary compromise when UA and SA must be performed with long-running (i.e., several hours or days per simulation) computational models. This thesis discusses the use of polynomial Response Surfaces (RS), Artificial Neural Networks (ANN), and Kriging/Gaussian Processes (GP) for metamodeling. Moreover, two methods are discussed for estimating the uncertainty introduced by the metamodel. The first of these methods is based on a bootstrap sampling procedure and can be utilized for any metamodeling technique. The second method is specific to GP models and is based on a Bayesian interpretation of the underlying stochastic process. Finally, to demonstrate the use of these methods, the results from two case studies involving the reliability assessment of passive nuclear safety systems are presented. The general conclusions of this work are that polynomial RSs are frequently incapable of adequately representing the complex input/output behavior exhibited by many mechanistic models.
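To make the sampling side concrete, the following is a minimal sketch of Latin Hypercube sampling, one of the UA methods surveyed in the thesis. The function name and interface are illustrative, not taken from the thesis; the scheme shown is the standard one, where each dimension is divided into equal-probability strata and exactly one point is drawn per stratum.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin Hypercube sample on the unit hypercube [0, 1)^n_dims.

    Each dimension is split into n_samples equal strata; one uniform
    draw is taken inside each stratum, and the strata are then paired
    at random across dimensions by permuting each column.
    """
    rng = np.random.default_rng(rng)
    # One uniform draw inside each of the n_samples strata, per dimension.
    u = rng.uniform(size=(n_samples, n_dims))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    # Independently permute each column so the strata pairings are random.
    for j in range(n_dims):
        strata[:, j] = rng.permutation(strata[:, j])
    return strata

X = latin_hypercube(1000, 3, rng=0)  # 1000 stratified points in 3 dimensions
```

Compared with plain Monte Carlo, this guarantees that every marginal is evenly covered, which is why LHS typically reduces estimator variance for the same number of model runs.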
In addition, the goodness-of-fit of the RS should not be misinterpreted as a measure of the predictive capability of the metamodel, since RSs are necessarily biased predictors for deterministic computer models. Furthermore, the extent of this bias is not measured by standard goodness-of-fit metrics (e.g., the coefficient of determination, R²), so these methods tend to provide overly optimistic indications of the quality of the metamodel. The bootstrap procedure does provide an indication of the extent of this bias, with the bootstrap confidence intervals for the RS estimates generally being significantly wider than those of the alternative metamodeling methods. It has been found that the added flexibility afforded by ANNs and GPs can make these methods superior for approximating complex models. In addition, GPs are exact interpolators, which is an important feature when the underlying computer model is deterministic (i.e., when there is no justification for including a random error component in the metamodel). On the other hand, when the number of observations from the computer model is sufficiently large, all three methods appear to perform comparably, indicating that in such cases, RSs can still provide useful approximations.
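The bootstrap idea referred to above can be sketched as follows for the simplest case, a one-dimensional polynomial response surface. This is an illustrative percentile-bootstrap implementation under assumed names and interfaces, not the thesis's own code; the thesis applies the procedure to general metamodels.

```python
import numpy as np

def bootstrap_rs_interval(x, y, x_new, degree=2, n_boot=500,
                          alpha=0.05, rng=None):
    """Percentile bootstrap interval for a polynomial response-surface
    prediction at x_new, as a rough gauge of metamodel uncertainty.

    Resamples the (x, y) design points with replacement, refits the
    polynomial on each resample, and takes quantiles of the resulting
    predictions at x_new.
    """
    rng = np.random.default_rng(rng)
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    preds = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample design points
        coef = np.polyfit(x[idx], y[idx], degree)  # refit the RS
        preds[b] = np.polyval(coef, x_new)
    lo, hi = np.quantile(preds, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

A wide interval at x_new signals that the fitted response surface is sensitive to which model runs happened to be in the design, which is exactly the kind of warning that a high R² on the training data fails to give.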
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2010. "February 2010." Cataloged from PDF version of thesis. Includes bibliographical references (p. 137-146).