In the first phase of the project, we develop a framework to quantify and control forecasting errors by combining statistical inference procedures with a sufficient amount of experimental information to improve the fidelity of a simulation model built on sound physics or engineering principles. Fidelity is improved by reducing model parameter uncertainty and by estimating model discrepancy bias. How much experimental information is sufficient is typically unknown, because it is application-specific, depending both on the simulation model and on the nature of the experimental data. This project will determine this sufficiency requirement by focusing on the forecasting errors, which are consistently reduced as the available experimental information increases. The rate at which forecasting predictions converge to the truth is expected to depend on the sequence of experiments, each conducted at a different setting.
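The interplay described above can be sketched with a toy one-parameter calibration problem. The simulator form, the discrepancy term, the noise level, and the experimental settings below are all hypothetical illustrations, not the proposal's models: as experiments accumulate, the posterior parameter uncertainty shrinks, while a residual forecasting error persists until the model discrepancy bias itself is estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(a, x):
    # Hypothetical physics-based model: linear in the setting x.
    return a * x

def truth(x):
    # "True" process: the model plus a small discrepancy (0.1 * x**2).
    return 2.0 * x + 0.1 * x**2

sigma = 0.05                          # assumed measurement noise std. dev.
a_grid = np.linspace(1.0, 3.0, 401)   # grid posterior over the parameter a

def posterior(xs, ys):
    """Grid-based posterior over a, flat prior, Gaussian likelihood."""
    loglik = np.zeros_like(a_grid)
    for x, y in zip(xs, ys):
        loglik += -0.5 * ((y - simulator(a_grid, x)) / sigma) ** 2
    p = np.exp(loglik - loglik.max())
    return p / p.sum()

# Run experiments one at a time; track posterior spread and forecast error
# at an unobserved setting x_star.
x_star = 0.5
xs, ys, stds, errors = [], [], [], []
for x in [0.2, 0.3, 0.4, 0.6, 0.8]:
    xs.append(x)
    ys.append(truth(x) + rng.normal(0.0, sigma))
    p = posterior(xs, ys)
    a_mean = np.sum(p * a_grid)
    stds.append(np.sqrt(np.sum(p * (a_grid - a_mean) ** 2)))
    errors.append(abs(simulator(a_mean, x_star) - truth(x_star)))

print("posterior stds:", stds)
print("forecast errors:", errors)
```

In this sketch the posterior standard deviation falls steadily with each experiment, while the forecast error levels off at a floor set by the unmodeled discrepancy, which is exactly why the framework must estimate the discrepancy bias as well as the parameters.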

In the second phase of this proposed work, we investigate this dependence on the sequence of experiments. We will evaluate intelligent design schemes that select experimental settings to reach desired forecasting error levels with a minimum number of experiments. During fiscal year 2009, Los Alamos National Laboratory (LANL) developed a batch sequential calibration approach that, based on the first set of physical experiments, selects the optimal settings for the next set [2]. For instance, one convenient way of measuring the ‘improvement’ in the numerical model through calibration is to measure the change between the prior and posterior probability density functions, which can be quantified by a relative-entropy criterion (the Kullback-Leibler distance). Given the current set of experimental data (x), the next set of experimental points (x*) is selected to maximize the minimum expected information gain. The procedure continues selecting optimal x* in batches until the experimental budget is exhausted or a threshold information gain is no longer met. This approach is referred to as Expected Improvement for Predictive Stability (EIPS).
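A minimal sketch of the relative-entropy criterion, assuming a flat grid prior over a single parameter and a Gaussian likelihood. The candidate settings and the Monte Carlo averaging are illustrative assumptions, and the simple argmax over single settings stands in for the batch maximin selection of the EIPS scheme: for each candidate x*, the expected information gain is the Kullback-Leibler distance from prior to posterior, averaged over outcomes simulated from the prior.

```python
import numpy as np

rng = np.random.default_rng(1)

a_grid = np.linspace(1.0, 3.0, 201)
prior = np.full_like(a_grid, 1.0 / a_grid.size)   # flat prior over parameter a
sigma = 0.05                                       # assumed measurement noise

def kl(p, q):
    """Kullback-Leibler distance KL(p || q) between grid densities."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def expected_gain(x, n_draws=200):
    """Monte Carlo estimate of E_y[ KL(posterior || prior) ] at setting x."""
    gains = []
    for _ in range(n_draws):
        a = rng.choice(a_grid, p=prior)            # draw a parameter from prior
        y = a * x + rng.normal(0.0, sigma)         # simulate the experiment
        loglik = -0.5 * ((y - a_grid * x) / sigma) ** 2
        post = prior * np.exp(loglik - loglik.max())
        post /= post.sum()
        gains.append(kl(post, prior))
    return float(np.mean(gains))

candidates = [0.1, 0.5, 1.0, 2.0]
gains = [expected_gain(x) for x in candidates]
x_next = candidates[int(np.argmax(gains))]
print("expected gains:", gains, "-> next setting:", x_next)
```

For this toy linear model, larger settings produce more informative (sharper) posteriors and hence larger expected gains, so the selection rule chooses the largest candidate; in a realistic calibration the gain surface is not monotone and the optimal batch must be searched for.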

The concept of quantifying and controlling forecasting errors and the method of batch sequential design will be demonstrated in two case studies. The first will apply the forecasting metrics to the Preston-Tonks-Wallace material model for tantalum, illustrating a data-rich scenario with 142 stress measurements. The second will apply them to the Visco-Plastic Self-Consistent material model for T91 grade steel, illustrating a data-poor scenario with only five strain-rate measurements.