Presentation Details 

Adhikari, Sondipon 
Elliptic stochastic partial differential equations: An orthonormal vector basis approach 

The stochastic finite element analysis of elliptic partial differential equations is considered. An alternative approach, projecting the solution of the discretized equation onto a finite-dimensional orthonormal vector basis, is investigated. It is shown that the solution can be obtained using a finite series comprising functions of random variables and orthonormal vectors. These functions, called spectral functions, can be expressed in terms of the spectral properties of the deterministic coefficient matrices arising from the discretization of the governing partial differential equation. An explicit relationship between these functions and polynomial chaos functions is derived. Based on the projection onto the orthonormal vector basis, a Galerkin error minimization approach is proposed. The constants appearing in the Galerkin method are obtained from a system of linear equations which has the same dimension as the original discretized equation. A hybrid analytical and simulation-based computational approach is proposed to obtain the moments and probability density function of the solution. The method is illustrated using the stochastic nanomechanics of a zinc oxide (ZnO) nanowire deflected under an atomic force microscope (AFM) tip. The results are compared with direct Monte Carlo simulation results for different correlation lengths and strengths of randomness.
Allaire, Douglas 
A Bayesian-Based Approach to Multi-Fidelity Multidisciplinary Design Optimization

We present a novel method for fidelity management in multidisciplinary design optimization. The method is Bayesian-based and employs maximum entropy characterizations of model uncertainties that can be established via expert opinion or historical data. Our approach incorporates global sensitivity analysis to rigorously apportion variation in performance parameters associated with critical design constraints to individual disciplines. This provides a means of determining, with confidence, when low-, medium-, and high-fidelity models need to be incorporated in the design process. Our method is demonstrated on a wing sizing problem for a high-altitude, long-endurance vehicle. A critical chance constraint is placed on the maximum wing deflection, which is computed empirically in a low-fidelity model and is governed by the Euler-Bernoulli beam equation in a medium-fidelity model.
Allen, Ed 
Derivation of SPDEs for randomly varying problems in physics, biology, or finance 

A straightforward procedure is explained for deriving stochastic partial differential equations (SPDEs) for randomly varying problems in biology, physics, or finance. The SPDEs are derived from basic principles, i.e., from the changes in the system which occur in a small time interval. In the derivation procedure, a discrete stochastic model is first constructed. As the time interval decreases, the discrete stochastic model leads to a system of Ito stochastic differential equations. Next, Brownian sheets replace the Wiener processes in the SDE system. As intervals of the secondary discrete variables decrease, stochastic partial differential equations are derived. The derivation procedure is illustrated for several examples where SPDEs are derived for size- and age-structured populations, stock-price distributions, neutron transport, and reaction-diffusion systems.
AUTHORS: Edward Allen, Elife Dogan, Xiaoyi Ji, Department of Mathematics and Statistics, Texas Tech University 
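The first step of the procedure described above (building a discrete stochastic model from the mean and variance of the change over a small interval) can be sketched with a toy birth-death population model. The rates and parameters below are invented for illustration and are not the examples from the talk: with births at rate b*X and deaths at rate d*X, the change over dt has mean (b - d)X dt and variance (b + d)X dt, which in the limit gives the Ito SDE dX = (b - d)X dt + sqrt((b + d)X) dW. A quick Euler-Maruyama check confirms the sample mean tracks the exact expectation X0*exp((b - d)T):

```python
import numpy as np

# Birth-death model: births at rate b*X, deaths at rate d*X. The small-interval
# mean (b-d)*X*dt and variance (b+d)*X*dt give the limiting Ito SDE
#   dX = (b - d) X dt + sqrt((b + d) X) dW.
# (Illustrative parameter values, not taken from the talk.)
b, d, X0, T, dt, paths = 0.6, 0.1, 100.0, 1.0, 1e-3, 2000
rng = np.random.default_rng(1)
X = np.full(paths, X0)
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), paths)
    # Euler-Maruyama step; clip X at 0 inside the sqrt to guard against
    # tiny negative excursions of the diffusion term.
    X += (b - d) * X * dt + np.sqrt((b + d) * np.maximum(X, 0.0)) * dW
print(X.mean())  # should be close to X0 * exp((b - d) * T) ~ 164.9
```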
Barth, Tim 
Propagation of Statistical Model Parameter Uncertainty in Compressible Flow Simulations 

We consider the deterministic propagation of statistical model parameter uncertainties in numerical approximations of nonlinear systems of conservation laws discretized using finite elements and finite volumes. To deterministically calculate the propagation of model parameter uncertainty, stochastic independent dimensions are introduced and discretized; see for example Ghanem [1] or Xiu and Karniadakis [2]. Of particular interest are nonlinear conservation laws that admit discontinuities in both physical and stochastic dimensions. The presence of discontinuities in stochastic dimensions makes the use of standard high-order polynomial chaos approximation spaces problematic. To approximate solutions containing discontinuities, adaptive piecewise polynomial spaces are utilized and implemented using a parallel model of computation.
Specific application areas of interest include the compressible Navier-Stokes equations with (1) PDE turbulence models and (2) finite-rate chemistry models for a nitrogen-oxygen atmosphere. As a practical matter, these calculations are often faced with many sources of uncertainty including empirical equations of state, initial and boundary data, turbulence models, chemical kinetics models, catalysis models, radiation models, and many others.
Example uncertainty calculations of subsonic, transonic, and chemically reacting hypersonic flows are presented to illustrate the utility of the present numerical methods.
[1] R.G. Ghanem, Ingredients for a General Purpose Stochastic Finite Element Formulation, CMAME, Vol. 168, 1999.
[2] D. Xiu and G. Karniadakis, Modeling Uncertainty in Flow Simulation via Generalized Polynomial Chaos, JCP, Vol. 187, 2002.
Boyaval, Sebastien 
The reduced-basis method for uncertainty quantification

We will recall the principles of the reduced-basis method (Maday, Patera et al.) and show how to use it for the efficient computation of noise/uncertainty propagation in PDEs, in particular for the acceleration of Monte Carlo computations.
Burch, Nathanial 
Sensitivity analysis for solutions of elliptic PDEs on domains with randomly perturbed boundaries 

We study the problem of solving an elliptic partial differential equation (PDE) posed on a domain in which the physical boundary is perturbed randomly. The goal is to accurately approximate the probability density for a given quantity of interest computed from the solution while estimating and controlling the various sources of error, e.g. from finite sampling and finite element discretization. We describe how to transform the given problem into an elliptic problem on a fixed domain with a randomly perturbed elliptic coefficient and then apply a recently developed fast method for carrying out sensitivity analysis on the latter. We use a posteriori analysis to account for the effects of the transformation, various deterministic discretization errors, and finite sampling. 
Burkardt, John 
Sparse Grids for Anisotropic Problems 

The classical sparse grid algorithm is often constructed from a sequence of Clenshaw-Curtis rules of exponentially increasing order. A set of sparse grids is produced, with an index called the "level". The grids are isotropic, treating each dimension the same. Each time the level is incremented, the precision of the grid is increased in every dimension.
We consider modifications of the classical algorithm which allow each dimension to have a distinct geometry, weight function, and quadrature rule. Moreover, the user may assign an importance to each dimension, so that some dimensions are more thoroughly gridded than others.
Much of this presentation will involve the display of accuracy plots showing which monomials a sparse grid can integrate exactly. This suggests an adaptive approach, in which we consider adding "nearby" monomials to the current accuracy plot to improve the results.
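The level-to-order relationship for the 1-D Clenshaw-Curtis rules underlying such sparse grids can be sketched in a few lines. This is a minimal illustration of the standard construction (order 2^level + 1, with nested nodes), not the speaker's software:

```python
import numpy as np

def clenshaw_curtis_nodes(level):
    """Clenshaw-Curtis nodes on [-1, 1] for a given sparse-grid level.

    Level 0 gives the single midpoint; level l > 0 gives 2^l + 1 nodes.
    The rules are nested: every node at level l reappears at level l + 1.
    """
    if level == 0:
        return np.array([0.0])
    n = 2 ** level + 1
    return np.cos(np.pi * np.arange(n) / (n - 1))

# Demonstrate the nesting that makes these rules attractive for sparse grids.
coarse = clenshaw_curtis_nodes(1)
fine = clenshaw_curtis_nodes(2)
nested = all(np.isclose(fine, c).any() for c in coarse)
print(len(coarse), len(fine), nested)   # 3 5 True
```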
Busetto, Alberto Giovanni 
Active Uncertainty Reduction for Dynamical Systems 

We present an active uncertainty reduction approach for nonlinear models of dynamical systems. Our information-theoretic approach is based on the maximization of the expected information gain, quantified in terms of relative entropy between model priors and posteriors. It enables the identification of the feasible and maximally informative subsets of measurable quantities, of time points, and of interventions. We show that approximate solutions can be efficiently obtained with submodular optimization. In conclusion, applications to systems biology are experimentally demonstrated and discussed.
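In the simplest linear-Gaussian case the expected information gain (expected relative entropy from prior to posterior) has a closed form, which gives a minimal sketch of ranking candidate measurements by informativeness. The sensor names and noise levels below are invented for the example:

```python
import numpy as np

# For a linear-Gaussian measurement y = theta + e, e ~ N(0, s2), the expected
# information gain (expected KL divergence from prior to posterior) is
#   EIG = 0.5 * log(1 + var_prior / s2),
# so among candidate measurements the least noisy one is the most informative.
var_prior = 4.0
candidates = {"sensor_a": 1.0, "sensor_b": 0.25, "sensor_c": 9.0}  # noise vars
eig = {name: 0.5 * np.log(1 + var_prior / s2) for name, s2 in candidates.items()}
best = max(eig, key=eig.get)
print(best)  # sensor_b
```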
Cao, Yanzhao 
Sparse grid collocation method for stochastic integral equations 

We develop fast collocation methods for second-kind integral equations with stochastic loading. Sparse-grid multiscale bases are constructed, and an associated truncation strategy is proposed so that the computational complexity is reduced to linear up to a logarithmic factor. The convergence rate is preserved after the truncations.
Challenor, Peter 
Using emulators to account for uncertainty in climate models 

Large complex computer models are necessary if we are to make predictions about the climate in the longer term. Such models are deterministic, but we know that climate projection is rife with uncertainty. This uncertainty can be divided into structural uncertainty (our models are imperfect) and input uncertainty (we do not know the model inputs). The latter can be dealt with by sampling from the uncertainty in the inputs and propagating the error through the model. Because the model is very expensive to run, we cannot use naïve Monte Carlo methods to carry out this error propagation. To avoid the problem of running the model many thousands of times we use emulators, or surrogate models. Basically we build a statistical model (a Gaussian process model) of the deterministic dynamical climate simulator. Using the risk of the collapse of the Atlantic overturning circulation as an illustration, I will outline how we analyse such models. I will also present some ideas on how we might tackle the structural uncertainty problem.
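The emulation idea can be sketched in a few lines: condition a Gaussian process on a handful of (expensive) simulator runs and query the cheap posterior instead of the simulator. This is a toy stand-in assuming a squared-exponential covariance and noise-free runs, not the climate emulator itself:

```python
import numpy as np

def gp_emulator(x_train, y_train, x_test, length_scale=0.3, nugget=1e-10):
    """Minimal Gaussian-process emulator with a squared-exponential kernel.

    Conditions a zero-mean GP on the training runs and returns the
    posterior mean and variance at the test inputs.
    """
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

    K = k(x_train, x_train) + nugget * np.eye(len(x_train))  # jitter for stability
    Ks = k(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# Emulate an "expensive simulator" (here just a sine) from five runs; at a
# training input the emulator interpolates and its variance collapses to ~0.
x = np.linspace(0.0, 1.0, 5)
y = np.sin(2 * np.pi * x)
mean, var = gp_emulator(x, y, np.array([0.5]))
print(mean[0], var[0])
```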
Charrier, Julia 
A weak error estimate for the solution of an elliptic partial differential equation with random coefficients 

In order to compute the law of the solution of elliptic partial differential equations with random coefficients, a Galerkin finite elements method or a stochastic collocation method can be used. These two methods are based on a finite dimensional approximation of the stochastic coefficients, which can be achieved via a Karhunen-Loève expansion. This work proposes an estimate for the weak error on the solution, resulting from the approximation of the coefficients, in the case of a homogeneous lognormal coefficient, which is often used in hydrogeology. The weak error estimate improves the strong error estimate and is illustrated numerically. We emphasize the influence of the correlation length.
Christie, Mike 
Uncertainty Quantification in Reservoir Modelling 

Uncertainty quantification is important in the oil industry because multimillion-dollar decisions are taken in developing and operating oilfields without complete knowledge of the reservoir. The lack of certainty in the production from a new well, for example, may be enough to take the decision from economically favourable to economically unfavourable.
Uncertainty in reservoir simulation is generally quantified by calibrating large reservoir simulation codes to observed data. The unknown properties that need to be inferred are porosities, permeabilities, transmissibilities across faults, etc. The number of unknown parameters can be large, and the computer codes themselves can be expensive to run, so there is a drive for efficient ways of generating well-calibrated models.
The talk will describe recent algorithmic developments in calibrating reservoir models to data, and illustrate key features of the algorithms by application to benchmark datasets. 
D'Elia, Marta 
A data assimilation technique for including noisy velocity measurements into Navier-Stokes simulations

The integration of data and numerical simulations has always been a relevant issue in fluid and geophysical studies. The improvement of measurement and imaging devices makes available a huge amount of data for the cardiovascular system; these data can be used in numerical simulations not only for validation, but also for improving the reliability of the results. Cardiovascular mathematics is an emerging field of scientific computing, still presenting many challenges; the combination of measurements and governing principles (Data Assimilation, DA) is one of them.
In this work we propose a DA technique for including noisy measurements of the blood velocity into the simulation of the Navier-Stokes (NS) equations. This technique is formulated as an inverse problem solved with a Discretize-then-Optimize technique, where space discretization is performed with the Finite Element (FE) method. Starting from a method of misfit minimization between data and recovered velocity, designed for the Oseen problem, we show how to solve the NS system.
Numerical results for test cases on 2D domains are presented using noise-free and noisy data; in the case of noisy data, we investigate the dependence of the discretization error on the amount of noise, the number and placement of noisy measurements, and the FE discretization step.
Dashti, Masoumeh 
Bayesian Approach to an Elliptic Inverse Problem 

We consider the inverse problem of determining the permeability from the pressure in a Darcy model of flow in a porous medium. Mathematically the problem is to find the diffusion coefficient for a linear uniformly elliptic partial differential equation in divergence form, in a bounded domain in two or three dimensions, from pointwise measurements of the solution in the interior.
We adopt a Bayesian approach to the problem. We place a prior Gaussian random field measure on the log permeability, specified through its two-point correlation function. We study the regularity of functions drawn from this prior measure by use of the Karhunen-Loève expansion. We also study the Lipschitz properties of the observation operator mapping the log permeability to the observations. Assuming that the observations are subject to mean-zero noise, and combining the aforementioned regularity and continuity estimates, we show that the posterior measure is well-defined. Furthermore, the posterior measure is shown to be Lipschitz continuous with respect to the data in the Hellinger and total variation metrics, giving rise to a form of well-posedness of the inverse problem. This is joint work with Andrew Stuart.
El Moselhy, Tarek 
A Dominant Singular Vectors Approach for Stochastic Partial Differential Equations 

Many stochastic partial differential equations (SPDEs) which appear in practical engineering applications are discretized using a high-dimensional spatial basis and a high-dimensional stochastic basis. Unfortunately, the complexity of solving such SPDEs using standard stochastic algorithms is prohibitive. To overcome such complexities, a variety of new stochastic algorithms based on model order reduction ideas have been proposed. Such algorithms typically rely on finding the most relevant low-dimensional subspace in which the solution can be represented without a significant loss of accuracy.
We present a new intrusive simulation approach which relies on sequentially finding dominant bases for both the spatial and stochastic spaces in order to represent the solution. The main computational advantage of the proposed approach stems from the fact that at every step of the algorithm only two dominant bases, one in the spatial space and another in the stochastic space, are simultaneously computed. The two dominant bases are computed such that the equation residual is minimized. After each step both the solution and the equation residuals are updated. The algorithm is terminated when the norm of the equation residual is sufficiently small. Consequently, meaningful error measures are provided which determine the quality of the solution. Furthermore, we provide a detailed convergence analysis of our approach.
Our approach has been applied to a variety of stochastic partial differential equations. In particular, our approach is used to solve Maxwell's equations in large domains described by random boundaries. Such SPDEs are particularly important to quantify the effect of integrated circuit manufacturing uncertainties on the electrical characteristics of interconnect structures. Additionally, our algorithm is applied to a diffusion problem in which the permeability is described by a lognormal random field. In both cases, our algorithm provides orders of magnitude reduction in computational time and memory compared to many state-of-the-art stochastic Galerkin and collocation methods.
Elman, Howard 
Numerical Solution Algorithms for Discrete Partial Differential Equations with Random Data 

We discuss several methods for discretizing the problems that arise from finite-dimensional approximations to partial differential equations with random data, including so-called Galerkin methods and collocation methods. The resulting algebraic systems of equations that arise in these settings are typically much larger than those that come from deterministic models. We describe the structure and properties of these algebraic equations, present several variants of multigrid and multilevel strategies that can be used to solve the resulting systems, and compare costs and other computational issues that arise from different choices of problem statements and discretization.
Ernst, Oliver 
Efficient Solution of Large-Scale Covariance Eigenproblems

Ingolf Busch and Oliver Ernst,
TU Bergakademie Freiberg, Institut für Numerische Mathematik und Optimierung
The Karhunen-Loève expansion is a popular all-purpose technique for the representation of random fields given the mean field and a covariance function. Computationally, this involves the solution of an eigenvalue problem for an integral operator acting on functions defined on a 2D or 3D domain, so that, after discretization, one is faced with the solution of a large, dense matrix eigenproblem. We present an approach for efficiently solving such problems using a restarted Lanczos method combined with hierarchical matrix techniques as well as enhanced quadrature schemes for handling the singularity along the diagonal.
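The structure of the discretized problem can be sketched in one dimension with a dense eigensolver standing in for the restarted Lanczos / hierarchical-matrix machinery the talk describes (kernel, correlation length, and grid size below are illustrative choices):

```python
import numpy as np

# Nystrom discretization of the Karhunen-Loeve eigenproblem for the exponential
# covariance c(x, y) = exp(-|x - y| / ell) on [0, 1]: after collocation on a
# midpoint grid with equal weights, it becomes a dense n-by-n eigenproblem.
n, ell = 200, 0.5
x = (np.arange(n) + 0.5) / n              # midpoint quadrature nodes
w = 1.0 / n                               # equal quadrature weights
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)
vals = np.linalg.eigh(w * C)[0][::-1]     # KL eigenvalues, largest first
# Eigenvalue decay determines how many modes a truncated expansion needs.
m = int(np.searchsorted(np.cumsum(vals), 0.95)) + 1
print(m)  # number of KL modes capturing 95% of the variance
```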
Goldstein, Michael 
Bayesian uncertainty analysis for complex physical models 

Accounting for, and analysing, all the sources of uncertainty (input uncertainty, functional uncertainty, initial condition, boundary condition and forcing function uncertainty, structural uncertainty, observational uncertainty) that arise when using complex models to describe a large scale physical system (such as climate) is a very challenging task. I will give an overview of some Bayesian approaches for assessing such uncertainties, for purposes such as model calibration and forecasting. 
Gordon, Andrew 
Solving stochastic collocation systems with algebraic multigrid

Stochastic collocation methods facilitate the numerical solution of partial differential equations (PDEs) with random data and give rise to long sequences of similar discrete linear systems. When elliptic PDEs with random diffusion coefficients are discretized with standard finite element methods in the physical domain, the resulting collocation systems can be solved iteratively with the conjugate gradient method, and algebraic multigrid (AMG) is a highly robust preconditioner. When mixed finite element methods are applied, AMG is also a key tool for solving the resulting sequence of saddle point systems via the preconditioned minimal residual method. In both cases, the stochastic collocation systems are trivial to solve when considered individually. The challenge lies in exploiting the systems' similarities to recycle information and minimize the cost of solving the entire sequence.
In this talk, we consider full tensor and sparse grid stochastic collocation schemes applied to a model stochastic elliptic problem and discretize in physical space using standard piecewise linear finite elements and lowest-order Raviart-Thomas mixed finite elements. We propose efficient solvers for the resulting sequences of linear systems and show, in particular, that it is feasible to use finely tuned AMG preconditioning for each system if key setup information is reused. Crucially, the preconditioners are robust with respect to variations in the discretization and statistical parameters for both stochastically linear and nonlinear diffusion coefficients.
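One elementary way to recycle information across such a sequence of nearby systems is to warm-start each solve from the previous solution. The sketch below illustrates only this general idea with a plain (unpreconditioned) conjugate gradient iteration on invented matrices; the talk's solvers reuse AMG setup data and are far more sophisticated:

```python
import numpy as np

def cg(A, b, x0, tol=1e-8, maxit=500):
    """Plain conjugate gradients; returns the solution and iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        if np.sqrt(rs) < tol * np.linalg.norm(b):
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

# A sequence of nearby SPD systems, as in collocation: warm-starting the
# second solve from the first solution cuts the iteration count.
n = 100
rng = np.random.default_rng(2)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)               # well-conditioned SPD test matrix
b = rng.standard_normal(n)
x_cold, it_cold = cg(A + 0.01 * np.eye(n), b, np.zeros(n))
x_warm, it_warm = cg(A + 0.02 * np.eye(n), b, x_cold)
print(it_cold, it_warm)
```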
Graham, Ivan 
Quasi-Monte Carlo Methods for flow in porous media with random data

In this talk we formulate and implement quasi-Monte Carlo (QMC) methods for computing the expectations of functionals of solutions of elliptic PDEs with coefficients defined as Gaussian random fields. As we show, these methods outperform conventional Monte Carlo methods for such problems. Our main target application is the computation of several quantities of physical interest arising in the modeling of fluid flow in random porous media, such as the effective permeability or the exit time of a plume of pollutants. Such quantities are of great interest in uncertainty quantification in areas such as underground waste disposal, and here QMC is combined with a mixed finite element discretization in space. Our particular emphasis is on relatively high variance and low correlation length, leading to high stochastic dimension, where Karhunen-Loève expansions converge slowly. In this case Monte Carlo is currently the method of choice but, as we demonstrate, QMC methods are more effective and efficient for a range of parameters and quantities of interest. The talk will discuss both theoretical and computational aspects of this problem and include some applications involving up to 10^6 stochastic dimensions. This is joint work with Frances Kuo, Dirk Nuyens, Rob Scheichl, and Ian Sloan.
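The basic QMC idea — replace random samples by a deterministic low-discrepancy point set — can be sketched with a hand-coded Halton sequence on a toy integrand. This is a simple illustrative construction; the methods in the talk use carefully designed lattice rules in far higher dimension:

```python
import numpy as np

def halton(n, dims, primes=(2, 3, 5, 7, 11, 13)):
    """First n points of a Halton low-discrepancy sequence in [0, 1)^dims."""
    pts = np.empty((n, dims))
    for d in range(dims):
        base = primes[d]
        for i in range(n):
            # Radical-inverse of (i + 1) in the given prime base.
            f, r, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= base
                r += f * (k % base)
                k //= base
            pts[i, d] = r
    return pts

# QMC estimate of E[f(U)] for f(u) = prod(u) on [0, 1]^dims; exact value 2^-dims.
dims, n = 4, 4096
P = halton(n, dims)
est = np.prod(P, axis=1).mean()
print(est)  # should be close to 0.0625
```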
Gunzburger, Max 
Numerical methods for partial differential equations having random inputs 

We provide a review of the basic members of different classes of methods used to approximate the solution of partial differential equations having random inputs. The classes include polynomial chaos, stochastic collocation, and stochastic sampling methods. We pay particular attention to connections and differences between the different classes. White noise, colored noise, and random parameter inputs are considered.
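The polynomial chaos idea can be sketched in one dimension (an illustrative example, not from the talk): expand f(ξ) = e^ξ with ξ ~ N(0, 1) in probabilists' Hermite polynomials, whose exact coefficients are c_k = e^{1/2}/k!, computing the projection integrals by Gauss-Hermite quadrature:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Polynomial chaos coefficients c_k = E[f(xi) He_k(xi)] / k! for f(xi) = exp(xi),
# xi ~ N(0, 1), via Gauss-Hermite quadrature. The exact values are e^{1/2} / k!.
nodes, weights = He.hermegauss(20)
weights = weights / weights.sum()         # normalize to the N(0, 1) measure
coeffs = []
for k in range(6):
    ek = np.zeros(k + 1)
    ek[k] = 1.0                           # coefficient vector selecting He_k
    hk = He.hermeval(nodes, ek)
    coeffs.append(np.sum(weights * np.exp(nodes) * hk) / math.factorial(k))
print(coeffs[0])  # the PCE mean, close to e^0.5 ~ 1.6487
```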
Hall, Jim 
Calibration of flood models for risk analysis 

Examination of the potential impact of different fluvial flood defence strategies requires a physically based model to describe the distributed flow pattern through the channel and associated floodplain, since such a model is capable of adjustment to represent the different strategies. The objective is to be able to formulate the uncertainty estimates in such a way that they can be used within a risk analysis, where the risk is formed from an integral, over all forcing scenarios and the entire range of modelling errors, of the product of the probability of flooding and the damage caused.
In order to estimate the risk associated with each strategy, it is essential to be able to estimate downstream flood depth in a wide variety of flow conditions, along with associated uncertainties. The uncertainties in flood model predictions arise as a result of inadequacies in the model representation of the physical processes, as well as uncertainties in the choice of model parameters. Model inadequacy is defined for the unmodified river channel, and we assume that it remains unchanged with channel modifications.
The emulator and the model inadequacy are described in terms of nonlinear transfer functions, based on the input (upstream) and output (downstream) time series, which are complex but highly autocorrelated. Simulation from synthetic input time series, representing upstream flows, enables computation of the risk associated with different strategies.
Helton, Jon 
Uncertainty and sensitivity analysis in performance assessment for the proposed highlevel radioactive waste repository at Yucca Mountain, Nevada 

The design and implementation of uncertainty and sensitivity analyses in the 2008 performance assessment (PA) for the proposed repository for high-level radioactive waste at Yucca Mountain (YM), Nevada, are described and illustrated. The analysis design for the 2008 YM PA is based on maintaining a clear separation between aleatory uncertainty and epistemic uncertainty and requires the use of a large number of complex models for a variety of physical processes. At its core, the analysis involves the use of Latin hypercube sampling in the propagation of epistemic uncertainty and both quadrature and sampling-based procedures in the propagation of aleatory uncertainty conditional on specific realizations of epistemic uncertainty. The use of Latin hypercube sampling provides the basis for a mapping between epistemically uncertain analysis inputs and corresponding analysis results that can be explored with a variety of sensitivity analysis procedures, including examination of scatterplots, stepwise rank regression, and partial rank correlation coefficients. The overall analysis is quite large and involves 392 epistemically uncertain inputs and over 100 time-dependent analysis results. Selected uncertainty and sensitivity analysis results will be used to illustrate the type of results obtained in the 2008 YM PA.
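The Latin hypercube construction that underpins this input-output mapping is simple to sketch: in each dimension the n samples occupy the n equal-probability strata exactly once. A minimal numpy illustration of the construction itself (not the YM PA software):

```python
import numpy as np

def latin_hypercube(n, dims, rng):
    """n Latin hypercube samples in [0, 1)^dims: each of the n equal-width
    strata in every dimension contains exactly one sample."""
    samples = np.empty((n, dims))
    for d in range(dims):
        perm = rng.permutation(n)                 # which stratum each sample uses
        samples[:, d] = (perm + rng.random(n)) / n  # uniform within its stratum
    return samples

rng = np.random.default_rng(0)
x = latin_hypercube(10, 3, rng)
# Stratification check: each dimension has exactly one point per decile.
strata = np.sort(np.floor(x * 10).astype(int), axis=0)
print(np.array_equal(strata, np.tile(np.arange(10)[:, None], (1, 3))))  # True
```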
Higham, Des 
Statistical inference in a Zombie outbreak model: is it safe to go out yet? 

The challenge of parameter estimation for ODE models has attracted the attention of the statistical inference community, and some powerful tools have been developed that go far beyond the traditional "least-squares" style point estimates. Taking as an example a recently published nonlinear ODE model describing the science fiction scenario of a zombie outbreak, I will discuss the pros and cons of a Bayesian approach. This is joint work with Ben Calderhead and Mark Girolami (University of Glasgow).
Hove, Joakim 
Uncertainty in the petroleum industry 

The petroleum industry makes extensive use of models of the underground for all aspects of reservoir management. Acquiring reliable information from the underground is both difficult and costly, and the models used are inherently unreliable. The initial reservoir models are built from seismic surveys and a few exploration wells. The seismic data are spatially extensive but of low precision, whereas the wells provide localized measurements of reasonably high accuracy. In addition, a geological concept serves as an important framework for interpretation.
Mathematically, the models are based on 3D fields such as flow permeability, porosity, and initial fluid saturations, in addition to several low-dimensional parameters like oil composition. The structural model, i.e. the shape of the grid, is also a very important property. All this input is uncertain, and a statistical description in terms of a prior distribution is essential. When the field has been producing, we get new information, and this can be used to update the model. The process of updating the parameters in the model to reproduce the historical flow rates is traditionally called "History Matching".
The history matching problem is severely underdetermined, so identifying the one right model in an optimization context is not feasible. Instead it should be seen as a conditioning problem in a Bayesian setting, where new information is used to determine a posterior distribution. Uncertainty studies are then performed by sampling from the posterior distribution. I will present two such methods for model updating developed in the research centre in Statoil. 
Kunoth, Angela 
Multiscale methods for the valuation of American options with stochastic volatility 

For the valuation of American options with stochastic volatility, we first derive an appropriate variational inequality formulation which enables finite element discretizations, in contrast to previous approaches. We propose specifically monotone multigrid methods for the fast numerical solution of the resulting linear inequality systems. Numerical results are provided for a benchmark problem of an American put option.
This is joint work with Christian Schneider. 
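The complementarity structure behind such linear inequality systems can be illustrated on a toy 1-D obstacle problem solved by projected Gauss-Seidel (relax, then clip to the obstacle) — the basic monotone relaxation idea, using a plain Laplacian rather than the Black-Scholes operator or the multigrid solver of the talk:

```python
import numpy as np

# Obstacle problem on (0, 1) with u(0) = u(1) = 0:
#   -u'' >= 0,  u >= g,  (-u'') * (u - g) = 0,
# the linear complementarity structure an American-option discretization yields.
n = 50
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
g = np.maximum(0.3 - np.abs(x - 0.5), 0.0)     # tent-shaped obstacle ("payoff")
u = g.copy()
for sweep in range(2000):
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        u[i] = max(0.5 * (left + right), g[i])  # Gauss-Seidel step, projected
print(u.max())  # equals the obstacle peak: the solution makes contact there
```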
Labovsky, Alexander 
Effects of approximate deconvolution models on the solution of the stochastic Navier-Stokes equations

We consider the family of Approximate Deconvolution Models (ADM) for the simulation of the turbulent stochastic Navier-Stokes equations (NSE). We investigate the effect stochastic forcing (through the boundary conditions) has on the accuracy of solutions of the ADM equations compared to direct numerical simulations. Although the existence, uniqueness, and verifiability of the ADM solutions have already been proven in the deterministic setting, the analyticity of a solution of the stochastic NSE is difficult to prove. Hence, we approach the problem from the computational point of view. A Smolyak-type sparse grid stochastic collocation method is employed for the approximation of the first two statistical moments of the solution: the expected value and the variance. We show that for different test problems, the modeling error in the stochastic case is the same as predicted for the deterministic setting. Although the ADMs are arguably only applicable for certain boundary conditions (zero or periodic), we test the model on a problem with a boundary layer and recirculation region and demonstrate that the model correctly predicts the solution of the stochastic NSE with noise in the boundary data.
Le Maitre, Olivier 
Multiresolution for stochastic hyperbolic systems 

We present a novel method for the multiresolution analysis of hyperbolic systems (conservation laws) involving uncertain parameters. The multiresolution scheme relies on the adaptive construction of piecewise polynomial stochastic approximation spaces, whose structure is indexed by the space variable and time. This approach allows the computational effort to be concentrated in areas where the stochastic solution is discontinuous (i.e. along localized shocks in the space-time domain), while coarser representations are used elsewhere. Examples of applications to the Burgers (scalar) and Euler (system) equations will be shown.
Lee, Hyung-Chun 
Approximation of an optimal control problem for stochastic PDEs

In this talk, we consider optimal control problems for stochastic elliptic partial differential equations. The control objective is to minimize the expectation of a cost functional, and the control is of the deterministic, boundary value type. The main analytical tool is the Karhunen-Loève (KL) expansion. Mathematically, we prove the existence of an optimal solution; we establish the validity of the Lagrange multiplier rule and obtain a stochastic optimality system of equations; we represent the input data in their KL expansions and deduce the deterministic optimality system of equations. Computationally, we approximate the optimality system through the discretizations of the probability space and the spatial space by the finite element method; we also derive error estimates in terms of both types of discretizations.
Lieberman, Chad 
Optimal design under uncertainty 

Engineering design is uncertain when it depends on an unknown and unobservable parameter of the system (e.g., hydraulic conductivity in the subsurface). Recent approaches attempt to quantify this uncertain parameter of the governing partial differential equations by solving a statistical inference problem that combines experimental data with prior knowledge. Since experiments are expensive and the statistical inference problem is often computationally intractable, we propose an integrated formulation that treats the experiment-inference-design process from the perspective of the final design problem. Specifically, the objective function and constraints of the optimal design problem guide the inference process so that we only perform as many experiments and as much inference as is necessary to determine the optimal design. This approach stems from the realization that while the parameters we infer may be very high-dimensional, the design choices are not. Therefore, we expect that convergence in design will occur long before convergence in parameter for many important engineering design problems. Our algorithm brings together experimental design, variational inference, and stochastic programming to exploit this notion. The algorithm is demonstrated on a model thermal problem. 
Marzouk, Youssef 
Tractable Bayesian inference and experimental design in complex physical systems 

Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data. Computationally intensive forward models, however, can render a Bayesian approach prohibitive.
We will show that spectral methods used to solve stochastic PDEs are an extremely useful tool in the inverse context as well. We introduce a stochastic spectral formulation that accelerates Bayesian inference via rapid exploration of a surrogate posterior distribution. Theoretical convergence results are verified with several numerical examples, in particular parameter estimation in transport equations and in chemical kinetics. We also extend this approach to the inference of spatially distributed quantities in a hierarchical Bayesian setting.
Finally, we discuss the utility of polynomial expansions in optimal experimental design: choosing experimental conditions to maximize information gain in parameters or outputs of interest. A Bayesian formulation of the design problem fully accounts for uncertainty in relevant parameters and observables. 
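The surrogate-posterior idea can be illustrated on a toy scalar problem. Everything below (the stand-in forward model, the polynomial-fit surrogate, the prior and noise level) is my own hypothetical setup, not the talk's formulation, which uses stochastic spectral expansions rather than a least-squares polynomial fit:

```python
import numpy as np

def forward(theta):
    """Stand-in for an expensive forward model (e.g. a PDE solve)."""
    return np.sin(theta) + 0.1 * theta**2

def metropolis(logpost, n, x0=0.0, step=0.5, seed=0):
    """Plain random-walk Metropolis sampler."""
    rng = np.random.default_rng(seed)
    x, lp, chain = x0, logpost(x0), []
    for _ in range(n):
        y = x + step * rng.standard_normal()
        lpy = logpost(y)
        if np.log(rng.random()) < lpy - lp:
            x, lp = y, lpy
        chain.append(x)
    return np.array(chain)

# Cheap polynomial surrogate of the forward model over the prior range;
# every posterior evaluation in the chain then avoids the "expensive" model.
ts = np.linspace(-3.0, 3.0, 30)
coef = np.polyfit(ts, forward(ts), deg=8)
surrogate = lambda t: np.polyval(coef, t)

data, sigma = forward(1.2), 0.1        # synthetic noise-free observation
logpost_sur = lambda t: -0.5 * ((data - surrogate(t)) / sigma) ** 2 - 0.5 * t**2
chain = metropolis(logpost_sur, 5000)
```

The chain explores the surrogate posterior; the accuracy of the resulting inference is controlled by the surrogate's approximation error over the prior support.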
Matthies, Hermann 
Low-rank representation numerical methods for uncertainty quantification 

Nowadays the trend in numerical mathematics is often to try to resolve inexact mathematical models by very exact deterministic numerical methods. The reason for this inexactness is that almost every mathematical model of a real-world situation contains uncertainties in the coefficients, right-hand side, boundary conditions, initial data, as well as in the geometry. Examples of uncertain input data include conductivity coefficients in groundwater flow problems. These mathematical models are often described by stochastic partial differential equations (SPDEs), where uncertainties are represented as random fields.
An efficient numerical solution of such problems requires an appropriate discretisation of the deterministic operator as well as the stochastic fields. The total number of degrees of freedom (dof) of the discrete model is the product of the dofs of the deterministic and stochastic discretisations and can be very high, even after application of the truncated Karhunen-Loève expansion (KLE) and polynomial chaos expansion. Therefore data-sparse techniques for the representation of input and output data (solution) are necessary for efficient representation and computation.
In this work we demonstrate the compression of the input and output random fields via algorithms based on the singular value decomposition and low-rank tensors. In particular, compression of the output data is important for further postprocessing (e.g. for solving corresponding inverse problems via Bayesian inference).
We represent all stochastic realisations of the solution in a low-rank format. For every new solution vector an update of the low-rank approximation is computed on the fly. The storage requirements and computational complexity can be drastically reduced.
We demonstrate examples from aerodynamics (velocity and pressure fields near an airfoil) and flow through porous bodies.
Hermann G. Matthies, A. Litvinenko, M. Espig, E. Zander, B. Rosic, D. Liu 
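An on-the-fly low-rank update of the kind described can be sketched as follows. This is a simplified illustration of my own that tracks only the dominant column space (the right factors needed to reconstruct individual realisations are omitted); the authors' actual algorithms are more elaborate:

```python
import numpy as np

def lowrank_update(U, S, v, rank):
    """Append a new solution vector v to a rank-`rank` factorization with
    orthonormal basis U and weights S, then re-truncate.  A crude update:
    orthogonalize the scaled basis plus v, and recompute a small SVD."""
    if U is None:                            # first vector initializes the basis
        s = np.linalg.norm(v)
        return v[:, None] / s, np.array([s])
    B = np.hstack([U * S, v[:, None]])       # scaled current basis + new vector
    Q, R = np.linalg.qr(B)
    Ur, Sr, _ = np.linalg.svd(R)
    k = min(rank, len(Sr))
    return Q @ Ur[:, :k], Sr[:k]

rng = np.random.default_rng(1)
U, S = None, None
for _ in range(50):                          # stream of "solution vectors"
    v = rng.standard_normal(200) + 5.0       # strongly correlated samples
    U, S = lowrank_update(U, S, v, rank=5)
```

Only a 200-by-5 basis and 5 weights are stored instead of fifty full vectors, which is the kind of storage reduction the abstract refers to.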
Najm, Habib 
Uncertainty quantification in reacting flow 

Uncertainty quantification (UQ) is useful for informing engineering design, risk analysis, and decision support processes; and for enabling model validation with respect to observations. In the limit of small uncertainty, linear/perturbative UQ methods are useful. However, in the more general context of large uncertainty, particularly in systems exhibiting strong nonlinearity and/or bifurcations, other methods are necessary. This talk focuses on the use of probabilistic UQ methods in this context. I will survey the state of the art in the use of these methods for quantification of uncertainty in computational modeling. I will discuss the use of Bayesian methods for estimation of uncertain models from data. I will focus on spectral Polynomial Chaos (PC) methods for the forward propagation of uncertainty. I will discuss the utilization of these methods using both collocation and Galerkin methodologies, and will illustrate their application in computations of reacting flow. I will also discuss PC UQ constructions employing local bases, developed specifically to enable handling systems with bifurcations and/or strong nonlinearity; and will discuss aspects of model reduction under uncertainty. 
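A minimal non-intrusive polynomial chaos projection, of the general kind surveyed in the talk, might look like this in one stochastic dimension. The function g is a hypothetical stand-in for a model output; this is an illustrative sketch, not the talk's implementation:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(g, order, n_quad=40):
    """Non-intrusive polynomial chaos projection of g(xi), xi ~ N(0,1),
    onto probabilists' Hermite polynomials He_k:
    c_k = E[g(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!."""
    nodes, weights = hermegauss(n_quad)        # weight function exp(-x^2/2)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize to the N(0,1) density
    c = np.zeros(order + 1)
    for k in range(order + 1):
        ek = np.zeros(k + 1)
        ek[k] = 1.0                            # coefficient vector selecting He_k
        c[k] = np.sum(weights * g(nodes) * hermeval(nodes, ek)) / math.factorial(k)
    return c

# xi^2 = He_2(xi) + He_0(xi), so the expansion should be [1, 0, 1, 0, 0]
c = pce_coefficients(lambda x: x ** 2, order=4)
```

The mean is c[0] and the variance is the weighted sum of the squared higher coefficients, which is how moments are read off a PC expansion without further sampling.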
Nobile, Fabio 
Stochastic Galerkin and collocation methods for PDEs with random coefficients: a numerical comparison 

Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods. By introducing a suitable generalization of the classical sparse grid SC method, we are able to compare SG and SC on the same underlying multivariate polynomial space in terms of accuracy versus computational work. The approximation spaces considered include isotropic and anisotropic versions of Tensor Product (TP), Total Degree (TD), Hyperbolic Cross (HC) and Smolyak (SM) polynomials. Numerical results for linear elliptic SPDEs indicate a slight computational work advantage of isotropic SC over SG, with SC-SM and SG-TD being the best choices of approximation spaces for each method. Moreover, when using anisotropic approximation spaces, we propose a choice of the anisotropy ratios based on theoretical estimates of the smoothness of the solution. Numerical results confirm that this choice is nearly optimal.
References:
[1] J. Back, F. Nobile, L. Tamellini and R. Tempone, "Stochastic Galerkin and collocation methods for PDEs with random coefficients: a numerical comparison," Proceedings of ICOSAHOM '09, Lecture Notes in Computational Science and Engineering, to appear.
[2] F. Nobile and R. Tempone, "Analysis and implementation issues for the numerical approximation of parabolic equations with random coefficients," Int. J. Num. Methods Engrg., 2009.
[3] F. Nobile, R. Tempone and C. Webster, "An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data," SIAM J. Numer. Anal., 2008.
[4] I. Babuska, F. Nobile and R. Tempone, "A stochastic collocation method for elliptic partial differential equations with random input data," SIAM J. Numer. Anal., 2007. 
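The polynomial spaces being compared (TP, TD, HC) can be sized directly by enumerating multi-indices. The script below uses the conventional definitions of these index sets (my paraphrase, not code from the abstract):

```python
from itertools import product

def index_set_size(kind, dim, w):
    """Count multi-indices p in N^dim admitted at level w by each space:
    TP: max_i p_i <= w;  TD: sum_i p_i <= w;  HC: prod_i (p_i + 1) <= w + 1."""
    count = 0
    for p in product(range(w + 1), repeat=dim):
        if kind == "TP":
            ok = max(p) <= w
        elif kind == "TD":
            ok = sum(p) <= w
        elif kind == "HC":
            prod_ = 1
            for pi in p:
                prod_ *= pi + 1
            ok = prod_ <= w + 1
        count += ok
    return count

sizes = {k: index_set_size(k, dim=3, w=4) for k in ("TP", "TD", "HC")}
# TP grows like (w+1)^dim; TD and HC are much smaller as the dimension grows
```

Already at dimension 3 and level 4 the counts are 125 (TP), 35 (TD) and 16 (HC), which is the kind of gap the accuracy-versus-work comparison trades on.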
Owhadi, Houman 
Optimal uncertainty quantification 

While everyone agrees that Uncertainty Quantification is fundamental to objective science, it appears that there are no universally accepted UQ objectives and no accepted framework for the communication of UQ results. Moreover, on close inspection, it appears that in general there is a disconnect between the assumptions required by specific UQ techniques and the assumption/information set of the relevant applications. This disconnect is the cause of much confusion and disagreement in the community. Indeed, it appears that UQ is currently where probability theory was before its rigorous formalization by Kolmogorov.
We propose a rigorous framework in which the UQ objectives and the assumption/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information, there exist optimal bounds on uncertainties, obtained as solutions of well-defined optimization problems that maximize probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. The notion of optimality in the presence of sample data is not self-evident, and we show that our formulation is a generalization of the well-established notion of a Uniformly Most Powerful test in statistical hypothesis testing.
Although OUQ optimization problems are extremely large, we show that they are characterized by significant and practical finite-dimensional reduction properties. More precisely, although the optimization variables live in a space of functions tensorized with probability measures, under general conditions the extreme points are products of finite convex combinations of delta masses. Furthermore, under very general conditions we can further reduce the space of functions of the optimization problem to functions on a product of finite discrete spaces. Optimal concentration inequalities are obtained as illustrations of these reduction properties.
We also propose an OUQ optimization algorithm for arbitrary constraints, leveraging (possibly hidden) reduction properties. Numerical experiments illustrate the convergence of this algorithm, the low-order complexity and the singularity of extremal points.
This is a joint work with Clint Scovel, Tim Sullivan, Mike McKerns and Michael Ortiz. 
Phipps, Eric 
Intrusive Stochastic Galerkin methods for uncertainty quantification of nonlinear Stochastic PDEs 

A critical component of predictive simulation is the ability to characterize uncertainties in simulation input data and quantify the effects of those uncertainties on simulation results. Frequently, systems of interest are modeled by stochastic partial differential equations (PDEs) where random variables or fields with given probability distributions model data uncertainties. In this setting, stochastic Galerkin methods are a well-known family of methods for approximating solutions to these systems. However, implementing these methods in large-scale engineering codes is hampered by the fact that they require solving a coupled spatial-stochastic nonlinear system that is different from the deterministic nonlinear system the codes were originally designed for. Nevertheless, good performance has been obtained with these methods for linear stochastic PDEs due to the many fewer degrees of freedom required for a given level of accuracy compared to sampling-based methods such as stochastic collocation. We investigate the application of these methods to representative nonlinear stochastic PDEs using Newton-Krylov solvers. For problems with large stochastic dimension, we find these methods to be much more expensive, with the increased cost resulting from the matrix-vector multiplies required by the iterative solver. We then present approaches for reducing the cost of the matrix-vector products based on random field modeling techniques. 
Sahai, Tuhin 
Uncertainty quantification of hybrid dynamical systems 

Hybrid dynamical systems (or systems with discontinuous vector fields) arise frequently in models of electrical and embedded systems, manufacturing machines, control systems and chemical processes, to name a few. Traditional algorithms for modeling and analysis of continuous systems are typically inapplicable due to nonsmooth dynamics. Uncertainty quantification of hybrid systems is particularly challenging. In this talk, we present and analyze various methods for computing uncertainty bounds in hybrid systems. We also demonstrate these methods on illustrative examples. 
Scheichl, Rob 
Novel Monte Carlo type methods for elliptic PDEs with random coefficients 

Uncertainty and its quantification play a major role in decision processes in the modern world. Two particular areas of huge current interest are the safety assessments of radioactive waste disposal and of CO2 capture and storage underground. In this talk we will present a common way to model the data uncertainty in these applications through stochastic modelling of the rock permeabilities, leading to a stochastic PDE for the groundwater flow. A typical computational goal is then the estimation of the expected value or higher order moments of some relevant quantities of interest, such as the effective permeability or the breakthrough time of a plume of radionuclides. Solving these kinds of problems has become an area of intense current activity in the numerical analysis and scientific computing communities. Because of the typically large variances and short correlation lengths in our application, methods based on truncated Karhunen-Loève expansions are largely inapplicable and Monte Carlo type methods are still the method of choice. To overcome the notoriously slow convergence of conventional Monte Carlo, we formulate and implement novel methods based on (i) deterministic rules to cover probability space (quasi-Monte Carlo) and (ii) hierarchies of spatial grids (multilevel Monte Carlo). It has been proven theoretically that, for certain classes of problems, both of these approaches have the potential to significantly outperform conventional Monte Carlo. It is not known whether the porous media flow applications discussed here belong to either of those problem classes, but our numerical results show that both methods do indeed clearly outperform conventional Monte Carlo for these more complicated problems as well. 
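As an illustration of approach (i), a rank-1 lattice rule can be sketched in a few lines. The generating vector below is chosen for illustration only; in practice it is constructed component-by-component for the target function space:

```python
import numpy as np

def lattice_points(n, z):
    """Points of a rank-1 lattice rule: x_i = frac(i * z / n), i = 0..n-1."""
    i = np.arange(n)[:, None]
    return (i * np.array(z)[None, :] / n) % 1.0

z = [1, 433, 229, 377]           # illustrative generating vector, d = 4
pts = lattice_points(1024, z)

# Smooth test integrand on [0,1]^4 with exact integral 1
f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=-1)
qmc_est = f(pts).mean()
```

Unlike pseudo-random Monte Carlo points, the lattice covers each coordinate with equally spaced values, which is what drives the faster convergence for smooth integrands.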
Shardlow, Tony 
Milstein method for stochastic delay differential equations 

We introduce a simple method for proving convergence of the Milstein method for stochastic delay differential equations. 
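For concreteness, here is a sketch of the Milstein scheme for a scalar delay equation whose diffusion depends only on the current state (the delayed-diffusion case requires additional correction terms). This is my own illustration, not the construction used in the convergence proof:

```python
import numpy as np

def milstein_sdde(x0_hist, f, g, dg, tau, T, n, seed=0):
    """Milstein scheme for dX = f(X_t, X_{t-tau}) dt + g(X_t) dW, with the
    diffusion depending only on the current state.  x0_hist(t) supplies the
    initial history on [-tau, 0]; dg is the derivative g'."""
    rng = np.random.default_rng(seed)
    dt = T / n
    lag = int(round(tau / dt))               # delay measured in steps
    X = np.empty(n + 1 + lag)
    for j in range(lag + 1):                 # prefill the history segment
        X[j] = x0_hist(-tau + j * dt)
    for k in range(n):
        i = lag + k                          # index of the current time t_k
        dW = np.sqrt(dt) * rng.standard_normal()
        gx = g(X[i])
        X[i + 1] = (X[i] + f(X[i], X[i - lag]) * dt + gx * dW
                    + 0.5 * gx * dg(X[i]) * (dW**2 - dt))  # Milstein correction
    return X[lag:]                           # solution on [0, T]

path = milstein_sdde(lambda t: 1.0, f=lambda x, xd: -0.5 * xd,
                     g=lambda x: 0.2 * x, dg=lambda x: 0.2,
                     tau=0.5, T=2.0, n=400)
```

Choosing the step so that tau is an integer multiple of dt, as above, keeps the delayed state on the grid and avoids interpolation of the history.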
Sloan, Ian 
High dimensional integration and approximation 

High dimensional integration and approximation problems arise in a natural way whenever there is uncertainty with respect to continuous (typically time or space) variables, because such problems involve (in principle) an infinite number of real-number random variables; and for every random variable added to the description, there is one more dimension.
In this short course I will discuss integration and approximation methods based on sampling a given high-dimensional function at a finite number of points. The methods include Monte Carlo, quasi-Monte Carlo and sparse grid methods.
For the Monte Carlo method (based on random numbers) the general emphasis is on reducing the variance. For the other methods, which are deterministic in nature, reducing the variance is still useful, but there are now many other concerns. Often a key concern is to reduce the effective dimensionality of a problem: a nominally high-dimensional problem can sometimes be approximated by a low-dimensional one; or the given problem can be converted to a low-dimensional one by a judicious change of variable.
For the quasi-Monte Carlo method a simple relabelling of the variables can be important, because (as we shall explain) quasi-Monte Carlo methods are typically more effective in their early dimensions.
Geometry of the integration or approximation region is another major concern. For example, if the problem is to estimate an expected value with respect to a high-dimensional multivariate Gaussian probability distribution, then the given domain of integration is a multidimensional Euclidean space. Methods of transforming such integrals to the often preferred unit cube will be discussed.
In approximation a central concern is the choice of the (sparse) approximation space: it can be either global (polynomial or trigonometric polynomial) or piecewise polynomial. 
Stoyanov, Miroslav 
Stochastic Peridynamics and Finite Temperature Molecular Dynamics 

Peridynamics is a formulation of continuum solid mechanics based on integral equations. We view the model as an upscaling of discrete molecular dynamics and we introduce a stochastic thermostat. The result is a stochastic peridynamics-thermostat model. In addition, we consider a discontinuous Galerkin finite element scheme for accurate simulations of the model. 
Stuart, Andrew 
Bayesian well-posedness for inverse problems 

I will describe a theory of well-posedness for inverse problems arising in differential equations. The approach is based on adopting a Bayesian viewpoint on function space. Under natural assumptions on the forward problem, this gives Lipschitz continuity of the posterior measure with respect to changes in the data. Applications are numerous and include data assimilation in fluid mechanics and subsurface geophysics. 
Tartakovsky, Daniel 
PDF methods for uncertainty quantification 

We consider a class of physical phenomena that are described by parabolic nonlinear partial differential equations with uncertain coefficients. To quantify predictive uncertainty in such systems, we treat uncertain coefficients as random fields with known statistics, which renders the corresponding governing nonlinear differential equations stochastic. We derive a deterministic equation for the probability density function (PDF) of the system state. By going beyond computing the system state's mean and variance, which is the standard practice in many uncertainty quantification studies, the PDF equations enable one to compute probabilities of rare events (distribution tails), which are required in modern probabilistic risk analyses. 
Tavener, Simon 
Sensitivity analysis for parametrized nonlinear maps and ODEs 

Parametrized systems of nonlinear differential equations and nonlinear maps arise in a number of applications in physiology and ecology, as well as in other areas of science and technology. Traditional, forward sensitivity analysis based on linearization around a chosen set of parameters provides sensitivity information about that single set of values. When there is uncertainty regarding the values of the parameters or when the parameters are known to be stochastic quantities, traditional sensitivity analysis therefore provides information at a single point in what may be a high dimensional space of possible parameter values. We examine the use of spectral collocation techniques coupled with sparse grid ideas to provide a different measure of sensitivity for nonlinear multiparameter systems, one that is integrated appropriately over the entire multidimensional parameter space. This general framework has the potential to be modified in order to perform parameter fitting for nonlinear models and to conduct inverse sensitivity analyses. 
Teckentrup, Aretha 
Multilevel Monte Carlo for partial differential equations with random coefficients 

When solving partial differential equations (PDEs) with random coefficients numerically, one is usually interested in finding the expected value of a certain statistic of the solution. A common way to obtain estimates is to use Monte Carlo methods combined with spatial discretisations of the PDE on sufficiently fine grids. However, standard Monte Carlo methods have a rather slow rate of convergence with respect to the number of samples used, and individual samples of the solution are usually costly to compute numerically. In this talk we introduce the multilevel Monte Carlo method, with the aim of achieving the same accuracy as standard Monte Carlo at a much lower computational cost. The method exploits the linearity of expectation by expressing the quantity of interest on a fine spatial grid in terms of the same quantity on a coarser grid and some “correction” terms. It has been extensively studied in the context of stochastic differential equations in the area of financial mathematics by Mike Giles and coauthors. We will give an outline of the method applied to elliptic PDEs with random coefficients, and also show some numerical results demonstrating the resulting reduction in computational cost. The efficiency of the multilevel method is assessed by comparing it to standard Monte Carlo. 
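The telescoping-sum structure of the multilevel estimator can be sketched on a toy problem with an artificial level-dependent bias. The levels, sample counts, and coupling below are my own illustrative choices, not a PDE discretization:

```python
import numpy as np

def mlmc(sampler, levels, n_samples, seed=0):
    """Multilevel Monte Carlo: E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}].
    sampler(l, rng) must return the pair (Q_l, Q_{l-1}) computed from the
    SAME random input (Q_{-1} is taken as 0 on the coarsest level)."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for l, n in zip(levels, n_samples):
        diffs = [np.subtract(*sampler(l, rng)) for _ in range(n)]
        est += np.mean(diffs)
    return est

# Toy problem: Q_l approximates E[Z^2] = 1 (Z ~ N(0,1)) with a level-dependent
# discretization bias 2^-l; the coupling reuses the same Z on both levels,
# so the level differences have tiny (here zero) variance.
def sampler(l, rng):
    z = rng.standard_normal()
    fine = z**2 + 2.0 ** (-l)
    coarse = (z**2 + 2.0 ** (-(l - 1))) if l > 0 else 0.0
    return fine, coarse

est = mlmc(sampler, levels=[0, 1, 2, 3], n_samples=[4000, 1000, 250, 60])
```

Because the coupled differences are cheap and low-variance, most samples are spent on the coarse level, which is the source of the cost reduction the abstract describes.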
Ullmann, Elisabeth 
Iterative Solvers for Stochastic Galerkin discretizations of PDEs with random data 

Many physical processes occurring in different areas of science and engineering are modelled by partial differential equations (PDEs). The numerical simulation of such processes requires input data which are, however, often subject to considerable uncertainty. Quantitative statements on the effect of these data uncertainties are therefore desirable for the evaluation of simulation results.
Stochastic Galerkin methods are a well-established discretization tool for PDEs with random data modelled in terms of random fields. The method couples physical degrees of freedom arising from standard finite element discretizations with stochastic degrees of freedom. For this reason the number of unknowns in the Galerkin equations grows rapidly: stochastic Galerkin equations can involve up to 10,000 times more unknowns than deterministic Galerkin equations. Consequently, the design of efficient and robust iterative solvers for these huge linear systems of equations is essential for the numerical simulation with random data.
In recent years mostly two types of stochastic shape functions have been studied: multivariate orthogonal polynomials (spectral stochastic finite element method) and interpolation polynomials associated with certain quadrature nodes (stochastic collocation method). In this presentation we focus on the first approach and consider the stochastic diffusion equation as a model problem. Since stochastic shape functions based on orthogonal polynomials preclude the decoupling of the Galerkin equations in general [3], efficient preconditioners for the coupled Galerkin equations are required.
We review a recently proposed Kronecker product preconditioner [1] which, in contrast to a popular mean-based preconditioner, makes use of the entire information contained in the Galerkin matrix. Furthermore, we extend the idea of Kronecker product preconditioning to the discretized mixed formulation of the stochastic diffusion equation [2]. We demonstrate numerically the improved robustness of Kronecker product preconditioners compared to the mean-based approach with respect to key statistical parameters in the case where the diffusion coefficient is a lognormal random field. This model arises, for example, from stationary groundwater flow simulations with random permeabilities.
References:
[1] E. Ullmann, "A Kronecker Product Preconditioner for Stochastic Galerkin Finite Element Discretizations," to appear in SIAM J. Sci. Comput. (2010).
[2] C. E. Powell and E. Ullmann, "Preconditioning Stochastic Galerkin Saddle Point Systems," MIMS EPrint 2009.88, School of Mathematics, University of Manchester, ISSN 1749-9097. Submitted, November 2009.
[3] O. G. Ernst and E. Ullmann, "Stochastic Galerkin Matrices," to appear in SIAM J. Matrix Anal. Appl. (2010). 
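The effect of a mean-based preconditioner on a Kronecker-structured Galerkin matrix can be illustrated on a small synthetic system. The random SPD blocks below are my own choices, not a discretized diffusion problem, and the construction only mimics the sum-of-Kronecker-products structure A = sum_k G_k ⊗ K_k:

```python
import numpy as np

rng = np.random.default_rng(2)

def spd(n, scale=1.0):
    """Random symmetric positive definite matrix of size n."""
    B = rng.standard_normal((n, n))
    return scale * (B @ B.T + n * np.eye(n))

nx, npoly = 6, 4                     # spatial dofs, stochastic dofs
G0, K0 = np.eye(npoly), spd(nx)      # mean-value blocks
A = np.kron(G0, K0)                  # dominant mean term of the Galerkin matrix
for _ in range(3):                   # small stochastic perturbation terms
    A += np.kron(spd(npoly, 0.02), spd(nx, 0.02))

P_mean = np.kron(G0, K0)             # mean-based preconditioner
kappa_A = np.linalg.cond(A)
kappa_P = np.linalg.cond(np.linalg.solve(P_mean, A))  # preconditioned system
```

When the perturbation terms are small, the mean-based preconditioned system is close to the identity; the Kronecker product preconditioner of [1] aims to stay robust as the perturbations (i.e. the variance of the coefficient) grow.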
von Schwerin, Erik 
Adaptive Multilevel Monte Carlo Simulation 

This talk presents a generalization of the multilevel Forward Euler Monte Carlo method introduced in [1] for the approximation of expected values depending on the solution to an Itô stochastic differential equation. The work [1] proposed and analyzed a Forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, Forward Euler Monte Carlo method. Here we apply the multilevel approach to a hierarchy of non-uniform time discretizations, generated by adaptive algorithms introduced in [2,3]. These adaptive algorithms apply either deterministic time steps or stochastic time steps and are based on adjoint-weighted a posteriori error expansions first developed in [4]. We will present numerical results, including one case with singular drift and one with stopped diffusion, which exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^-3) for the single level adaptive algorithm to O((log(TOL)/TOL)^2).
This is a joint work with H. Hoel, A. Szepessy and R. Tempone.
[1] M. B. Giles, "Multilevel Monte Carlo path simulation."
[2] K.-S. Moon, E. von Schwerin, A. Szepessy and R. Tempone, "An adaptive algorithm for ordinary, stochastic and partial differential equations."
[3] K.-S. Moon, A. Szepessy, R. Tempone and G. E. Zouraris, "Convergence rates for adaptive weak approximation of stochastic differential equations."
[4] A. Szepessy, R. Tempone and G. E. Zouraris, "Adaptive weak approximation of stochastic differential equations." 
Webster, Clayton 
The analysis and applications of sparse grid stochastic collocation techniques within the context of uncertainty quantification 

Our modern treatment of predicting the behavior of physical and engineering problems relies on mathematical modeling followed by computer simulation. The modeling process may describe the solution in terms of high dimensional spaces, particularly in the case when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. Therefore, the goal of the mathematical and computational analysis becomes the prediction of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of some responses of the system (quantities of physical interest), given the probability distribution of the input random data. For higher accuracy, the computer simulation must increase the number of random variables (dimensions), and expend more effort approximating the quantity of interest in each individual dimension. The resulting explosion in computational effort is a symptom of the curse of dimensionality. Sparse grid techniques discretize these higher dimensional problems with a feasible number of unknowns, leading to usable methods. It is the aim of this talk to survey the fundamentals and analysis of standard sparse grid stochastic collocation (SC) methods as well as a dimension-adaptive (anisotropic) sparse grid SC approach within the context of uncertainty quantification. This talk includes both a priori and a posteriori approaches to adapting the anisotropy of the sparse grids to applications of both linear and nonlinear problems. Our rigorously derived error estimates for the fully discrete problem will be described and used to compare the efficiency of the method with several other techniques. These methods have proven to have dramatic impact on several application areas, including financial mathematics, statistical mechanics, bioinformatics, and other fields that must properly predict certain model behaviors. 
However, in many of these fields it is often the case that not all random input coefficients can be fully realized; therefore, in this talk we will provide a mechanism for determining statistical information about the input parameters from, e.g., measurements of output quantities. This parameter identification algorithm couples an adjoint-based deterministic algorithm with the sparse grid SC approach for tracking statistical quantities of interest from the computational solutions of PDEs driven by random inputs. Numerical examples illustrate the theoretical results and show that, for moderately large dimensional problems, the sparse grid approach with a properly derived anisotropy is very efficient and superior to all examined methods, including Monte Carlo. 
Worden, Keith 
Bayesian sensitivity analysis of a heart valve model 

If it would be interesting to the audience, this talk would be about Bayesian sensitivity analysis of a large nonlinear finite element model of a heart valve. The model possesses material nonlinearity and also has contacts. We know that the model bifurcates over the required parameter ranges, and I could discuss how this might be addressed. 
Xiu, Dongbin 
Uncertainty analysis for complex systems: algorithms and data 

The field of uncertainty quantification has received an increasing amount of attention recently. Extensive research efforts have been devoted to it and many novel numerical techniques have been developed. These techniques aim to conduct stochastic simulations for large-scale complex systems. In this talk we will review one of the most widely used approaches: generalized polynomial chaos (gPC). The gPC-based methods employ orthogonal polynomials in random space and take advantage of the solution smoothness (whenever possible). The features of various gPC numerical schemes will be reviewed. Furthermore, we will discuss how real observational data can be utilized and combined with stochastic simulations. The resulting data-driven uncertainty analysis can provide much more insight into the true physics and produce predictions with high fidelity. 
Zabaras, Nicholas 
Model reduction for Stochastic PDEs 

We will discuss methods for addressing the curse of dimensionality in the solution of stochastic PDEs. We will start with model reduction techniques for data-driven stochastic input models. They include multidimensional scaling, kernel PCA and a biorthogonal Karhunen-Loève expansion for capturing variability of microstructure topology in the continuum. Particular developments will be shown for heterogeneous media. We will then proceed to describe the High Dimensional Model Representation (HDMR) technique, which represents the model output as a finite hierarchical correlated function expansion in terms of the stochastic inputs, starting from lower-order and proceeding to higher-order component functions. An adaptive version of HDMR is developed to automatically detect the important stochastic dimensions and construct higher-order terms only as functions of the important dimensions. In this work, we integrate the adaptive sparse grid collocation (ASGC) method with HDMR to solve the resulting subproblems. This results in a low-dimensional stochastic reduced-order model of the high-dimensional stochastic problem. A number of examples will be shown, including stochastic multiscale modeling of flow through random heterogeneous media. 
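A first-order cut-HDMR decomposition, the lowest level of the hierarchy described above, can be sketched as follows (the anchor point and 1-D grids are illustrative choices of mine):

```python
import numpy as np

def cut_hdmr_first_order(f, xbar, grids):
    """First-order cut-HDMR of f around the anchor point xbar:
    f(x) ~ f0 + sum_i f_i(x_i), with f_i(x_i) = f(xbar with x_i swapped in) - f0.
    Returns f0 and the component functions tabulated on the given 1-D grids."""
    d = len(xbar)
    f0 = f(np.asarray(xbar, dtype=float))    # zeroth-order term: f at the anchor
    comps = []
    for i in range(d):
        vals = []
        for xi in grids[i]:
            x = np.array(xbar, dtype=float)  # vary one coordinate at a time
            x[i] = xi
            vals.append(f(x) - f0)
        comps.append(np.array(vals))
    return f0, comps

# Additive test function: first-order HDMR is exact for f(x) = sum_i x_i^2
f = lambda x: float(np.sum(x ** 2))
grids = [np.linspace(-1.0, 1.0, 5)] * 3
f0, comps = cut_hdmr_first_order(f, xbar=np.zeros(3), grids=grids)
```

Higher-order component functions are built analogously by varying pairs (and larger subsets) of coordinates; the adaptive variant described in the abstract constructs those terms only for the dimensions flagged as important.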