European Numerical Mathematics and
Advanced Applications Conference 2019
30 September - 4 October 2019, Egmond aan Zee, The Netherlands
10:40   MS34: Randomized algorithms and parametrized PDE (Part 2)
Chair: Olivier Zahm
10:40
25 mins
Adaptive approximation by optimal weighted least-squares methods
Giovanni Migliorati
Abstract: We discuss adaptive methods based on optimal weighted least squares for the numerical approximation of the solution to elliptic PDEs with stochastic data. Such methods approximate the parameter-to-solution map in a suitable multivariate polynomial approximation space that is adaptively constructed from snapshots of the PDE solution. When the number of snapshots is only linearly proportional to the dimension of the approximation space, up to logarithmic terms, the method is provably stable and accurate, and its approximation error is comparable to the best approximation error achievable on the same space.
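As a rough illustration of the recipe described above (and not the speaker's implementation), the following Python sketch performs optimal weighted least-squares approximation in one dimension with a Legendre basis on [-1, 1]: points are sampled from the density induced by the orthonormal basis, each residual is weighted by m divided by the sum of squared basis functions, and the number of samples is taken proportional to m log m. The function names and the rejection-sampling bound are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def orthonormal_legendre(x, m):
    """Evaluate the first m Legendre polynomials, orthonormal w.r.t. dx/2 on [-1, 1]."""
    V = legvander(x, m - 1)                       # columns P_0 ... P_{m-1}
    return V * np.sqrt(2 * np.arange(m) + 1)      # normalization factors sqrt(2j+1)

def sample_optimal_density(m, n, rng):
    """Rejection-sample n points from the density (1/(2m)) * sum_j phi_j(x)^2 on [-1, 1]."""
    pts = []
    while len(pts) < n:
        x = rng.uniform(-1.0, 1.0, size=n)
        k = (orthonormal_legendre(x, m) ** 2).sum(axis=1)     # sum_j phi_j(x)^2
        accept = rng.uniform(0.0, 1.0, size=n) < k / m ** 2   # uses the bound k <= m^2
        pts.extend(x[accept])
    return np.array(pts[:n])

def weighted_least_squares(u, m, n, rng):
    """Fit a degree-(m-1) polynomial to u from n weighted random samples."""
    x = sample_optimal_density(m, n, rng)
    Phi = orthonormal_legendre(x, m)
    w = m / (Phi ** 2).sum(axis=1)                # optimal weights w(x) = m / sum_j phi_j(x)^2
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(sw * Phi, sw[:, 0] * u(x), rcond=None)
    return coef

rng = np.random.default_rng(0)
m = 10                                            # dimension of the polynomial space
n = int(3 * m * np.log(m))                        # number of samples ~ m log m
coef = weighted_least_squares(lambda t: np.exp(t) * np.cos(4 * t), m, n, rng)
```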
11:05
25 mins
A hierarchical sampling approach for polynomial approximation in unbounded domains
Moulay Abdellah Chkifa
Abstract: The approximation of parameterized functions, often arising as solutions of parameterized PDEs with a parameter vector varying in an unbounded domain, is a major challenge. Besides the inherent curse of dimensionality, which ensues from accounting for very high-dimensional parameter vectors in high-fidelity models, the unboundedness of the domain also poses a real challenge. We consider the setting where the parameter vector consists of d independent Gaussian random variables. While it is natural in the context of parameterized PDEs to describe each individual parameter in a Gaussian-like manner, with a dominant value (mean) and a tendency of variation around it (variance), the analysis of the approximation of the associated solutions by collocation techniques is less advanced than in the case of bounded domains (for instance, uniformly distributed parameters). In the present work, we give an overview of stochastic collocation techniques in the aforementioned setting, mainly sparse-grid interpolation based on Gauss-Hermite abscissas and weighted Leja sequences, and then introduce a new interpolation scheme with proven stability properties. Some numerical experiments will also be presented.
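To make one of the ingredients above concrete, the sketch below builds a weighted Leja sequence for the Gaussian weight by greedily maximizing sqrt(w(x)) times the product of distances to previously chosen nodes, over a truncated candidate grid. This is a generic variant for illustration only, not the new interpolation scheme introduced in the talk; the grid bounds and the choice of 0 as the first node are assumptions.

```python
import numpy as np

def weighted_leja(n_nodes, x_candidates=None):
    """Greedy weighted Leja nodes for the Gaussian weight w(x) = exp(-x^2 / 2)."""
    if x_candidates is None:
        x_candidates = np.linspace(-10.0, 10.0, 20001)   # truncated candidate grid (assumption)
    sqrt_w = np.exp(-x_candidates ** 2 / 4.0)            # square root of the Gaussian weight
    nodes = [0.0]                                        # start at the mean (assumption)
    for _ in range(n_nodes - 1):
        # objective: sqrt(w(x)) * prod over current nodes of |x - x_j|
        obj = sqrt_w * np.prod(
            np.abs(x_candidates[:, None] - np.array(nodes)[None, :]), axis=1)
        nodes.append(x_candidates[np.argmax(obj)])
    return np.array(nodes)

print(weighted_leja(7))   # nested nodes, reusable in a hierarchical interpolation scheme
```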
11:30
25 mins
Randomized residual-based error estimators for parametrized equations
Kathrin Smetana, Olivier Zahm, Anthony T. Patera
Abstract: In this talk we discuss randomized a posteriori error estimators for reduced order approximations of parametrized (partial) differential equations. First, we will present an error estimator that estimates the error between a discrete reference solution, sometimes also called the "truth" solution, and the reduced order approximation in a desired error measure, e.g., user-defined norms or quantities of interest [SmZaPa19]. We make no assumptions on the type of method used to compute the reduced order approximation. The error estimator has several important properties: the effectivity is close to unity, with prescribed lower and upper bounds that hold at a specified high probability; the estimator does not require the calculation of stability (coercivity, or inf-sup) constants; the cost to evaluate the a posteriori error estimator is commensurate with the cost to find the reduced order approximation; and the probabilistic bounds extend to many queries with only a modest increase in cost. To build this estimator, we first estimate the norm of the error with a Monte-Carlo estimator using Gaussian random vectors whose covariance is chosen according to the desired error measure. Then, we introduce a dual problem with a random right-hand side, whose solution allows us to rewrite the error estimator in terms of the residual of the original equation. In order to obtain a fast-to-evaluate estimator, model order reduction methods can be used to approximate the random dual solutions. Here, we propose a greedy algorithm that is guided by a scalar quantity of interest depending on the error estimator. Numerical experiments on a multi-parametric Helmholtz problem demonstrate that this strategy yields rather low-dimensional reduced dual spaces.

Second, we extend the above framework, discussing an error estimator that estimates the error between the exact solution of the partial differential equation and the reduced order approximation in certain error measures. To that end, we exploit Johnson-Lindenstrauss type results in infinite-dimensional Hilbert spaces. Again, the effectivity of the error estimator is close to unity and no calculation or estimation of stability constants is required.

[SmZaPa19] K. Smetana, O. Zahm, and A. T. Patera, Randomized residual-based error estimators for parametrized equations, SIAM J. Sci. Comput., 41(2):A900-A926, 2019.
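The identity at the heart of the estimator can be seen on a small algebraic toy problem: each Gaussian test vector z_i is converted into a dual solution y_i by solving A^T y_i = z_i, so that z_i^T (u - u_red) = y_i^T r with r = b - A u_red, and the error norm is estimated from residuals alone. The Python sketch below is a schematic toy example under strong simplifications (identity covariance, a small random matrix instead of a parametrized PDE, no reduction of the dual problems), not the method of [SmZaPa19].

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 200, 20                                   # problem size, number of Gaussian samples

A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
u_ref = np.linalg.solve(A, b)                    # "truth" solution (toy stand-in)
u_red = u_ref + 1e-3 * rng.standard_normal(n)    # stand-in for a reduced order approximation

# Monte-Carlo estimate of the Euclidean error norm; other error measures would
# correspond to other choices of the covariance of the random vectors z_i.
Z = rng.standard_normal((n, K))                  # z_i ~ N(0, I), columns of Z
Y = np.linalg.solve(A.T, Z)                      # dual solutions y_i with A^T y_i = z_i
r = b - A @ u_red                                # residual of the reduced approximation
est = np.sqrt(np.mean((Y.T @ r) ** 2))           # estimator of ||u_ref - u_red||_2

print(est, np.linalg.norm(u_ref - u_red))        # estimator vs. true error
```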