European Numerical Mathematics and
10:30  25 mins
Energy estimates and model order reduction for bilinear systems with Lévy noise
Martin Redmann

Abstract: In this talk, we investigate a large-scale stochastic system with bilinear drift and a linear diffusion term. Such high-dimensional systems appear, for example, when discretizing a stochastic partial differential equation in space. We study a particular model order reduction technique to reduce the order of spatially discretized systems and hence the computational complexity. We introduce suitable Gramians for the system and prove energy estimates that can be used to identify states which contribute only very little to the system dynamics. The reduced system is obtained by removing these states from the original system. We present an L²-error bound for the proposed method applied to stochastic bilinear systems. This result is new even for deterministic bilinear equations. To achieve it, we develop a new technique that is not yet available in the literature.

10:55  25 mins
A sequential sensor selection strategy for hyper-parameterized linear Bayesian inverse problems
Nicole Aretz, Peng Chen, Martin Grepl, Karen Veroy-Grepl

Abstract: We consider optimal sensor placement for hyper-parameterized linear Bayesian inverse problems, where the hyper-parameter characterizes nonlinear flexibilities in the forward model and is considered over a range of possible values. This model variability must be taken into account in the experimental design to guarantee that the Bayesian inverse solution is uniformly informative. In this work, we link the numerical stability of the maximum a posteriori point and A-optimal experimental design to an observability coefficient that directly describes the influence of the chosen sensors. We propose an algorithm that iteratively chooses the sensor locations to improve this coefficient and thereby decrease the eigenvalues of the posterior covariance matrix.
This algorithm exploits the structure of the solution manifold in the hyper-parameter domain via a reduced basis surrogate solution for computational efficiency. We illustrate our results with a steady-state thermal conduction problem.

11:20  25 mins
Analysis of the dynamical low rank equations for random semi-linear parabolic problems
Yoshihito Kazashi, Fabio Nobile

Abstract: In this joint work with Fabio Nobile, we discuss a reduced basis method, the Dynamical Low-Rank (DLR) approximation, for the numerical solution of semi-linear parabolic partial differential equations with random parameters. The idea of this method is to approximate the solution of the problem as a linear combination of products of dynamical deterministic and stochastic basis functions, both of which evolve over time. The DLR approximation is given as the solution of a semi-discrete, highly nonlinear system of equations. Our interest in this talk is an existence result: we apply the DLR method to a class of semi-linear random parabolic evolution equations and discuss the existence of the solution of the resulting semi-discrete equation. It turns out that finding a suitable equivalent formulation of the original problem is important. After introducing this formulation, the DLR equation is recast as an abstract Cauchy problem in a suitable linear space, for which existence and uniqueness of the solution are established. This work is motivated by [1--3].

11:45  25 mins
Low-rank tensor train methods for Isogeometric analysis
Alexandra Buenger, Martin Stoll

Abstract: Isogeometric analysis (IgA) is a popular method for the discretization of partial differential equations, motivated by the use of NURBS (non-uniform rational B-splines) for geometric representations in industry and science. In IgA, the domain representation as well as the discrete solution of a PDE are described by the same global spline functions. However, the use of an exact geometric representation comes at a cost.
Due to the global nature and large overlapping support of the basis functions, system matrix assembly becomes especially costly in IgA. To reduce computing time and storage requirements, low-rank tensor methods have become a promising tool. We have constructed a framework applying low-rank tensor train calculations to IgA in order to efficiently solve PDE-constrained optimization problems on complex three-dimensional domains without assembling the actual system matrices. The method exploits the Kronecker product structure of the underlying spline space, reducing the three-dimensional system matrices to a low-rank format as a sum of a small number of Kronecker products, $M = \sum_{i=1}^n M_i^{(1)} \otimes M_i^{(2)} \otimes M_i^{(3)}$, where $n$ is determined by the chosen size of the low-rank approximation. For the assembly of the smaller matrices $M_i^{(d)}$, only univariate integration in the corresponding geometric direction $d$ is performed, significantly reducing computation time and storage requirements. The developed method automatically detects the ranks for a given domain and conducts all necessary calculations in a memory-efficient low-rank tensor train format. We demonstrate the applicability of this framework to efficiently solving large-scale PDE-constrained optimization problems, as well as an extension to statistical inverse problems using the iterative AMEn block solve algorithm, which preserves and exploits the low-rank format of the system matrices.
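The Kronecker-sum structure $M = \sum_{i=1}^n M_i^{(1)} \otimes M_i^{(2)} \otimes M_i^{(3)}$ in the abstract above can be sketched with a minimal NumPy example. This is not the authors' implementation: the univariate factors here are random stand-ins for the univariate IgA integrals, and the sizes `m` and `n_terms` are illustrative. The point is that $M$ never needs to be formed; a matrix-vector product can be carried out mode-by-mode on the small factors.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the talk): univariate
# factors of size m x m give a full 3D operator of size m^3 x m^3.
m, n_terms = 4, 2
rng = np.random.default_rng(0)

# Factor matrices M_i^{(d)}, one per geometric direction d = 1, 2, 3;
# random stand-ins for the univariate integration results.
factors = [[rng.standard_normal((m, m)) for _ in range(3)]
           for _ in range(n_terms)]

# Full operator, assembled explicitly only to verify the shortcut below.
M_full = sum(np.kron(np.kron(A, B), C) for A, B, C in factors)

def kron_matvec(factors, x):
    """Apply M = sum_i A_i kron B_i kron C_i to x without forming M:
    reshape x into an m x m x m tensor and contract each mode with
    the corresponding small factor."""
    X = x.reshape(m, m, m)
    out = np.zeros_like(X)
    for A, B, C in factors:
        out += np.einsum('ia,jb,kc,abc->ijk', A, B, C, X)
    return out.reshape(-1)

x = rng.standard_normal(m**3)
assert np.allclose(M_full @ x, kron_matvec(factors, x))
```

The storage contrast is the motivation: the explicit operator holds $m^6$ entries, while the factored form holds only $3nm^2$, which is what makes three-dimensional IgA problems tractable in a low-rank tensor format.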