European Numerical Mathematics and
Advanced Applications Conference 2019
30th Sep – 4th Oct 2019, Egmond aan Zee, The Netherlands
08:30   MS43: Efficient parametrization of complex models (Part 1)
Chair: Christian Rieger
08:30
25 mins
Adaptive Localized Reduced Basis Methods
Andreas Buhr, Mario Ohlberger, Stephan Rave, Felix Schindler
Abstract: Snapshot-based model order reduction techniques such as Reduced Basis (RB) methods (e.g. [1]) have been successfully applied in a wide range of application areas to obtain online-efficient parametrized reduced order models (ROMs) which can act as high-quality surrogates for the full order model (FOM) in many-query or real-time simulation applications. However, in many practical cases the computational effort for the creation of the ROM must be taken into account. For very large problems, even the computation of a few solution snapshots of the FOM can be prohibitively expensive. Recently, several localized RB techniques have been studied which aim at mitigating this issue by constructing local reduced approximation spaces based on a spatial partitioning of the computational domain and solving adequate local problems on the individual subdomains. In [2] (and references therein) a Localized Reduced Basis Multiscale Method (LRBMS) was introduced in the context of elliptic multiscale problems, which is based on a discontinuous Galerkin coupling of the localized reduced order systems at the domain interfaces. In [3] the ArbiLoMod method was introduced, which is based on a conforming coupling of the subdomains via additional subdomain interface approximation spaces. For both methods, rigorous a posteriori estimates of the model reduction error ensure the accuracy of the ROM approximations of the solution ([2, 3, 4]). Moreover, these estimates can be used to steer an enrichment of the local RB spaces after an initial offline preparation of the ROM, allowing one to adaptively fit the ROM to different parameter ranges. In this contribution we present recent advances in the development of localized RB methods. Emphasis will be placed on a discussion of the enrichment process and on balancing the computational effort for enrichment against the effort required for the initial training of the ROM. In particular, we will discuss connections with domain decomposition methods, where the training of the ROM can be seen as the computation of a coarse space that ensures the scalability of later enrichment steps.
References
[1] A. Quarteroni, A. Manzoni, and F. Negri. Reduced Basis Methods for Partial Differential Equations, volume 92 of La Matematica per il 3+2. Springer International Publishing, 2016.
[2] M. Ohlberger and F. Schindler. Error control for the localized reduced basis multiscale method with adaptive on-line enrichment. SIAM J. Sci. Comput., 37(6):A2865–A2895, 2015.
[3] A. Buhr, C. Engwer, M. Ohlberger, and S. Rave. ArbiLoMod, a simulation technique designed for arbitrary local modifications. SIAM J. Sci. Comput., 39(4):A1435–A1465, 2017.
[4] M. Ohlberger, S. Rave, and F. Schindler. True error control for the localized reduced basis method for parabolic problems. In P. Benner, M. Ohlberger, A. Patera, G. Rozza, and K. Urban, editors, Model Reduction of Parametrized Systems, number 17 in MS&A, pages 169–182. Springer International Publishing, 2017.
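For orientation, the following minimal Python sketch illustrates the kind of estimator-driven online enrichment loop described in the abstract. The objects `rom` and `fom` and all method names are hypothetical placeholders assumed for illustration only; this is not the interface of the methods or implementations cited above.

```python
# Hypothetical sketch of a posteriori-estimator-driven online enrichment for a
# localized RB method. `rom` and `fom` are assumed to expose the methods used
# below; illustrative only, not the authors' implementation.

def solve_with_local_enrichment(rom, fom, mu, tol, max_enrichments=20):
    """Solve the localized ROM for parameter mu, enriching the local RB spaces
    on all subdomains whose local error indicator exceeds tol."""
    u_rb = rom.solve(mu)
    for _ in range(max_enrichments):
        indicators = rom.estimate_local_errors(u_rb, mu)  # one indicator per subdomain
        marked = [sd for sd, eta in indicators.items() if eta > tol]
        if not marked:
            break  # all local indicators below tolerance; accept the reduced solution
        for sd in marked:
            # solve an adequate local problem on the marked subdomain and
            # extend the corresponding local reduced approximation space
            local_snapshot = fom.solve_local_problem(sd, u_rb, mu)
            rom.extend_local_basis(sd, local_snapshot)
        u_rb = rom.solve(mu)
    return u_rb
```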
08:55
25 mins
A Variational Formulation for LTI-Systems and Model Reduction
Karsten Urban, Moritz Feuerle
Abstract: We consider linear dynamical systems. Motivated by recent research on space-time variational methods for evolutionary problems in the context of the Reduced Basis Method (RBM), we consider a variational formulation of the dynamical system. In the spirit of Model Order Reduction (MOR), we treat the control as a parameter, which means that we need to handle a (discretized) parameter function. This setting allows us (1) to construct (standard) residual-based error estimates; (2) to apply a reduction with respect to the time variable; (3) to introduce an online-efficient reduction of the control. We introduce the framework, show some of the analytical results and provide numerical examples. This talk is based upon joint work with Moritz Feuerle (Ulm).
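For orientation, one standard space-time weak formulation of an LTI system is sketched below; this is the generic setting only and not necessarily the precise formulation of the talk. For the state equation
\[
\dot{x}(t) = A\,x(t) + B\,u(t), \quad t \in (0,T), \qquad x(0) = x_0,
\]
one tests against $v \in \mathcal{Y} := L^2(0,T;\mathbb{R}^n)$ and seeks $x \in \mathcal{X} := \{ w \in H^1(0,T;\mathbb{R}^n) : w(0) = x_0 \}$ such that
\[
\int_0^T \big( \dot{x}(t) - A\,x(t) - B\,u(t) \big)^\top v(t)\, \mathrm{d}t = 0 \qquad \text{for all } v \in \mathcal{Y}.
\]
Viewing the control $u$ as the parameter, the residual of this variational problem is what a standard residual-based RB error estimator is built from.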
09:20
25 mins
Biomechanical surrogate modelling using stabilized vectorial greedy kernel methods
Bernard Haasdonk, Tizian Wenzel, Gabriele Santin, Syn Schmitt
Abstract: Greedy kernel approximation algorithms are successful techniques for sparse and accurate data-based modelling and function approximation. Based on a recent idea for the stabilization [11] of such algorithms in the scalar-output case, we here consider the vectorial extension built on VKOGA [12]. We introduce the so-called γ-restricted VKOGA, comment on its analytical properties and present a numerical evaluation on data from a clinically relevant application, the modelling of the human spine. The experiments show that the new stabilized algorithms yield improved accuracy and stability over their non-stabilized counterparts.
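As a generic illustration of the greedy selection idea underlying such methods, the following self-contained sketch performs scalar P-greedy center selection with a Gaussian kernel. It is a minimal sketch of standard P-greedy only; it does not implement the stabilized, γ-restricted or vectorial VKOGA algorithm of the talk, and all parameter choices (kernel width, point counts) are illustrative.

```python
import numpy as np

# Minimal scalar P-greedy center selection with a Gaussian kernel.
# Generic illustration only; NOT the gamma-restricted vectorial VKOGA.

def gauss_kernel(X, Y, eps=2.0):
    """Gaussian kernel matrix k(x, y) = exp(-eps * |x - y|^2)."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps * d2)

def p_greedy(X_cand, n_centers, eps=2.0):
    """Greedily pick centers maximizing the power function over X_cand."""
    selected = []
    power2 = np.ones(len(X_cand))           # k(x, x) = 1 for the Gaussian kernel
    for _ in range(n_centers):
        selected.append(int(np.argmax(power2)))
        C = X_cand[selected]
        K = gauss_kernel(C, C, eps)          # Gram matrix of selected centers
        k_x = gauss_kernel(X_cand, C, eps)   # kernel evaluations at all candidates
        # squared power function: P(x)^2 = k(x, x) - k_x(x) K^{-1} k_x(x)^T
        sol = np.linalg.solve(K, k_x.T)
        power2 = np.maximum(1.0 - np.einsum('ij,ji->i', k_x, sol), 0.0)
    return X_cand[selected]

# usage: select 10 centers from 200 random points in the unit square
centers = p_greedy(np.random.rand(200, 2), n_centers=10)
```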
09:45
25 mins
Convergence Rates for Matrix P-Greedy Variants
Dominik Wittwar, Bernard Haasdonk
Abstract: When using kernel interpolation techniques for constructing a surrogate model, the choice of interpolation points is crucial for the quality of the surrogate. When dealing with vector-valued target functions which are approximated by matrix-valued kernel models, the selection problem is further complicated as not only the choice of points but also the directions in which the data is projected must be determined.
We thus propose variants of Matrix P-greedy algorithms that enable us to iteratively select suitable sets of point-direction pairs with which the approximation space is enriched. We show that the selected pairs result in quasi-optimal convergence rates. Experimentally, we investigate the approximation quality of the different variants.
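For orientation, a common way to quantify the greedy selection criterion in the matrix-valued setting is sketched below; this states the standard notion of a power function for point-direction pairs and is not necessarily the exact criterion of the proposed variants. For a matrix-valued kernel $K : \Omega \times \Omega \to \mathbb{R}^{q \times q}$ and already selected pairs $(x_1, v_1), \dots, (x_n, v_n)$, the squared power function of a new pair $(x, v)$ reads
\[
P_n(x, v)^2 \;=\; v^\top K(x, x)\, v \;-\; k_n(x, v)^\top A_n^{-1}\, k_n(x, v),
\]
with $\big(A_n\big)_{ij} = v_i^\top K(x_i, x_j)\, v_j$ and $\big(k_n(x, v)\big)_i = v_i^\top K(x_i, x)\, v$. A Matrix P-greedy step then selects the next pair by (approximately) maximizing $P_n(x, v)$ over points $x$ and admissible directions $v$.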