European Numerical Mathematics and
Advanced Applications Conference 2019
30th Sep – 4th Oct 2019, Egmond aan Zee, The Netherlands
15:45   MS40: Machine Learning for Numerical Simulation
Chair: Stefan Turek
15:45
25 mins
Machine Learning in adaptive domain decomposition methods -- predicting the geometric location of constraints
Alexander Heinlein, Axel Klawonn, Martin Lanser, Janine Weber
Abstract: The convergence rate of domain decomposition methods is generally determined by the eigenvalues of the preconditioned system. For second-order elliptic partial differential equations, coefficient discontinuities with a large contrast can lead to a deterioration of the convergence rate. A remedy can be obtained by enriching the coarse space with additional elements, often called constraints, which are computed by solving small eigenvalue problems on portions of the interface of the domain decomposition, i.e., on edges in two dimensions or on faces and edges in three dimensions. In the present work, without loss of generality, the focus is on two dimensions. In general, it is difficult to predict on which edges these constraints have to be computed. Here, a machine learning strategy using neural networks is suggested to predict the geometric location of these edges in a preprocessing step. This reduces the number of eigenvalue problems that have to be solved during the iteration. Numerical experiments for model problems and realistic microsections, using regular decompositions as well as decompositions from graph partitioners, are provided, showing very promising results.
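As a small illustration of the idea sketched in the abstract (not the authors' implementation), the following trains a tiny one-hidden-layer neural network on synthetic edge data to classify whether an edge is likely to need an adaptive constraint. The feature construction — sampling the coefficient function at a few points along each edge — and all sizes, names, and hyperparameters are assumptions made for this sketch only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: each "edge" is described by coefficient samples
# at a few points along it (this feature construction is an assumption).
n_edges, n_samples = 400, 8
low = rng.uniform(1.0, 2.0, (n_edges // 2, n_samples))          # smooth coefficient
high = rng.uniform(1.0, 2.0, (n_edges // 2, n_samples))
high[:, ::2] *= 1e4                                              # large jumps along the edge
X = np.vstack([low, high])
y = np.hstack([np.zeros(n_edges // 2), np.ones(n_edges // 2)])   # 1 = constraint needed

X = np.log10(X)                                                  # compress the contrast
X = (X - X.mean(0)) / X.std(0)                                   # standardize features

# One-hidden-layer network trained with plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (n_samples, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, 16); b2 = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):
    H = np.tanh(X @ W1 + b1)                 # hidden layer
    p = sigmoid(H @ W2 + b2)                 # predicted probability "needs constraint"
    g = (p - y) / n_edges                    # gradient of binary cross-entropy
    gW2 = H.T @ g; gb2 = g.sum()
    gH = np.outer(g, W2) * (1 - H**2)        # backpropagate through tanh
    W1 -= 0.5 * X.T @ gH; b1 -= 0.5 * gH.sum(0)
    W2 -= 0.5 * gW2; b2 -= 0.5 * gb2

accuracy = np.mean((p > 0.5) == y)
```

In this spirit, eigenvalue problems would then only be solved on the edges the classifier flags, at the price of an occasional misclassification, which is why the paper reports numerical evidence for the quality of the predictions.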
16:10
25 mins
Basic Machine Learning Approaches for the Acceleration of PDE Simulations
Hannes Ruelmann, Markus Geveler, Stefan Turek
Abstract: While there is no doubt that Machine Learning empowers several scientific fields and industries, and that it drives hardware vendors to adjust their portfolios accordingly, the major question when it comes to actually using it in Scientific (High Performance) Computing is how to employ it efficiently alongside traditional methods. This requires not only adapting existing methods from the field of artificial neural networks to existing simulation pipelines, but also careful performance modelling and engineering. Both require a fundamental understanding of the design and theory of Machine Learning. When discretising Partial Differential Equations (PDEs) for real-world simulations, at a certain point we have to deal with a large number of degrees of freedom, leading to a global system matrix that is large and sparse. Hence, iterative methods have to be chosen over direct ones, and everything comes down to how well the linear solver can adapt to the system to be solved; here, specially tailored solvers implemented in a hardware-oriented way can be orders of magnitude faster than simple ones. For this very general case of the solution of linear systems arising in PDE-based simulations, we demonstrate simple yet comprehensive Machine Learning methods [Ruelmann 2017][Ruelmann et al. 2018] that can accelerate them. We discuss their design, implementation, potential, and efficiency in the context of modern and future compute hardware such as general-purpose GPUs and Tensor Processing Units (TPUs) that are about to hit the mass markets. In addition, we provide insight into other practical situations where Machine Learning can be applied in simulation pipelines and discuss the limits and opportunities concerning implementation, functionality, and performance.
References:
RUELMANN, H., GEVELER, M., AND TUREK, S. 2018. On the prospects of using machine learning for the numerical simulation of PDEs: Training neural networks to assemble approximate inverses. ECCOMAS Newsletter, issue 2018, 27–32.
RUELMANN, H. 2017. Approximation von Matrixinversen mit Hilfe von Machine Learning [Approximation of matrix inverses using machine learning]. Master's thesis, TU Dortmund, Dortmund, Germany.
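The cited work trains neural networks to assemble approximate inverses. As a hedged stand-in for that training process, the sketch below learns the entries of an approximate inverse M directly by gradient descent on the Frobenius residual ‖I − MA‖_F, which is the same kind of training objective with the network replaced by the matrix entries themselves; the test matrix (a 1D Poisson stencil), the step size, and the iteration count are assumptions for illustration.

```python
import numpy as np

n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson stencil [-1, 2, -1]
I = np.eye(n)

M = np.zeros((n, n))                                     # trainable approximate inverse
eta = 0.05                                               # step size (assumption; must be
                                                         # below 1/lambda_max(A A^T))
residuals = [np.linalg.norm(I - M @ A)]
for _ in range(2000):
    R = I - M @ A
    M += eta * R @ A.T            # gradient step on the loss ||I - M A||_F^2
    residuals.append(np.linalg.norm(I - M @ A))
```

The learned M can then serve as a preconditioner in an iterative solver; the low-frequency modes of this ill-conditioned A converge slowly under plain gradient descent, which hints at why the hardware-aware design and careful performance engineering discussed in the abstract matter in practice.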
16:35
25 mins
Optimizing Geometric Multigrid with Evolutionary Computation
Jonas Schmitt
Abstract: In many cases the construction of an efficient multigrid solver is a difficult task which requires a high degree of expertise in numerical mathematics. Here we present how evolutionary computation, a subfield of artificial intelligence, can be used to optimize geometric multigrid methods in a fully automatic way. A multigrid solver is represented in the form of mathematical expressions which we generate based on a tailored grammar. The quality of each solver is evaluated in terms of convergence and compute performance using automated Local Fourier Analysis (LFA) and performance modeling, respectively, which forms the basis for a multi-objective optimization with strongly typed genetic programming. To target concrete applications, scalable implementations of an evolved solver can be generated automatically with the ExaStencils code generation framework. We demonstrate our approach by constructing multigrid solvers that outperform well-known methods in a number of test cases.
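The paper evolves entire solver expressions via grammar-guided genetic programming with LFA-based fitness. As a much smaller illustration of the same loop — mutate, evaluate by Fourier analysis, select — the sketch below uses a (1+1) evolution strategy to tune a single multigrid component: the weight ω of a damped Jacobi smoother for the 1D Poisson problem, with the LFA smoothing factor (computed here directly from the known smoother symbol, a stand-in for automated LFA) as the fitness. All parameters are assumptions for this sketch.

```python
import numpy as np

n = 64
thetas = np.pi * np.arange(n // 2, n) / n        # high-frequency Fourier modes

def smoothing_factor(omega):
    """LFA smoothing factor of damped Jacobi for the 1D Poisson stencil [-1, 2, -1]."""
    return np.max(np.abs(1.0 - omega * (1.0 - np.cos(thetas))))

rng = np.random.default_rng(1)
omega, fit = 1.0, smoothing_factor(1.0)          # start from undamped Jacobi
for _ in range(300):                             # (1+1) evolution strategy
    child = np.clip(omega + rng.normal(0, 0.1), 0.05, 1.95)  # Gaussian mutation
    f = smoothing_factor(child)
    if f < fit:                                  # keep the child only if it improves
        omega, fit = child, f
```

For this model problem the optimum is known analytically (ω = 2/3 with smoothing factor 1/3), so the evolved weight can be checked against theory — in the general setting of the paper no such closed form exists, which is exactly what motivates the automated search.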
17:00
25 mins
Logistic Regression in Potential Modeling
Samuel Kost, Oliver Rheinbach, Helmut Schaeben
Abstract: Logistic regression for the targeting of resources (potential modeling) is described. In comparison with neural networks, regression methods can have the advantage of better interpretability of the results and a better understanding of the problem, since explicit models are used. A search strategy to find suitable explicit models is also described.
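To make the interpretability argument concrete, the following sketch fits a logistic regression to synthetic potential-modeling data by gradient descent on the negative log-likelihood. The two evidence layers (a geochemical anomaly that favours occurrence, a fault distance that disfavours it) and all coefficients are invented for illustration; they are not from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Synthetic evidence layers (assumptions for illustration):
# x1 = geochemical anomaly (favours occurrence), x2 = distance to fault (disfavours it).
x1 = rng.normal(0, 1, n)
x2 = rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), x1, x2])        # intercept + two predictors
true_beta = np.array([-1.0, 1.5, -2.0])          # generative log-odds coefficients
p_true = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.uniform(size=n) < p_true).astype(float) # simulated known occurrences

# Fit by gradient descent on the average negative log-likelihood.
beta = np.zeros(3)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta -= 0.5 * X.T @ (p - y) / n
```

Each fitted coefficient is directly readable as the log-odds contribution of one evidence layer — here the fit recovers the signs of the generative model — which is the kind of explicit, inspectable model the abstract contrasts with a neural network.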