Papers by Matthias Moller
Computing, Mar 16, 2014
In the original publication, the Fig. (left diagram) was published incorrectly. The corrected Fig. is given on the following page. The online version of the original article can be found under

Modern high-resolution schemes commonly in use rely on a suitable blending of two imperfect methods to recover the high accuracy of the original approximation in regions where the solution is sufficiently smooth and switch to the diffusive low-order scheme in the vicinity of discontinuities. Thus, there are three crucial ingredients to be considered: a high-order approximation for the accurate computation of smooth solutions, a non-oscillatory low-order scheme used near steep gradients, and a mechanism to automatically switch between both methods. In this work, the Algebraic Flux Correction (AFC) methodology is adopted, which was developed by Kuzmin [133] and Kuzmin and Turek [144] for the design of high-resolution finite element schemes applicable to convection-dominated partial differential equations. Starting from the standard Galerkin finite element method, the low-order discretization is constructed by performing row-sum mass lumping and eliminating negative off-diagonal entries from the discrete transport operator by adding a proper amount of artificial diffusion. This technique, termed discrete upwinding [144], amounts to looping over pairs of off-diagonal matrix coefficients and leads to the definition of discrete diffusion operators which can be safely applied without violating mass conservation principles. The difference between the high- and low-order residuals is decomposed into sums of antidiffusive fluxes, which need to be limited in order to prevent the formation of non-physical oscillations that would be generated if unconstrained antidiffusion were applied. A node-based limiting strategy is employed which is inspired by Zalesak's multidimensional flux-corrected transport algorithm.
All manipulations required to construct high-resolution schemes of this type are performed on the matrix level; therefore, algebraic flux correction can be applied to discrete transport operators of any origin. In this thesis, the focus is placed on implicit high-resolution finite element schemes based on the concept of algebraic flux correction which are applicable to unstructured meshes and complex domains. This is in contrast to many flow solvers commonly in use, which are typically based on finite volume or discontinuous Galerkin finite element approximations making use of fully explicit time-stepping schemes. Explicit time integration techniques such as the forward Euler method may require extra stabilization, and moreover, the size of the time step is subject to a restrictive CFL condition. In short, the largest admissible global time step is dictated by local CFL numbers, which are likely to become very restrictive for locally refined meshes. In contrast, implicit time discretizations can be operated at larger time steps since stability conditions (if any) are less restrictive. However, an algebraic system of equations must be solved in each step, so that implicit schemes are frequently dismissed as non-competitive and their explicit counterparts are preferred. Furthermore, the application of algebraic flux correction demands the solution of nonlinear systems even for linear governing equations. This can be accomplished by means of a fixed-point defect correction scheme, whereby the evolution operator of the underlying low-order discretization constitutes a viable preconditioner. On the other hand, convergence rates observed for standard defect correction schemes tend to deteriorate significantly as the time step is increased, and convergence may even fail completely.
In this work, the algebraic flux correction methodology is equipped with discrete Newton algorithms which allow the nonlinear algebraic systems to be solved efficiently. Owing to the fact that the construction of the low-order scheme and the limitation of compensating antidiffusion are performed on the discrete level, there is no continuous counterpart which can be differentiated explicitly and used in standard Newton methods. As a remedy, divided differences are employed to construct an approximation to the Jacobian matrix which can be efficiently assembled edge-by-edge. The use of a node-based flux limiter may give rise to an extended sparsity pattern, which is determined a priori using standard results from classical graph theory.
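The discrete upwinding step described above is purely algebraic and can be sketched in a few lines. The following is a minimal illustration (not the thesis code): given a discrete transport operator K, symmetric artificial diffusion coefficients d_ij = max(0, -k_ij, -k_ji) are applied edge-by-edge, yielding a low-order operator L = K + D whose off-diagonal entries are non-negative, while D keeps zero row sums so that mass conservation is not violated.

```python
import numpy as np

def discrete_upwinding(K):
    """Build the artificial diffusion operator D and low-order operator L = K + D.

    For each node pair (i, j), d_ij = d_ji = max(0, -k_ij, -k_ji) eliminates
    negative off-diagonal entries; the diagonal of D is set so that D has zero
    row sums, which keeps the low-order scheme conservative.
    """
    n = K.shape[0]
    D = np.zeros_like(K)
    for i in range(n):
        for j in range(i + 1, n):            # loop over edges (i, j)
            d = max(0.0, -K[i, j], -K[j, i])
            D[i, j] = D[j, i] = d
            D[i, i] -= d                     # zero row sums -> conservation
            D[j, j] -= d
    return D, K + D

K = np.array([[ 1.0, -0.5,  0.2],
              [ 0.3,  0.8, -0.7],
              [-0.1,  0.4,  0.6]])
D, L = discrete_upwinding(K)
off_diagonal = L - np.diag(np.diag(L))
assert (off_diagonal >= 0).all()             # no negative off-diagonal entries
assert np.allclose(D.sum(axis=1), 0.0)       # D has zero row sums
```

The same pairwise loop structure is what makes the edge-by-edge matrix assembly mentioned above efficient in practice.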

arXiv (Cornell University), Feb 10, 2023
In recent years, quantum Boltzmann methods have gained more and more interest as they might provide a viable path towards solving fluid dynamics problems on quantum computers once this emerging compute technology has matured and fault-tolerant many-qubit systems become available. The major challenge in developing a start-to-end quantum algorithm for the Boltzmann equation consists in encoding relevant data efficiently in quantum bits (qubits) and formulating the streaming, collision and reflection steps as one comprehensive unitary operation. The current literature on quantum Boltzmann methods mostly proposes data encodings and quantum primitives for individual phases of the pipeline, assuming that they can be combined into a full algorithm. In this paper we disprove this assumption by showing that for encodings commonly discussed in the literature either the collision or the streaming step cannot be unitary. Building on this result we propose a novel encoding in which the number of qubits used to encode the velocity depends on the number of time steps one wishes to simulate, with the upper bound depending on the total number of grid points. In light of the non-unitarity result established for existing encodings, our encoding method is, to the best of our knowledge, the only one currently known that can be used for a start-to-end quantum Boltzmann solver in which both the collision and the streaming step are implemented as a unitary operation. Furthermore, our theoretical unitarity result can serve as a guideline on which types of encodings to consider, or whether a 'stop-and-go' method with repeated measurements and re-initializations is the method of choice.
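The tension between streaming and collision can be made concrete with a toy example that is not the paper's construction: on a one-dimensional periodic lattice, streaming is a cyclic shift of the populations and hence a permutation matrix, which is always unitary, whereas a BGK-style relaxation toward equilibrium is a contraction and generally not unitary. A hedged numerical sketch:

```python
import numpy as np

# Streaming on a 1D periodic lattice is a cyclic shift of the populations,
# i.e. a permutation matrix -- and every permutation matrix is unitary.
n = 8
S = np.roll(np.eye(n), 1, axis=0)
assert np.allclose(S @ S.T, np.eye(n))          # S is unitary (orthogonal)

# A BGK-style relaxation f -> (1 - w) f + w * f_eq pulls all states toward
# equilibrium; it is a contraction, not a unitary map, for 0 < w < 2.
w = 0.8
C = (1.0 - w) * np.eye(2) + w * np.full((2, 2), 0.5)  # toy 2-velocity collision
assert not np.allclose(C @ C.T, np.eye(2))      # C is not unitary
```

This is exactly the kind of structural mismatch the paper analyzes: an encoding that makes streaming a permutation may leave no room for a norm-contracting collision operator within a single unitary.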
arXiv (Cornell University), Jan 29, 2020
This paper proposes a shape optimization algorithm based on the principles of Isogeometric Analysis (IGA) in which the parameterization of the geometry enters the problem formulation as an additional PDE-constraint. Inspired by the isoparametric principle of IGA, the parameterization and the governing state equation are treated using the same numerical technique. This leads to a scheme that is comparatively easy to differentiate, allowing for a fully symbolic derivation of the gradient and subsequent gradient-based optimization. To improve the efficiency and robustness of the scheme, the basis is re-selected during each optimization iteration and adjusted to the current needs. The scheme is validated in two test cases.

ECMOR XV - 15th European Conference on the Mathematics of Oil Recovery, Aug 29, 2016
Many enhanced oil recovery (EOR) processes can be described using partial differential equations with parameters that are strongly non-linear functions of one or more of the state variables. Typically, these nonlinearities result in solution components changing several orders of magnitude over small spatial or temporal distances. The numerical simulation of such processes with the aid of finite volume or finite element techniques poses challenges. In particular, temporally oscillating state variable values are observed for realistic grid sizes when conventional discretization schemes are used. These oscillations, which do not represent a physical process but are discretization artifacts, hamper the use of the forward simulation model for optimization purposes. To analyze these problems, we study the dynamics of a simple foam model describing the interaction of water, gas and surfactants in a porous medium. It contains sharp gradients due to the formation of foam. The simplicity of the model allows us to gain a better understanding of the underlying processes and difficulties of the problem. The foam equations are discretized by a first-order finite volume method. Instead of using a finite volume method with a standard interpolation procedure, we opt for an integral average, which smooths out the discontinuity caused by foam generation. We introduce this method by applying it to the heat equation with discontinuous thermal conductivity. A similar technique is then applied to the foam model, reducing the oscillations drastically, but not removing them.
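The paper's integral-average interpolation is not reproduced here, but why the choice of interface average matters for discontinuous coefficients can be illustrated on the steady heat equation: with a two-point flux across a material interface, the harmonic (integral-type) mean of the adjacent conductivities reproduces the exact flux, while the naive arithmetic mean overestimates it by orders of magnitude. A minimal sketch with illustrative values:

```python
import numpy as np

# Steady 1D conduction through two slabs of equal width (0.5 each) with
# conductivities k1, k2 and boundary temperatures T(0) = 1, T(1) = 0.
# The exact flux follows from the series thermal resistance.
k1, k2 = 1.0, 100.0
q_exact = (1.0 - 0.0) / (0.5 / k1 + 0.5 / k2)

k_arith = 0.5 * (k1 + k2)                 # arithmetic face average
k_harm = 2.0 / (1.0 / k1 + 1.0 / k2)      # harmonic face average

q_arith = k_arith * (1.0 - 0.0) / 1.0     # flux predicted with each average
q_harm = k_harm * (1.0 - 0.0) / 1.0

assert np.isclose(q_harm, q_exact)        # harmonic mean recovers the exact flux
assert q_arith / q_exact > 25             # arithmetic mean grossly overshoots
```

The averaging operator in the paper differs in its details, but the sketch shows the mechanism by which a suitable average smooths the effect of a coefficient discontinuity on the discrete flux.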

Computer Methods in Applied Mechanics and Engineering, Apr 1, 2020
This paper presents a novel spline-based meshing technique that allows boundary-conforming meshes to be used for unsteady flow and temperature simulations in co-rotating twin-screw extruders. Spline-based descriptions of arbitrary screw geometries are generated using Elliptic Grid Generation. They are evaluated at a number of discrete points to yield a coarse classical mesh. The use of a special control mapping makes it possible to fine-tune properties of the coarse mesh, such as orthogonality at the boundaries. The coarse mesh is used as a 'scaffolding' to generate a boundary-conforming mesh out of a fine background mesh at run-time. Storing only a coarse mesh makes the method cheap in terms of memory, and the adaptation at run-time is extremely cheap compared to computing the flow solution. Furthermore, this method circumvents the need for expensive re-meshing and projection of solutions, making it efficient and accurate. It is incorporated into a space-time finite element framework. We present time-dependent test cases of non-Newtonian fluids in 2D and 3D for complex screw designs, which demonstrate the potential of the method for arbitrarily complex industrial applications.

Journal of Computational Physics, Apr 1, 2018
A generic particle-mesh method using a hybridized discontinuous Galerkin (HDG) framework is presented and validated for the solution of the incompressible Navier-Stokes equations. Building upon particle-in-cell concepts, the method is formulated in terms of an operator splitting technique in which Lagrangian particles are used to discretize an advection operator, and an Eulerian mesh-based HDG method is employed for the constitutive modeling to account for the inter-particle interactions. Key to the method is the variational framework provided by the HDG method, which makes it possible to formulate the projections between the Lagrangian particle space and the Eulerian finite element space efficiently in terms of local (i.e. cellwise) ℓ²-projections. Furthermore, exploiting the HDG framework for solving the constitutive equations results in velocity fields which satisfy the incompressibility constraint to high accuracy in a local sense. By advecting the particles through these velocity fields, the particle distribution remains uniform over time, obviating the need for additional quality control. The presented methodology allows for a straightforward extension to arbitrary-order spatial accuracy on general meshes. A range of numerical examples shows that optimal convergence rates are obtained in space and, given the particular time stepping strategy, second-order accuracy is obtained in time. The model capabilities are further demonstrated by presenting results for the flow over a backward-facing step and for the flow around a cylinder.
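The cellwise ℓ²-projections between particle and mesh spaces amount to small local least-squares problems. The following is a schematic stand-in (one cell, a monomial basis, and a hypothetical function name), not the paper's HDG implementation:

```python
import numpy as np

def local_l2_projection(xp, up, degree=2):
    """Cellwise least-squares (discrete l2-) projection of particle data
    onto a monomial basis -- a toy stand-in for projecting Lagrangian
    particle values onto a local Eulerian finite element space."""
    V = np.vander(xp, degree + 1)                  # basis evaluated at particles
    coef, *_ = np.linalg.lstsq(V, up, rcond=None)  # solve the local LS problem
    return np.poly1d(coef)

rng = np.random.default_rng(0)
xp = rng.uniform(0.0, 1.0, 50)         # particle positions within one cell
up = 1.0 + 2.0 * xp + 3.0 * xp**2      # particle values sampled from a quadratic
uh = local_l2_projection(xp, up)
# The projection reproduces polynomial data up to the basis degree exactly.
assert np.allclose(uh(0.3), 1.0 + 2.0 * 0.3 + 3.0 * 0.3**2)
```

Because each cell's projection is independent, the cost stays local and the operation parallelizes trivially, which is one reason the variational HDG setting is attractive for particle-mesh transfer.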

Zenodo (CERN European Organization for Nuclear Research), May 18, 2022
Triangulated meshes discretized from commercial CAD applications often possess a considerable level of complexity. When conducting external aerodynamics simulations at an early design stage, these meshes are far too detailed, containing small-scale features and topological holes. We propose a practical and fast algorithm to shrink-wrap triangulated surfaces with the sole intent of topology and surface simplification. Building upon the concepts of mathematical morphology and recent advancements in geometry processing, such as generalized winding numbers, we show that it is possible to build a straightforward and robust algorithm that can guarantee genus-zero surfaces. Our approach uses a Cartesian background mesh (fixed or adaptive) to approximate an input triangulated surface's interior and exterior volume, with an octree data structure for adaptive mesh refinement. Although we demonstrate our algorithms exclusively on triangulated meshes, they are equally applicable to general polyhedral meshes and are well suited to handling point clouds (oriented and unoriented); we include examples with unoriented point clouds. We built our algorithms with a wide variety of applications in mind, but showcase their applicability to aerodynamic simulations, fluid volume extraction, and surface simplification. We emphasize the practicality and ease of implementation of the proposed algorithms and compare them with the existing literature.
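The generalized winding number used for inside/outside classification can be computed by summing signed solid angles of the mesh triangles as seen from the query point. A self-contained sketch (a unit tetrahedron stands in for a CAD surface; a production implementation would use hierarchical acceleration rather than this direct sum):

```python
import numpy as np

def solid_angle(a, b, c):
    """Signed solid angle subtended by triangle (a, b, c) at the origin
    (Van Oosterom-Strackee formula)."""
    la, lb, lc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
    num = np.dot(a, np.cross(b, c))
    den = la * lb * lc + np.dot(a, b) * lc + np.dot(b, c) * la + np.dot(c, a) * lb
    return 2.0 * np.arctan2(num, den)

def winding_number(p, verts, faces):
    """Generalized winding number of point p w.r.t. a triangle mesh:
    ~1 inside a closed, outward-oriented surface, ~0 outside."""
    return sum(solid_angle(verts[i] - p, verts[j] - p, verts[k] - p)
               for i, j, k in faces) / (4.0 * np.pi)

# Unit tetrahedron with outward-oriented faces
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
assert round(winding_number(np.array([0.1, 0.1, 0.1]), verts, faces)) == 1  # inside
assert round(winding_number(np.array([2.0, 2.0, 2.0]), verts, faces)) == 0  # outside
```

Its robustness to holes and inconsistent orientation (the value degrades gracefully rather than flipping) is what makes the winding number attractive for classifying background-mesh cells against imperfect CAD triangulations.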

arXiv (Cornell University), Sep 28, 2018
Isogeometric analysis was applied very successfully to many problem classes like linear elasticity, heat transfer and incompressible flow problems, but its application to compressible flows is very rare. However, its ability to accurately represent complex geometries used in industrial applications makes IGA a suitable tool for the analysis of compressible flow problems that require the accurate resolution of boundary layers. The convection-diffusion solver presented in this chapter is an indispensable step on the way to developing a compressible solver for complex viscous industrial flows. It is well known that the standard Galerkin finite element method and its isogeometric counterpart suffer from spurious oscillatory behaviour in the presence of shocks and steep solution gradients. As a remedy, the algebraic flux correction paradigm is generalized to B-spline basis functions to suppress the creation of oscillations and the occurrence of non-physical values in the solution. This work provides early results for scalar conservation laws and lays the foundation for extending this approach to the compressible Euler equations.

arXiv (Cornell University), Jul 12, 2021
Isogeometric Analysis (IgA) has become a viable alternative to the Finite Element Method (FEM) and is typically combined with a time integration scheme within the method of lines for time-dependent problems. However, due to the stagnation of processors' clock speeds, traditional (i.e. sequential) time integration schemes are increasingly becoming the bottleneck within these large-scale computations, which has led to the development of parallel-in-time methods like the Multigrid Reduced in Time (MGRIT) method. Recently, MGRIT has been successfully applied by the authors in the context of IgA, showing convergence independent of the mesh width, the approximation order of the B-spline basis functions and the time step size for a variety of benchmark problems. However, a strong dependency of the CPU times on the approximation order was visible when a standard Conjugate Gradient method was adopted for the spatial solves within MGRIT. In this paper we combine MGRIT with a state-of-the-art solver (i.e. a p-multigrid method [3]), specifically designed for IgA, thereby significantly reducing the overall computational costs of MGRIT. Furthermore, we investigate the performance of MGRIT and its scalability on modern computer architectures.
arXiv (Cornell University), Jan 7, 2019
Over the years, Isogeometric Analysis has proven to be a successful alternative to the Finite Element Method (FEM). However, solving the resulting linear systems of equations efficiently remains a challenging task. In this paper, we consider a p-multigrid method, in which coarsening is applied in the approximation order p instead of the mesh width h. Since the use of classical smoothers (e.g. Gauss-Seidel) results in a p-multigrid method with deteriorating performance for higher values of p, the use of an ILUT smoother is investigated. Numerical results and a spectral analysis indicate that the resulting p-multigrid method exhibits convergence rates independent of h and p. In particular, we compare both coarsening strategies (i.e. coarsening in h or p), adopting both smoothers, for a variety of two- and three-dimensional benchmarks.
arXiv (Cornell University), 2020
This work extends the high-resolution isogeometric analysis approach established in earlier work to the equations of gas dynamics. The group finite element formulation is adopted to obtain an efficient assembly procedure for the standard Galerkin approximation, which is stabilized by adding artificial viscosities proportional to the spectral radius of the Roe-averaged flux-Jacobian matrix. Excess stabilization is removed in regions with smooth flow profiles with the aid of algebraic flux correction. The underlying principles are reviewed and it is shown that linearized FCT-type flux limiting [3], originally derived for nodal low-order finite elements, ensures positivity preservation for high-order B-spline discretizations.
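The flux limiting referred to above follows Zalesak's recipe: nodal sums of positive and negative antidiffusive fluxes are compared against local solution bounds to obtain edge correction factors. The following is a schematic scalar version (hypothetical function name, simplified neighbor-based bounds), not the paper's implementation:

```python
import numpy as np

def zalesak_limit(u_low, fluxes, neighbors):
    """Compute edge correction factors alpha_ij in [0, 1] so that adding
    the limited antidiffusive fluxes cannot create new local extrema."""
    n = len(u_low)
    Pp, Pm = np.zeros(n), np.zeros(n)      # sums of positive/negative fluxes
    for (i, j), f in fluxes.items():       # convention: f_ji = -f_ij
        Pp[i] += max(f, 0.0); Pm[i] += min(f, 0.0)
        Pp[j] += max(-f, 0.0); Pm[j] += min(-f, 0.0)
    umax = np.array([max(u_low[k] for k in [i] + neighbors[i]) for i in range(n)])
    umin = np.array([min(u_low[k] for k in [i] + neighbors[i]) for i in range(n)])
    Rp = np.where(Pp > 0, np.minimum(1.0, (umax - u_low) / np.where(Pp > 0, Pp, 1.0)), 1.0)
    Rm = np.where(Pm < 0, np.minimum(1.0, (umin - u_low) / np.where(Pm < 0, Pm, -1.0)), 1.0)
    return {(i, j): (min(Rp[i], Rm[j]) if f >= 0 else min(Rm[i], Rp[j]))
            for (i, j), f in fluxes.items()}

u_low = np.array([0.0, 1.0, 0.0])          # bound-preserving low-order solution
fluxes = {(0, 1): 0.5, (1, 2): -0.5}       # raw antidiffusive fluxes
neighbors = {0: [1], 1: [0, 2], 2: [1]}
alpha = zalesak_limit(u_low, fluxes, neighbors)

u = u_low.copy()                           # apply the limited antidiffusion
for (i, j), f in fluxes.items():
    u[i] += alpha[i, j] * f
    u[j] -= alpha[i, j] * f
assert u.min() >= 0.0 and u.max() <= 1.0   # local bounds are preserved
```

The linearized FCT variant mentioned in the abstract evaluates the fluxes around an end-of-step predictor so the limiter need only be applied once per step; the bound-checking structure is the same.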

arXiv (Cornell University), Mar 31, 2021
In this paper, we present QPack, a universal benchmark for Noisy Intermediate-Scale Quantum (NISQ) computers based on Quantum Approximate Optimization Algorithms (QAOA). Unlike other evaluation metrics in the field, this benchmark evaluates not just one but multiple important aspects of quantum computing hardware: the maximum problem size a quantum computer can solve, the required runtime, as well as the achieved accuracy. The applications MaxCut, dominating set and traveling salesman are included to provide variation in resource requirements, allowing for a diverse benchmark that promotes optimal design considerations and avoids hardware implementations tailored to specific applications. We also discuss the design aspects that are taken into consideration for the QPack benchmark, with critical quantum benchmark requirements in mind. An implementation is presented, providing practical metrics. QPack is presented as a hardware-agnostic benchmark by making use of the XACC library. We demonstrate the application of the benchmark on various IBM machines, as well as a range of simulators.

Springer eBooks, Aug 10, 2020
Isogeometric Analysis can be considered as the natural extension of the Finite Element Method (FEM) to higher-order spline-based discretizations, simplifying the treatment of complex geometries with curved boundaries. Finding a solution of the resulting linear systems of equations efficiently remains, however, a challenging task. Recently, p-multigrid methods have been considered, in which a multigrid hierarchy is constructed based on different approximation orders p instead of mesh widths h, as is the case in classical h-multigrid schemes. The use of an incomplete LU factorization as a smoother within the p-multigrid method has been shown to lead to convergence rates independent of both h and p for single-patch geometries. In this paper, the focus lies on the application of the aforementioned p-multigrid method to multipatch geometries having a C⁰-continuous coupling between the patches. The use of ILUT as a smoother within p-multigrid methods leads to convergence rates that are essentially independent of h and p, but depend mildly on the number of patches.
The flux-corrected transport (FCT) methodology is generalized to implicit finite element schemes and applied to the Euler equations of gas dynamics. The underlying low-order scheme is constructed by applying scalar artificial viscosity proportional to the spectral radius of the cumulative Roe matrix. All conservative matrix manipulations are performed edge-by-edge, which leads to an efficient algorithm for the matrix assembly. The outer defect correction loop is equipped with a block-diagonal preconditioner so as to decouple the discretized Euler equations and solve all equations individually. As an alternative, a strongly coupled solution strategy is investigated in the context of stationary problems which call for large time steps.

Computational particle mechanics, Mar 26, 2020
The Material Point Method (MPM) is a numerical technique that combines a fixed Eulerian background grid and Lagrangian point masses to simulate materials which undergo large deformations. Within the original MPM, discontinuous gradients of the piecewise-linear basis functions lead to so-called grid-crossing errors when particles cross element boundaries. Previous research has shown that B-spline MPM (BSMPM) is a viable alternative not only to MPM, but also to more advanced versions of the method that are designed to reduce grid-crossing errors. In contrast to many other MPM-related methods, BSMPM has been used exclusively on structured rectangular domains, considerably limiting its range of applicability. In this paper, we present an extension of BSMPM to unstructured triangulations. The proposed approach combines MPM with C¹-continuous high-order Powell-Sabin spline basis functions. Numerical results demonstrate the potential of these basis functions within MPM in terms of grid-crossing-error elimination and higher-order convergence. Keywords: Material Point Method · B-splines · Powell-Sabin splines · Unstructured grids · Grid-crossing error
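The grid-crossing error stems from the discontinuous gradients of piecewise-linear basis functions; spline bases such as B-splines (and, on triangulations, Powell-Sabin splines) avoid it because their gradients are continuous across cell boundaries. A one-dimensional sketch (uniform knots, illustrative only) contrasts the two:

```python
import numpy as np

def hat_grad(x):
    """Gradient of the piecewise-linear hat function on nodes {-1, 0, 1}:
    jumps from +1 to -1 at x = 0 (the source of grid-crossing errors)."""
    return np.where((x > -1) & (x < 0), 1.0,
                    np.where((x > 0) & (x < 1), -1.0, 0.0))

def bspline2_grad(x):
    """Gradient of the quadratic B-spline on knots {-1.5, -0.5, 0.5, 1.5}:
    continuous (piecewise linear) everywhere."""
    return np.where((x >= -1.5) & (x < -0.5), x + 1.5,
                    np.where((x >= -0.5) & (x < 0.5), -2.0 * x,
                             np.where((x >= 0.5) & (x <= 1.5), x - 1.5, 0.0)))

eps = 1e-9
jump_hat = abs(hat_grad(-eps) - hat_grad(eps))            # O(1) jump at the node
jump_bsp = abs(bspline2_grad(-0.5 - eps) - bspline2_grad(-0.5 + eps))
assert jump_hat == 2.0        # hat gradient jumps across the node
assert jump_bsp < 1e-6        # B-spline gradient is continuous there
```

In MPM, internal forces are assembled from basis-function gradients evaluated at particle positions, so a particle crossing a cell boundary sees an O(1) force jump with hat functions but a smooth variation with the spline basis.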

Ocean Dynamics, Dec 8, 2015
In this paper, a three-dimensional semi-idealized model for tidal motion in a tidal estuary of arbitrary shape and bathymetry is presented. This model aims at bridging the gap between idealized and complex models. The vertical profiles of the velocities are obtained analytically in terms of the first-order and second-order partial derivatives of the surface elevation, which itself follows from an elliptic partial differential equation. The surface elevation is computed numerically using the finite element method and its partial derivatives are obtained using various methods. The newly developed semi-idealized model allows for a systematic investigation of the influence of geometry and bathymetry on the tidal motion, which was not possible in previously developed idealized models. The new model also retains the flexibility and computational efficiency of previous idealized models, essential for sensitivity analysis. As a first step, the accuracy of the semi-idealized model is investigated. To this end, an extensive comparison is made between the results of the semi-idealized model and two other idealized models: a width-averaged model and a three-dimensional idealized model.

arXiv (Cornell University), May 24, 2022
This paper presents the benchmark score definitions of QPack, an application-oriented cross-platform benchmarking suite for quantum computers and simulators, which makes use of scalable Quantum Approximate Optimization Algorithm and Variational Quantum Eigensolver applications. Using a varied set of benchmark applications, quantitative insight can be gained into how well a quantum computer or its simulator performs on a general NISQ-era application. This paper presents what quantum execution data can be collected and transformed into benchmark scores for application-oriented quantum benchmarking. Definitions are given for an overall benchmark score, as well as sub-scores based on runtime, accuracy, scalability and capacity performance. Using these scores, a comparison is made between various quantum computer simulators, running both locally and on vendors' remote cloud services. We also use the QPack benchmark to collect a small set of quantum execution data of the IBMQ Nairobi quantum processor. The goal of the QPack benchmark scores is to give a holistic insight into quantum performance and the ability to make easy and quick comparisons between different quantum computers.

Frontiers in Mechanical Engineering
Structural mechanics is commonly modeled by (systems of) partial differential equations (PDEs). Except for very simple cases where analytical solutions exist, the use of numerical methods is required to find approximate solutions. However, for many problems of practical interest, the computational cost of classical numerical solvers running on classical, that is, silicon-based computer hardware becomes prohibitive. Quantum computing, though still in its infancy, holds the promise of enabling a new generation of algorithms that can execute the most cost-demanding parts of PDE solvers up to exponentially faster than classical methods, at least theoretically. Moreover, increasing research on and availability of quantum computing hardware spurs the hope of scientists and engineers to start using quantum computers for solving PDE problems much faster than classically possible. This work reviews the contributions that deal with the application of quantum algorithms to solve PDEs in structural mechanics.
83rd EAGE Annual Conference & Exhibition
Quantum computing could be a potential game-changer in industry sectors relying on the efficient solution of large-scale global optimization problems. Exploration geoscience is full of optimization problems and hence is a good candidate for the application of quantum computing. It was recently suggested that quantum annealing, a form of adiabatic quantum computing, is a much better suited quantum computing platform for optimization problems than gate-based quantum computing. In this work, we show how the residual statics estimation problem can be solved on a quantum annealer and present our first results obtained on a quantum computer.