University of Twente
Applied Mathematics
We present a framework for the computational assessment and comparison of large-eddy simulation methods. We apply this to large-eddy simulation of homogeneous isotropic decaying turbulence using a Smagorinsky subgrid model and investigate the combined effect of discretization and model errors at coarse subgrid resolutions. We compare four different central finite-volume methods. These discretization methods arise from the four possible combinations that can be made with a second-order and a fourth-order central scheme for either the convective or the viscous fluxes. By systematically varying the simulation resolution and the Smagorinsky coefficient, we determine parameter regions for which a desired number of flow properties is simultaneously predicted with approximately minimal error. We include both physics-based and mathematics-based error definitions, leading to different error measures designed to emphasize either errors in large- or in small-scale flow properties. It is shown that the evaluation of simulations based on a single physics-based error may lead to inaccurate perceptions of quality. We demonstrate, however, that evaluations based on a range of errors yield robust conclusions on accuracy, both for physics-based and mathematics-based errors. Parameter regions where all considered errors are simultaneously near-optimal are referred to as 'multi-objective optimal' parameter regions. The effects of discretization errors are particularly important at marginal spatial resolution. Such resolutions reflect local simulation conditions that may also be found in parts of more complex flow simulations. Under these circumstances, the asymptotic error behavior as expressed by the order of the spatial discretization is no longer characteristic for the total dynamic consequences of discretization errors.
We find that the levels of overall simulation error for a second-order central discretization of both the convective and viscous fluxes (the '2-2' method) and for the fully fourth-order ('4-4') method are equivalent in their respective 'multi-objective optimal' regions. Mixed-order methods, i.e. the '2-4' and '4-2' combinations, yield considerably higher errors.
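The 'multi-objective optimal' search described above can be sketched as a simple parameter scan. The error arrays, property names, and tolerance factor below are illustrative assumptions, not data from the study:

```python
import numpy as np

# Hypothetical error landscape: one 2-D array of total errors per
# monitored flow property, indexed by (Smagorinsky coefficient, resolution).
# The values are random placeholders, not results from the simulations.
Cs = np.linspace(0.0, 0.2, 5)
N = np.array([32, 48, 64])
rng = np.random.default_rng(0)
errors = {name: rng.uniform(0.01, 1.0, size=(Cs.size, N.size))
          for name in ("energy", "enstrophy", "skewness")}

def multi_objective_optimal(errors, tol=1.5):
    """Boolean mask of (Cs, N) points where every monitored error is
    within a factor `tol` of that error's own minimum over the scan."""
    mask = np.ones(next(iter(errors.values())).shape, dtype=bool)
    for err in errors.values():
        mask &= err <= tol * err.min()
    return mask

optimal_region = multi_objective_optimal(errors)
```

A point survives only if it is near-optimal for all properties at once, which is why such regions can vanish at coarse resolution.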
A block-structured compressible flow solver based on a finite-volume approach with central spatial differencing is described, and its performance in 2D on flow around an airfoil is studied. Variations in the number and dimensions of the blocks influence neither the convergence behavior nor the solution, irrespective of the relative positions of a possible shock and the block interfaces. Mixed calculations, in which the governing equations, either Euler or Reynolds-averaged Navier-Stokes, differ per block, give accurate results provided the Euler blocks are defined outside the boundary layer and/or in the far-field wake region. Likewise, extensive grid distortions near block interfaces can be allowed for outside the boundary layer. Finally, an unbalanced advancement in time, in which each block is advanced independently over several time steps, gives no serious decrease in convergence rate.
Mathematical regularisation of the nonlinear terms in the Navier-Stokes equations provides a systematic approach to deriving subgrid closures for numerical simulations of turbulent flow. By construction, these subgrid closures imply existence and uniqueness of strong solutions to the corresponding modelled system of equations. We will consider the large eddy interpretation of two such mathematical regularisation principles, i.e., Leray and LANS−α regularisation. The Leray principle introduces a smoothed transport velocity as part of the regularised convective nonlinearity. The LANS−α principle extends the Leray formulation in a natural way in which a filtered Kelvin circulation theorem, incorporating the smoothed transport velocity, is explicitly satisfied. These regularisation principles give rise to implied subgrid closures which will be applied in large eddy simulation of turbulent mixing. Comparison with filtered direct numerical simulation data, and with predictions obtained from popular dynamic eddy-viscosity modelling, shows that these mathematical regularisation models are considerably more accurate, at a lower computational cost. Particularly, the capturing of flow features characteristic of the smaller resolved scales improves significantly. Variations in spatial resolution and Reynolds number establish that the Leray model is more robust but also slightly less accurate than the LANS−α model. The LANS−α model retains more of the small-scale variability in the resolved solution. This requires a corresponding increase in the required spatial resolution. When using second order finite volume discretisation, the potential accuracy of the implied LANS−α model is found to be realized by using a grid spacing that is not larger than the length scale α that appears in the definition of this model.
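In their commonly quoted forms (a sketch; the notation may differ from the paper), the two regularized systems read:

```latex
% Leray: the convecting velocity is smoothed, \bar{u} = L[u]
\partial_t u + (\bar{u}\cdot\nabla)\,u + \nabla p = \nu \nabla^2 u,
\qquad \nabla\cdot u = 0.

% LANS-\alpha: with u = (1-\alpha^2\nabla^2)^{-1} v the smoothed velocity,
\partial_t v + (u\cdot\nabla)\,v + (\nabla u)^{T}\!\cdot v + \nabla p
  = \nu \nabla^2 v, \qquad \nabla\cdot u = 0.
```

The extra term $(\nabla u)^{T}\cdot v$ in the LANS-$\alpha$ system is what restores the filtered Kelvin circulation theorem mentioned above.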
Inviscid regularization modeling of turbulent flow is investigated. Homogeneous, isotropic, decaying turbulence is simulated at a range of filter widths. A coarse-graining of turbulent flow arises from the direct regularization of the convective nonlinearity in the Navier-Stokes equations. The regularization is translated into its corresponding sub-filter model to close the equations for large-eddy simulation (LES). The accuracy with which primary turbulent flow features are captured by this modeling is investigated for the Leray regularization, the Navier-Stokes-α formulation (NS-α), the simplified Bardina model and a modified Leray approach. On a PDE level, each regularization principle is known to possess a unique, strong solution with known regularity properties. When used as turbulence closure for numerical simulations, significant differences between these models are observed. Through a comparison with direct numerical simulation (DNS) results, a detailed assessment of these regularization principles is made. The regularization models retain much of the small-scale variability in the solution. The smaller resolved scales are dominated by the specific sub-filter model adopted. We find that the Leray model is in general closest to the filtered DNS results, the modified Leray model is found least accurate and the simplified Bardina and NS-α models are in between, as far as accuracy is concerned. This rough ordering is based on the energy decay, the Taylor Reynolds number and the velocity skewness, and on detailed characteristics of the energy dynamics, including spectra of the energy, the energy transfer and the transfer power.
A database of decaying homogeneous, isotropic turbulence is constructed including reference direct numerical simulations at two different Reynolds numbers and a large number of corresponding large-eddy simulations at various subgrid resolutions. Errors in large-eddy simulation as a function of physical and numerical parameters are investigated. In particular, employing the Smagorinsky subgrid parametrization, the dependence of modeling and numerical errors on simulation parameters is quantified. The interaction between these two basic sources of error is shown to lead to their partial cancellation for several flow properties. This leads to a central paradox in large-eddy simulation related to possible strategies that can be followed to improve the accuracy of predictions. Moreover, a framework is presented in which the global parameter dependence of the errors can be classified in terms of the ''subgrid activity'' which measures the ratio of the turbulent to the total dissipation rate. Such an analysis allows one to quantify refinement strategies and associated model parameters which provide optimal total simulation error at given computational cost.
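The 'subgrid activity' parameter described above can be written down directly. A minimal sketch, with illustrative function names and values:

```python
def subgrid_activity(eps_turb, eps_mol):
    """Subgrid activity s = eps_turb / (eps_turb + eps_mol): the ratio of
    the turbulent (subgrid) dissipation rate to the total dissipation.
    s = 0 corresponds to DNS (no model contribution); s -> 1 is the
    limit in which the subgrid model carries all the dissipation."""
    total = eps_turb + eps_mol
    if total <= 0.0:
        raise ValueError("total dissipation rate must be positive")
    return eps_turb / total

s = subgrid_activity(eps_turb=0.3, eps_mol=0.1)  # -> 0.75
```

Classifying simulations by s rather than by raw resolution is what allows the parameter dependence of the errors to collapse into a common framework.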
A methodology is proposed for the assessment of error dynamics in large-eddy simulations. It is demonstrated that the optimization of model parameters with respect to one flow property can be obtained at the expense of the accuracy with which other flow properties are predicted. Therefore, an approach is introduced which allows one to assess the total errors based on various flow properties simultaneously. We show that parameter settings exist for which all monitored errors are "near optimal," and refer to such regions as "multi-objective optimal parameter regions." We focus on multi-objective errors that are obtained from weighted spectra, emphasizing both large- as well as small-scale errors. These multi-objective optimal parameter regions depend strongly on the simulation Reynolds number and the resolution. At too coarse a resolution, no multi-objective optimal region may exist, as not all error components can simultaneously be made sufficiently small. The identification of multi-objective optimal parameter regions can be adopted to effectively compare different subgrid models. This is illustrated by a comparison between large-eddy simulations using the Lilly-Smagorinsky model, the dynamic Smagorinsky model and a new Re-consistent eddy-viscosity model. Based on the new methodology for error assessment, the latter model is found to be the most accurate and robust among the selected subgrid models, in combination with the finite-volume discretization used in the present study.
We present a database analysis to obtain a precise evaluation of the accuracy limitations associated with the popular dynamic eddy-viscosity model in large-eddy simulation. We consider decaying homogeneous isotropic turbulence at two different Reynolds numbers, i.e., Re = 50 and 100. The large-eddy simulation errors associated with the dynamic model are compared with those arising in the "static" Smagorinsky model. A large number of systematically varied simulations using the Smagorinsky model provides a detailed impression of the dependence of the total simulation error on (i) the spatial resolution and (ii) the resolution of the subgrid dissipation length. This error behavior also induces an "optimal refinement trajectory," which specifies the particular Smagorinsky parameter, in terms of the spatial resolution, for which the total error is minimal. In contrast, the dynamic model gives rise to a self-consistently determined "dynamic trajectory" that represents the dependence of the dynamic coefficient on the spatial resolution. This dynamic trajectory is compared with the optimal refinement trajectory as obtained from the full database analysis of the Smagorinsky fluid. It is shown that the dynamic procedure, in which the top-hat test filter is adopted, predicts values for the eddy viscosity as a function of resolution and Reynolds number which quite closely follow the main trends established in the optimal refinement trajectory. Furthermore, a sensitivity analysis, including the dependence on test-filter width and filter shape, is discussed. Total simulation errors, due to interacting discretization and modeling errors associated with the dynamic procedure, may be a factor of two higher than the optimum; still, the dynamic procedure represents one of the very few self-contained and efficient error-reduction strategies when increasing the spatial resolution.
- by Bernard Geurts and +1
- Engineering, Large Eddy Simulation
We present an immersed boundary method based on volume penalization, with which pulsatile flow in a model cerebral aneurysm is simulated. The model aneurysm consists of a curved vessel merged with a spherical cavity. The dominant vortical structures arising in the time-dependent flow are discussed and the evolution of the maximal shear stress in the aneurysm is analyzed. We approximate flow properties of blood by those of an incompressible Newtonian fluid. The flow inside the aneurysm is simulated with the use of a skew-symmetric finite-volume discretization and explicit time-stepping. We focus on effects due to variations in the amplitude of the pulsatile flow as well as due to changes in the Reynolds number (Re) by studying flow at Re = 100, 250 and 500. At Re = 500 a complex time-dependence in the shear stress levels is observed, reflecting the lively development of the flow in the model aneurysm in which vortices are created continuously inside the curved vessel and in the spherical cavity of the aneurysm. An increase in the amplitude of the pulsatile flow increases the shear stress levels somewhat, but at Re = 500 the flow is mainly dominated by its intrinsic unsteadiness. Reducing the Reynolds number yields a stronger contribution of the periodic pulsatile flow forcing: at Re = 100 we find a strong dominance of shear stress levels due to the forcing, while at Re = 250 the intrinsic and pulsatile unsteadiness are of comparable importance.
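The volume-penalization idea can be sketched in a few lines. The mask, the permeability parameter eta, and the sample values below are illustrative assumptions, not the solver's actual settings:

```python
import numpy as np

def penalization_force(u, chi, eta=1e-4, u_solid=0.0):
    """Forcing term -(chi/eta) * (u - u_solid) added to the momentum
    equation: chi = 1 marks solid (e.g. the vessel wall), chi = 0 fluid,
    and the small permeability eta drives the velocity in solid regions
    toward the wall value."""
    return -(chi / eta) * (u - u_solid)

u = np.array([0.0, 0.5, 1.0])     # velocity samples along a line
chi = np.array([1.0, 0.0, 1.0])   # solid, fluid, solid
f = penalization_force(u, chi)
```

In the fluid (chi = 0) the forcing vanishes; in the solid it acts as a stiff relaxation toward u_solid, which is what lets a complex geometry live on a simple Cartesian grid.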
- by Bernard Geurts and +1
- Technology, Incompressible Flow, Cerebral Aneurysm
The Clark model for the turbulent stress tensor in large-eddy simulation is investigated from a theoretical and computational point of view. In order to be applicable to compressible turbulent flows, the Clark model has been reformulated. Actual large-eddy simulation of a weakly compressible, turbulent, temporal mixing layer shows that the eddy-viscosity part of the original Clark model gives rise to an excessive dissipation of energy in the transitional regime. On the other hand, the model gives rise to instabilities if the eddy-viscosity part is omitted and only the "gradient" part is retained. A linear stability analysis of the Burgers equation supplemented with the Clark model is performed in order to clarify the nature of the instability. It is shown that the growth rate of the instability is infinite in the inviscid limit and that sufficient (eddy-)viscosity can stabilize the model. A model which avoids both the excessive dissipation of the original Clark model and the instability of the "gradient" part is obtained when the dynamic procedure is applied to the Clark model. Large-eddy simulation using this new dynamic Clark model is found to yield satisfactory results when compared with a filtered direct numerical simulation. Compared with the standard dynamic eddy-viscosity model, the dynamic Clark model yields more accurate predictions, whereas compared with the dynamic mixed model the new model provides equal accuracy at a lower computational effort.
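The "gradient" part of the Clark model has a simple closed form, tau_ij ≈ (Δ²/12) ∂u_i/∂x_k ∂u_j/∂x_k. A sketch for velocity gradients stored on a grid; the array layout and filter width are assumptions for illustration:

```python
import numpy as np

def clark_gradient_stress(grad_u, delta):
    """Gradient ("Clark") part of the subgrid stress:
        tau[i, j, ...] = (delta**2 / 12) * du_i/dx_k * du_j/dx_k,
    summed over k. grad_u[i, k, ...] holds du_i/dx_k at each grid point."""
    return (delta**2 / 12.0) * np.einsum('ik...,jk...->ij...',
                                         grad_u, grad_u)

# Toy field: 3 velocity components, 3 directions, 4 grid points,
# with a uniform shear du/dy = 2 as the only nonzero gradient.
grad_u = np.zeros((3, 3, 4))
grad_u[0, 1, :] = 2.0
tau = clark_gradient_stress(grad_u, delta=0.1)
```

Note that this term is not dissipative by construction, which is consistent with the instability discussed above when it is used without an eddy-viscosity contribution.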
The development of computational resources and the corresponding tendency to apply LES methodologies to turbulent flow problems of significant complexity, such as arise in various applications in technology and in many natural flows, make the issue of assessing and optimizing the quality of LES predictions a timely challenge. Different error sources are present in LES, mainly related to physical modeling (especially as regards subgrid scales), numerical discretization techniques, boundary-condition treatment, and grid resolution and design. These errors may interact in a complex non-linear manner, eventually leading to unpredictable and unexpected effects on LES results.
The Rayleigh-Bénard (RB) system is relevant to astro- and geophysical phenomena, including convection in the ocean, the Earth's outer core, and the outer layer of the Sun. The dimensionless heat transfer (the Nusselt number Nu) in the system depends on the Rayleigh number Ra = βgΔL³/(νκ) and the Prandtl number Pr = ν/κ. Here, β is the thermal expansion coefficient, g the gravitational acceleration, Δ the temperature difference between the bottom and top, and ν and κ the kinematic viscosity and the thermal diffusivity, respectively. The rotation rate H enters through the Rossby number Ro = √(βgΔ/L)/(2H). The key question is: how does the heat transfer depend on rotation and the other two control parameters, Nu(Ra, Pr, Ro)? Here we answer this question by summarizing our results presented in [1, 2, 3].
- by Bernard Geurts and +1
A large-eddy simulation database of homogeneous isotropic decaying turbulence is used to assess four different LES quality measures that have been proposed in the literature. The Smagorinsky subgrid model was adopted and the eddy-viscosity 'parameter' C_S and the grid spacing h were varied systematically. It is shown that two methods qualitatively predict the basic features of an error landscape including an optimal refinement trajectory. These methods are based on variants of Richardson extrapolation and assume that the numerical error and the modelling error scale with a power of the mesh size. Hence they require the combination of simulations on several grids. The results illustrate that an approximate optimal refinement strategy can be constructed based on LES output only, without the need for DNS data. Comparison with the full error landscape shows the suitability of the different methods in the error assessment for homogeneous turbulence. The ratio of the estimated turbulent kinetic energy error and the 'true' turbulent kinetic energy error calculated from DNS is studied for different Smagorinsky parameters and different grid sizes. The behaviour of this quantity for decreasing mesh size gives further insight into the reliability of these methods.
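A Richardson-type estimate of the kind these methods build on can be sketched as follows. The two-grid setup, the order p, and the refinement ratio are assumptions for illustration:

```python
def richardson_error(q_coarse, q_fine, p=2, r=2):
    """Estimated error of the fine-grid value q_fine, assuming the total
    error of the monitored quantity scales as C * h**p and the two grids
    differ by a refinement ratio r in spacing h."""
    return (q_coarse - q_fine) / (r**p - 1)

# Toy verification: exact value 1.0, error exactly C * h**p.
exact, C, p = 1.0, 1.0, 2
q_coarse = exact + C * 0.2**p   # h = 0.2
q_fine   = exact + C * 0.1**p   # h = 0.1
est = richardson_error(q_coarse, q_fine, p)
# est recovers the true fine-grid error q_fine - exact
```

The assumed power-law scaling is exactly what breaks down at marginal LES resolution, which is why such estimators must be checked against a full error landscape as done here.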
The α-modeling strategy is followed to derive a new subgrid parameterization of the turbulent stress tensor in large-eddy simulation (LES). The LES-α modeling yields an explicitly filtered subgrid parameterization which contains the filtered nonlinear gradient model as well as a model which represents 'Leray-regularization'. The LES-α model is compared with similarity and eddy-viscosity models that also use the dynamic procedure. Numerical simulations of a turbulent mixing layer are performed using both a second-order and a fourth-order accurate finite-volume discretization. The Leray model emerges as the most accurate, robust and computationally efficient among the three LES-α subgrid parameterizations for the turbulent mixing layer. The evolution of the resolved kinetic energy is analyzed and the various subgrid-model contributions to it are identified. By comparing LES-α at different subgrid resolutions, an impression of finite-volume discretization error dynamics is obtained.
The accuracy of large-eddy simulation (LES) of a turbulent premixed Bunsen flame is investigated in this paper. To distinguish between discretization and modeling errors, multiple LES, using different grid sizes h but the same filter width Δ, are compared with the direct numerical simulation (DNS). In addition, LES using various values of Δ but the same ratio Δ/h are compared. The chemistry in the LES and DNS is parametrized with the standard steady premixed flamelet for stoichiometric methane-air combustion. The subgrid terms are closed with an eddy-viscosity or eddy-diffusivity approach, with the exception of the dominant subgrid term, which is the subgrid part of the chemical source term. The latter subgrid contribution is modeled by a similarity model based upon 2Δ, which is found to be superior to such a model based upon Δ. Using the 2Δ similarity model for the subgrid chemistry, the LES produces good results, certainly in view of the fact that the LES is completely wrong if the subgrid chemistry model is omitted. The grid refinements of the LES show that the results for Δ = h do depend on the numerical scheme, much more than for h = Δ/2 and h = Δ/4. Nevertheless, modeling errors and discretization errors may partially cancel each other; occasionally the Δ = h results were more accurate than the h < Δ results. Finally, for this flame, LES results obtained with the present similarity model are shown to be slightly better than those obtained with the standard β-pdf closure for the subgrid chemistry.