Mesh Generation

Mark Filipiak
Edinburgh Parallel Computing Centre, The University of Edinburgh

Version 1.0
November 1996
Table of Contents
1 Introduction
  1.1 Discretisation and mesh type
  1.2 Mesh characteristics
2 Structured meshes
  2.1 Boundary-fitted meshes
  2.2 Problem solution on curved grids
  2.3 Boundary fitting grids on a single block
  2.4 Algebraic grid generation: interpolation
    2.4.1 Transfinite interpolation (TFI)
  2.5 PDE grid generation
    2.5.1 Elliptic grid generation
  2.6 Implementation in 3D
  2.7 Other methods
    2.7.1 Hyperbolic grid generation
    2.7.2 Parabolic grid generation
  2.8 Multiblock
    2.8.1 C, O, H grids
    2.8.2 Multiblock
  2.9 Summary
3 Unstructured Meshes
  3.1 Mesh requirements for the Finite Element Method
  3.2 Mesh generation methods
    3.2.1 Decomposition and mapping
    3.2.2 Grid based methods
    3.2.3 Advancing front
    3.2.4 Delaunay triangulation
    3.2.5 Other methods
    3.2.6 Smoothing
4 Adaptive Meshing
  4.1 Adaptive meshes
  4.2 Parallel mesh generation
References
1 Introduction
Continuous physical systems, such as the airflow around an aircraft, the stress concentration in a dam, the electric field in an integrated circuit, or the concentration of reactants in a chemical reactor, are generally modelled using partial differential equations. To perform simulations of these systems on a computer, these continuum equations need to be discretised, resulting in a finite number of points in space (and time) at which variables such as velocity, density, and electric field are calculated. The usual methods of discretisation (finite differences, finite volumes, and finite elements) use neighbouring points to calculate derivatives, and so there is the concept of a mesh or grid on which the computation is performed.
There are two mesh types, characterised by the connectivity of the points. Structured
meshes have a regular connectivity, which means that each point has the same
number of neighbours (for some grids a small number of points will have a different
number of neighbours). Unstructured meshes have irregular connectivity: each point
can have a different number of neighbours. Figure 1 gives an example of each type of
grid. In some cases part of the grid is structured and part unstructured (e.g., in viscous flows where the boundary layer could be structured and the rest of the flow unstructured).
Figure 1: Structured mesh (left) and unstructured mesh (right).
In the rest of this chapter, the various discretisation methods are described, with their mesh requirements. Chapter 2 describes the methods used to generate structured meshes in simple domains (algebraic and elliptic methods) and the extension to complex domains using multiblock. Chapter 3 describes methods used to generate unstructured meshes, concentrating on the two main methods for producing triangular/tetrahedral meshes: advancing front and Delaunay triangulation. Chapter 4 gives a very brief introduction to adaptive meshing.
1.1 Discretisation and mesh type
The main discretisation methods are finite differences [8], finite volumes (which is equivalent to finite differences) [9] and finite elements [10]. To illustrate the methods, we consider the conservation form of the convection equation

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho U) = S$$

where $\rho$ is the density, $U$ is the velocity, and $S$ is a source term. The $\rho U$ term is the flux of $\rho$.
The finite difference formulation approximates the derivatives using neighbouring points, e.g., for a regular, rectangular grid with spacing h in the x direction

$$\frac{\partial \rho}{\partial x} \approx \frac{1}{h}\left[\rho(x_{n+1}) - \rho(x_n)\right]$$
Although irregular grids can be used for finite differences, regular grids are invariably used. These lead to simple difference schemes and efficient solvers (on vector machines, for example). There can be problems with the finite difference scheme at coordinate singularities for certain regular grids (e.g., spherical polar coordinates) [12].
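To make the formula concrete, here is a minimal sketch (not from the report; the sample density field is arbitrary) of the forward-difference approximation on a regular grid:

    import numpy as np

    h = 0.1                             # regular grid spacing
    x = np.arange(0.0, 1.0 + h, h)
    rho = np.sin(x)                     # arbitrary sample density field

    # Forward difference: d(rho)/dx at x[n] ~ (rho[n+1] - rho[n]) / h,
    # first-order accurate on a regular grid.
    drho_dx = (rho[1:] - rho[:-1]) / h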
In the finite volume formulation, the physical space is split up into small volumes V and the partial differential equations are integrated over each of these volumes:

$$\frac{d}{dt}\int_V \rho \, dV + \oint_{\partial V} (\rho U) \cdot n \, d\Gamma = \int_V S \, dV$$
Then the variables are approximated by their average values in each volume, and the fluxes through the surfaces of each volume are approximated as functions of the variables in neighbouring volumes.

Finite volume discretisation can use both regular and irregular meshes. In an irregular mesh the fluxes through the surfaces are still well defined.
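As an illustration only (a sketch, not from the report; first-order upwind fluxes and explicit time stepping are assumed), a one-dimensional finite volume update for the convection equation above:

    import numpy as np

    N, dx, dt, u = 50, 0.02, 0.005, 1.0   # cells, cell size, time step, velocity
    rho = np.ones(N)                       # cell-average densities
    S = np.zeros(N)                        # source term

    # Upwind fluxes through cell faces (u > 0): each face carries the
    # density of the cell on its upwind side.
    flux_left = u * np.concatenate(([rho[0]], rho[:-1]))   # inflow face copies rho[0]
    flux_right = u * rho

    # Cell averages change by the source minus the net outflow per unit length.
    rho = rho + dt * (S - (flux_right - flux_left) / dx)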
The finite element method also splits the space up into small volumes, the 'elements'. In each element, the variables and fluxes are approximated using weighting functions. The computational variables are the coefficients of these weighting (or 'shape') functions.

Finite element methods generally use irregular meshes: there is no special advantage in using regular meshes.
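A minimal sketch of the shape-function idea (a linear 1D element; the names are illustrative, not from the report):

    # Linear shape functions on a 1D element [x0, x1]: the approximation is
    # u(x) = u0*N0(x) + u1*N1(x), and the nodal coefficients u0, u1 are the
    # computational variables.
    def interpolate(x, x0, x1, u0, u1):
        n0 = (x1 - x) / (x1 - x0)   # N0 is 1 at x0 and 0 at x1
        n1 = (x - x0) / (x1 - x0)   # N1 is 0 at x0 and 1 at x1
        return u0 * n0 + u1 * n1

    print(interpolate(0.25, 0.0, 1.0, 2.0, 4.0))   # -> 2.5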
1.2 Mesh characteristics
For all types of meshes, there are certain characteristics that we want to control.

• The local density of points. High density gives more accuracy, but computation takes longer. This leads to adaptive meshing methods, treated very briefly in Chapter 4.
• The smoothness of the point distribution. Large variations in grid density or shape can cause numerical diffusion or anti-diffusion and dispersion or refraction of waves. This can lead to inaccurate results or instability [12].
• The shape of the grid volumes. For instance, boundary layers in fluid flow require a grid that is very compressed normal to the flow direction. In the finite element method using triangular elements, the maximum angle must be bounded strictly below π to prove convergence of the method as the element size is reduced [11]; a small sketch checking this criterion follows the list.
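The maximum-angle criterion in the last item is easy to check per element. A minimal sketch (law of cosines; not code from the report):

    import numpy as np

    def max_angle(a, b, c):
        # Largest interior angle (radians) of the triangle with vertices a, b, c.
        a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
        # side lengths opposite each vertex
        la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(c - a), np.linalg.norm(a - b)
        angles = [np.arccos((lb**2 + lc**2 - la**2) / (2 * lb * lc)),   # at vertex a
                  np.arccos((lc**2 + la**2 - lb**2) / (2 * lc * la)),   # at vertex b
                  np.arccos((la**2 + lb**2 - lc**2) / (2 * la * lb))]   # at vertex c
        return max(angles)

    # A flat 'sliver' triangle has a maximum angle close to pi.
    print(max_angle((0, 0), (1, 0), (0.5, 0.01)))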
For simple domains, the choice between regular or irregular meshes is governed
mainly by the discretisation method. However, for complex domains (e.g., the inside
of a hydro-electric turbine) irregular meshes are preferred to regular meshes. Irregular mesh generation (at least for triangular or tetrahedral elements) can be fully automatic and fast. Regular mesh generation requires the domain to be split up into
simple blocks which are then meshed automatically. This block decomposition is at
best semi-automatic and can require man-months of user effort.
2 Structured meshes
This chapter begins with a discussion of boundary-fitted grids and the discretisation of PDEs on them, then deals with the main methods for grid generation of simple domains (using algebraic or differential equation methods), and then explains the multiblock concept used for more complicated domains. References [12] and [13] are detailed expositions of structured mesh generation.
2.1 Boundary-fitted meshes
Structured meshes are characterised by regular connectivity, i.e., the points of the grid can be indexed (by 2 indices in 2D, 3 indices in 3D) and the neighbours of each point can be calculated rather than looked up (e.g., the neighbours of the point (i,j) are at (i+1,j), (i-1,j), etc.). Meshes on a rectangular domain are trivial to generate (though some care needs to be taken in the discretisation at convex corners) and structured mesh generation techniques concentrate on meshing domains with irregular boundaries, e.g., the flow in blood vessels, or the deformation, flow and heat transfer in metal being formed in dies. Generally the meshes are generated so that they fit the boundaries, with one coordinate surface forming (part of) the boundary. This gives accurate solutions near the boundary and enables the use of fast and accurate solvers. For fluid flow these grids allow the easy application of turbulence models, which usually require the grid to be aligned with the boundary. The alternative is to use a rectangular grid which is clipped at the boundary, with local grid refinement near sharp features on the boundary ('Cartesian grids'). This will reduce the truncation order at the boundary and will require the mesh cells to be clipped at the boundary, increasing the complexity of the solver (even for a finite volume code). Cartesian grid generation is very fast and has been used for Euler aerodynamics [14]; it does not appear to be applicable to Navier-Stokes aerodynamics.
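To illustrate the index-computed connectivity described at the start of this section, a minimal sketch (the array names are illustrative, not from the report):

    import numpy as np

    ni, nj = 8, 6
    x = np.zeros((ni, nj))       # one grid value per index pair (i, j)

    def neighbours(i, j):
        # Neighbours are computed from the indices; no connectivity
        # table needs to be stored for a structured mesh.
        candidates = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        return [(p, q) for p, q in candidates if 0 <= p < ni and 0 <= q < nj]

    print(neighbours(3, 0))      # boundary points simply have fewer neighbours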
The most common method of generating boundary-fitting grids is to have one continuous grid that fits to all the boundaries. The effect is to fit a contiguous set of rectangular computational domains to a physical domain with curved boundaries (Figure 2).
Figure 2: Two-block boundary-fitted grid
It is difficult to fit complex domains with one mapping from a rectangular computational domain without generating excessively skewed grids. To get round this problem the domain is split up into blocks and each block is gridded, with some continuity requirements at the block interfaces; this is multiblock. The decomposition of the domain into blocks is usually done manually using CAD techniques and is slow.
An alternative to continuous boundary-fitted grids with multiple blocks is to use a boundary-fitting grid near each boundary, and a simple rectangular grid in the interior, and interpolate between them. These are called overset or chimera grids (Figure 3) [15].
Figure 3: Overset grids
This type of grid is easier to generate than a multiblock grid since each grid is local and doesn't need to match to others. The individual grids will generally be of high quality (low distortion). However, the interpolation can be difficult, especially with more than two grids overlapping, and it adds to the solver time (increasing it by 10%-30%). The overlapping grids cannot be too different in resolution and this can cause problems with the grids required for boundary layers in viscous flow simulations. Chimera grids are very useful for moving boundaries (e.g., helicopter blades) or multiple boundaries (e.g., particles in a fluid). Most of the grid remains fixed but the interpolation changes as the grids move with the boundaries.
Chimera grids do have certain advantages, and the recent work [16] on conservative interpolation methods has increased their usefulness. However, the bulk of structured grid generation is based on multiblock-type grids, and we will concentrate on these for the rest of this chapter.
2.2 Problem solution on curved grids
Once the grid is generated, the physical problem has to be discretised and solved on this grid. It is possible to calculate finite difference or finite volume formulas using the physical grid directly, but this will reduce the truncation order of the FD/FV scheme since the grids are generally non-uniform. The preferred method is to transform the equations used to model the problem into computational space. Since the physical grid is defined as some transformation from the rectangular computational grid, this process is straightforward. The resulting equations in the computational space will include the effects of the coordinate transformation, e.g., time derivatives of moving grids. The finite differencing can be done on the transformed equations, on the computational grid. Since the computational grid is usually completely regular, high order methods can be used, and the truncation order is preserved if the resulting FD/FV equations are transformed back to the physical grid. (Flow solvers can be developed to use chimera grids, with the extra book-keeping required to interpolate between each grid.)
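As a standard illustration of such a transformation (the particular form is generic, not quoted from the report), a first derivative of a variable u in physical coordinates becomes, by the chain rule,

$$\frac{\partial u}{\partial x} = \frac{\partial \xi}{\partial x}\frac{\partial u}{\partial \xi} + \frac{\partial \eta}{\partial x}\frac{\partial u}{\partial \eta}$$

so the solver differences u on the uniform computational grid in (ξ, η) and multiplies by metric terms such as ∂ξ/∂x obtained from the grid mapping.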
Although the order of FD/FV methods can be preserved, there are other effects of curvilinear grids on the accuracy of the solution.
• Grid size variation gives numerical diffusion (or anti-diffusion leading to instability). The effect is worst with small grid size and large solution gradients.
• Non-orthogonality, or skewness, of the (physical) grid increases the coefficient of the truncation error, but doesn't change the order. Skewness of up to 45° is acceptable in the centre of the grid, but one-sided differences are used at boundaries, where orthogonality should be maintained. These limits on skewness are the main reason that multiblock methods are used.
• In a multiblock scheme, the corner junctions between blocks are usually points with non-standard connectivity (e.g., in 2D, 5 grid lines from a point rather than 4) and these need special treatment (e.g., for finite volume discretisation these points need to be discretised in physical space).
2.3 Boundary fitting grids on a single block
For simple domains, a single grid can be used without leading to excessive skewness. The computational space is a rectangle or cuboid, or at least has a rectangular boundary, with a regular, rectangular grid. We then need to define a 1-1 mapping from the computational space to the physical space. The methods currently used are:

• algebraic grid generation, where the grid is interpolated from the physical boundaries,
• methods where a PDE for the grid point coordinates is solved,
• variational grid generation, where some functional, e.g., smoothness, is maximised/minimised (variational grid generation is not treated in this report, see [13] for details),
• other methods (not treated in this report), e.g., conformal mapping [12], [13].
2.4 Algebraic grid generation: interpolation
The method used in algebraic grid generation is transfinite interpolation (TFI).
2.4.1 Transfinite interpolation (TFI)
Working in 2D (the extension to 3D is straightforward), take the computational domain as the unit square [0,1] × [0,1] with coordinates s and t, and the physical domain as some region with coordinates x and y. To generate the grid in the physical space, we would create a grid in the unit square and map this into the physical domain. There are two requirements for this mapping (Figure 4):

• it must be 1-1, and
• the boundaries of the computational space must map into the boundaries of the physical space,
but otherwise the mapping can be arbitrary, although it may not produce a good grid.
Figure 4: Mapping of boundaries
TFI is one kind of mapping, where the physical coordinates, treated as a function of the computational coordinates, are interpolated from their values on the boundaries of the computational domain. Since the data is given at a non-denumerable number of points, the interpolation is called transfinite.
The 2D interpolation is constructed as a linear combination of two 1D interpolations (also called projections) and their product. First define the blending functions $\phi_0$, $\phi_1$ and $\theta_0$, $\theta_1$ that will be used to interpolate in each direction. $\phi_0$ interpolates values from the $s=0$ boundary into the domain, $\phi_1$ interpolates values from the $s=1$ boundary into the domain, and similarly for $\theta_0$, $\theta_1$ in the $t$ direction. The requirements on $\phi_0(s)$ and $\phi_1(s)$ are

$$\phi_0(0) = 1, \quad \phi_0(1) = 0, \qquad \phi_1(0) = 0, \quad \phi_1(1) = 1$$

and similarly for $\theta_0$, $\theta_1$ in the $t$ direction. The simplest blending function is linear, giving linear interpolation:

$$\phi_0(s) = 1 - s, \qquad \phi_1(s) = s$$
The 1D transfinite interpolations (projections) are formed as follows (for the $x$ coordinate):

$$P_s[x](s,t) = \phi_0(s)\,x(0,t) + \phi_1(s)\,x(1,t)$$
$$P_t[x](s,t) = \theta_0(t)\,x(s,0) + \theta_1(t)\,x(s,1)$$
and the product projection is

$$P_sP_t[x](s,t) = \phi_0(s)\theta_0(t)\,x(0,0) + \phi_1(s)\theta_0(t)\,x(1,0) + \phi_0(s)\theta_1(t)\,x(0,1) + \phi_1(s)\theta_1(t)\,x(1,1)$$

which is the finite interpolant for the values of $x$ at the four corners of the computational domain. The 2D transfinite interpolation is then

$$(P_s \oplus P_t)[x] = P_s[x] + P_t[x] - P_sP_t[x]$$
which interpolates to the entire boundary. To form the grid in the physical space, this interpolant is used to map the points of a regular grid in the computational space (Figure 5).
Figure 5: Transfinite interpolation
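The construction above translates directly into code. The following minimal sketch uses the linear blending functions defined above; the boundary-curve arguments bottom, top, left, right are illustrative names for caller-supplied functions, not part of the report:

    import numpy as np

    def tfi(bottom, top, left, right, ns, nt):
        # Transfinite interpolation with linear blending functions.
        # bottom(s), top(s), left(t), right(t) return boundary points (x, y);
        # the four corner values must be consistent, e.g. bottom(0) == left(0).
        grid = np.zeros((ns, nt, 2))
        for i, s in enumerate(np.linspace(0.0, 1.0, ns)):
            for j, t in enumerate(np.linspace(0.0, 1.0, nt)):
                ps = (1 - s) * np.asarray(left(t)) + s * np.asarray(right(t))
                pt = (1 - t) * np.asarray(bottom(s)) + t * np.asarray(top(s))
                pst = ((1 - s) * (1 - t) * np.asarray(bottom(0.0))
                       + s * (1 - t) * np.asarray(bottom(1.0))
                       + (1 - s) * t * np.asarray(top(0.0))
                       + s * t * np.asarray(top(1.0)))
                grid[i, j] = ps + pt - pst   # Boolean sum Ps + Pt - PsPt
        return grid

    # Example: unit square with a curved top boundary.
    g = tfi(bottom=lambda s: (s, 0.0),
            top=lambda s: (s, 1.0 + 0.2 * np.sin(np.pi * s)),
            left=lambda t: (0.0, t),
            right=lambda t: (1.0, t),
            ns=21, nt=11)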
It is possible to extend TFI so that it interpolates to several coordinate lines, not just the boundaries. This is useful to control the grid density and also may be required to ensure a 1-1 mapping (i.e., a valid grid). The blending functions would then be, for example, cubic splines. It is also possible to match normal derivatives, e.g., $\partial x/\partial s$, at the boundaries. This allows grid slope continuity between the blocks of a multiblock grid.
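For derivative matching, the blending functions can be taken as the standard cubic Hermite basis (a common choice; the report does not list a specific form):

$$\phi_0(s) = 2s^3 - 3s^2 + 1, \qquad \phi_1(s) = -2s^3 + 3s^2$$

with the two extra functions $s^3 - 2s^2 + s$ and $s^3 - s^2$ weighting the prescribed boundary derivatives.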
There are some problems with TFI. The mapping will propagate boundary singularities (corners) into the interior of the domain, which is not ideal for fluid flow simulations. A more serious problem is that if the mapping is not 1-1 then overspill occurs (Figure 6).
Figure 6: Overspill
This can be corrected by re-parameterising the boundaries, or adding constraint lines (surfaces in 3D) inside the domain.
However, TFI grid generation is very fast and is the only competitive method for 3D. It is possible to use a PDE-based generator to refine a grid produced using TFI.
2.5 PDE grid generation
The idea behind these methods is to define PDEs for the physical space coordinates in terms of the computational space coordinates, and then solve these equations on a grid in computational space to create a grid in physical space. The main type of PDE used is elliptic, which is suitable for domains with closed boundaries (for unbounded domains, a fictitious boundary at large distances is used). The alternatives are hyperbolic and parabolic equations, which are used for unbounded domains.
2.5.1 Elliptic grid generation
These methods start with a PDE for the computational space coordinates $\xi^1$ and $\xi^2$ in terms of the physical space coordinates $x^1$ and $x^2$ (the extension to 3D is straightforward). The simplest example is the Laplace equation

$$\nabla^2 \xi^i = 0$$
Parts of the physical boundary, corresponding to coordinate lines, are then 'equipotential surfaces' (Figure 7).

Figure 7: Elliptic grid generation boundaries
In 2D, the extremum principle guarantees that the mapping is 1-1, and the Laplace
operator is smoothing so that boundary slope discontinuities are not propagated into
the interior.
In order to solve the equation in its current form, you would already need a grid in the physical domain, so the equation is transformed so that the $x^i$ are the dependent variables and the $\xi^i$ are the independent variables. Using the equation for $\xi^k$ in the relation

$$\nabla^2 x = g^{ij}\frac{\partial^2 x}{\partial \xi^i \partial \xi^j} + (\nabla^2 \xi^k)\frac{\partial x}{\partial \xi^k} = 0$$

gives

$$g^{ij}\frac{\partial^2 x}{\partial \xi^i \partial \xi^j} = 0$$

where the (contravariant) metric tensor $g^{ij}$ can be expressed in terms of $g_{ij} = \frac{\partial x}{\partial \xi^i} \cdot \frac{\partial x}{\partial \xi^j}$ (see [12] for details).
This quasi-linear equation can then be solved in the rectangular $\xi$ domain, with the location of the points on the physical boundaries as the boundary data at the points on the computational boundaries. Any standard solver can be used, and an algebraically generated grid can be used as the initial guess to ensure/increase convergence.
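A minimal sketch of this procedure (the standard second-order central discretisation of the homogeneous transformed system with a simple point iteration; not a specific solver from the report). The TFI grid from the earlier sketch could serve as the initial guess, and the boundary points stay fixed:

    import numpy as np

    def elliptic_smooth(x, y, iterations=200):
        # x, y: float arrays of shape (ni, nj) with boundary points already set.
        # Iterates g22*x_ss - 2*g12*x_st + g11*x_tt = 0 (and likewise for y)
        # on the interior points; the computational grid spacing is taken as 1.
        for _ in range(iterations):
            x_s = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1])
            y_s = 0.5 * (y[2:, 1:-1] - y[:-2, 1:-1])
            x_t = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2])
            y_t = 0.5 * (y[1:-1, 2:] - y[1:-1, :-2])
            a = x_t**2 + y_t**2                  # g22
            b = x_s * x_t + y_s * y_t            # g12
            c = x_s**2 + y_s**2                  # g11
            x_st = 0.25 * (x[2:, 2:] - x[2:, :-2] - x[:-2, 2:] + x[:-2, :-2])
            y_st = 0.25 * (y[2:, 2:] - y[2:, :-2] - y[:-2, 2:] + y[:-2, :-2])
            x[1:-1, 1:-1] = (a * (x[2:, 1:-1] + x[:-2, 1:-1])
                             + c * (x[1:-1, 2:] + x[1:-1, :-2])
                             - 2.0 * b * x_st) / (2.0 * (a + c))
            y[1:-1, 1:-1] = (a * (y[2:, 1:-1] + y[:-2, 1:-1])
                             + c * (y[1:-1, 2:] + y[1:-1, :-2])
                             - 2.0 * b * y_st) / (2.0 * (a + c))
        return x, y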
Grids based on the Laplace equation are not very flexible. The coordinate lines tend to be equally spaced, except near convex or concave boundaries, and it is not possible to control the grid line slope at the boundaries (since this is a 2nd order equation).
To control the grid spacing and slope, we can add a source term (control function), giving Poisson's equation

$$\nabla^2 \xi^i = P^i$$
The effect of the control functions is shown in Figure 8.

Figure 8: Effect of control functions
The metric tensor is factored out of the source terms to give, in general,

$$\nabla^2 \xi^k = g^{ij} P^k_{ij}$$

The transformed equations now become

$$g^{ij}\left(\frac{\partial^2 x}{\partial \xi^i \partial \xi^j} + P^k_{ij}\frac{\partial x}{\partial \xi^k}\right) = 0$$
Usually only one-dimensional stretching in each direction is required; then the source term is $g^{ii}P_i$ (no sum). In theory, the source terms are defined in terms of the physical coordinates, but in practice they are defined in terms of the computational coordinates, so they will cause attraction to/repulsion from grid points rather than fixed physical points. This makes the direct (manual) use of control functions rather difficult, and they tend to be used indirectly to achieve a variety of effects (a small stretching sketch follows this list):
• Because of the strong smoothing, the coordinate lines will tend to be equally spaced in the interior of the domain. To get the boundary point spacing to propagate into the interior, source terms can be derived from boundary data to give the desired spacing, and then interpolated (using transfinite interpolation).
• The point spacing is fixed on the boundary, and since the grid generation equations are 2nd order, it is not possible to control the coordinate line slope at the boundary as well. One possible solution is to use the biharmonic equation rather than Laplace's equation, but the preferred method is to use control functions. In Steger and Sorenson's method [12] the control functions are iteratively adjusted as the elliptic equation is solved, and the grid line slope at the boundary, the grid point spacing along the boundary, and the spacing of the first parallel grid line from the boundary can all be controlled. This can be used to give a grid that is orthogonal at the boundary (e.g., for boundary layers). It can also be used to give slope and spacing continuity for patched grids (for embedded grids in multiblock, but see later for a better but slower method of matching at block boundaries).
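As a small illustration of the one-dimensional stretching mentioned above (the tanh clustering law here is one standard choice, not prescribed by the report), a stretched boundary point distribution such as this can supply the desired spacing from which interpolated source terms are derived:

    import numpy as np

    def tanh_cluster(n, delta=2.0):
        # Maps uniform s in [0, 1] to points clustered near 0; larger delta
        # gives stronger clustering (e.g., for a boundary layer).
        s = np.linspace(0.0, 1.0, n)
        return 1.0 + np.tanh(delta * (s - 1.0)) / np.tanh(delta)

    print(tanh_cluster(6))   # spacing grows away from the clustered end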
2.6 Implementation in 3D
In 3D we have a cuboid as the computational domain and the generation process proceeds in the order: edges, surfaces, volume. First we choose the computational grid. The gridding of the edges is calculated using a 1D Poisson equation. Then the bound-