
1 Cornell University, September 17, 2002, Ithaca, New York, USA. The Development of Unstructured Grid Methods for Computational Aerodynamics. Dimitri J. Mavriplis, ICASE, NASA Langley Research Center, Hampton, VA 23681, USA

2 Overview. Structured vs. unstructured meshing approaches. Development of an efficient unstructured grid solver – discretization, multigrid solution, parallelization. Examples of unstructured mesh CFD capabilities – large-scale high-lift case, typical transonic design study. Areas of current research – adaptive mesh refinement, moving and overlapping meshes.

3 CFD Perspective on Meshing Technology. CFD was initiated in the structured grid context – transfinite interpolation, elliptic grid generation, hyperbolic grid generation. Smooth, orthogonal structured grids; relatively simple geometries.
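
For readers unfamiliar with structured grid generation, here is a minimal Python sketch of 2D transfinite interpolation (a Coons patch): a single block is filled from its four boundary curves by blending opposite edges and subtracting the bilinear corner term. The boundary-curve functions and point counts below are illustrative, not taken from the talk.

```python
import numpy as np

def transfinite_interpolation(bottom, top, left, right, ni, nj):
    """Fill a 2D structured block from four boundary curves (Coons patch).

    bottom, top : callables of xi  in [0,1] returning (x, y)
    left, right : callables of eta in [0,1] returning (x, y)
    Returns an (ni, nj, 2) array of grid point coordinates.
    """
    grid = np.zeros((ni, nj, 2))
    for i, s in enumerate(np.linspace(0.0, 1.0, ni)):
        for j, t in enumerate(np.linspace(0.0, 1.0, nj)):
            # Linear blend of opposite boundaries minus the bilinear corner term
            edge_blend = ((1 - t) * np.array(bottom(s)) + t * np.array(top(s))
                          + (1 - s) * np.array(left(t)) + s * np.array(right(t)))
            corner_blend = ((1 - s) * (1 - t) * np.array(bottom(0)) + s * (1 - t) * np.array(bottom(1))
                            + (1 - s) * t * np.array(top(0)) + s * t * np.array(top(1)))
            grid[i, j] = edge_blend - corner_blend
    return grid

# Example: a unit square with a bulged top boundary
grid = transfinite_interpolation(
    bottom=lambda s: (s, 0.0),
    top=lambda s: (s, 1.0 + 0.2 * np.sin(np.pi * s)),
    left=lambda t: (0.0, t),
    right=lambda t: (1.0, t),
    ni=21, nj=11)
```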

4 CFD Perspective on Meshing Technology: sophisticated multiblock structured grid techniques for complex geometries. Engine nacelle multiblock grid generated with the commercial software TrueGrid.

5 CFD Perspective on Meshing Technology: sophisticated overlapping structured grid techniques for complex geometries. Overlapping grid system on the Space Shuttle (Slotnick, Kandula and Buning, 1994).

6 Unstructured Grid Alternative: connectivity stored explicitly; single homogeneous data structure.
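
To make "connectivity stored explicitly" concrete, here is a minimal Python sketch of the idea: vertex coordinates plus a cell-to-vertex table, from which a single homogeneous edge list is derived. The tiny two-triangle mesh is purely illustrative.

```python
import numpy as np

# Explicitly stored connectivity for a tiny 2D unstructured mesh of two triangles.
xyz = np.array([[0.0, 0.0],      # vertex coordinates, one row per vertex
                [1.0, 0.0],
                [1.0, 1.0],
                [0.0, 1.0]])

cells = np.array([[0, 1, 2],     # cell-to-vertex connectivity, one row per triangle
                  [0, 2, 3]])

# Derive the unique edge list (vertex pairs) from the cells -- this single,
# homogeneous edge array is the structure the solver loops over.
edge_set = set()
for tri in cells:
    for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        edge_set.add((min(a, b), max(a, b)))
edges = np.array(sorted(edge_set))
print(edges)   # [[0 1] [0 2] [0 3] [1 2] [2 3]]
```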

7 Characteristics of Both Approaches. Structured grids – logically rectangular; support dimensional-splitting algorithms; banded matrices; blocked or overlapped for complex geometries. Unstructured grids – lists of cell connectivity, graphs (edges, vertices); alternate discretizations/solution strategies; sparse matrices; complex geometries and adaptive meshing; more efficient parallelization.

8 Discretization. Governing equations: Reynolds-averaged Navier-Stokes equations – conservation of mass, momentum and energy; single-equation turbulence model (Spalart-Allmaras): convection-diffusion-production. Vertex-based discretization – 2nd-order upwind finite-volume scheme; 6 variables per grid point; flow equations fully coupled (5x5); turbulence equation uncoupled.

9 Spatial Discretization. Mixed-element meshes – tetrahedra, prisms, pyramids, hexahedra. Control volumes based on median duals – fluxes evaluated on edges; a single edge-based data structure represents all element types.
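
The following is a minimal Python sketch of the edge-based residual loop implied here, with an illustrative scalar advection flux standing in for the actual second-order upwind Navier-Stokes flux: each edge carries a median-dual face normal, the flux is evaluated once per edge, and it is scattered with opposite signs to the edge's two vertices.

```python
import numpy as np

def edge_loop_residual(edges, normals, u):
    """Accumulate a vertex-based residual with a single loop over edges.

    edges   : (nedge, 2) int array of vertex pairs
    normals : (nedge, dim) dual-face normals associated with each edge
    u       : (nvert,) scalar state at the vertices (placeholder for the flow state)
    """
    res = np.zeros_like(u)
    for (i, j), n in zip(edges, normals):
        # First-order upwind flux for a unit advection velocity (1, 0);
        # a real solver evaluates the upwind Navier-Stokes flux here.
        a_dot_n = n[0]
        flux = a_dot_n * (u[i] if a_dot_n > 0 else u[j])
        res[i] += flux   # outflow from control volume i
        res[j] -= flux   # inflow to control volume j
    return res
```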

10 Spatially Discretized Equations: integrate to steady state. Explicit – simple but slow (local procedure). Implicit – large memory requirements. Matrix-free implicit – most effective with a matrix preconditioner. Multigrid methods.
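
A hedged sketch of the simple explicit option: a multistage (Runge-Kutta-like) pseudo-time march toward steady state. The stage coefficients, the time step, and the `residual` callable are placeholders, not the scheme actually used in the solver described in the talk.

```python
def explicit_steady_state(u, residual, dt, nsteps=500):
    """March du/dt = -R(u) toward steady state with a 3-stage explicit scheme.

    u        : NumPy array of vertex states
    residual : callable returning R(u)
    dt       : (possibly local) pseudo-time step
    """
    alphas = (0.6, 0.6, 1.0)        # illustrative stage coefficients
    for _ in range(nsteps):
        u0 = u.copy()
        for a in alphas:            # each stage restarts from the step's initial state
            u = u0 - a * dt * residual(u)
    return u
```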

11 Multigrid Methods. High-frequency (local) error is rapidly reduced by explicit methods; low-frequency (global) error converges slowly. On a coarser grid, low-frequency error appears as high-frequency error.

12 Multigrid Correction Scheme (Linear Problems)
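
The slide itself is a diagram; in its place, here is the linear two-grid correction scheme in its standard textbook form (not necessarily the exact cycle used in the talk): pre-smooth, restrict the residual, solve the coarse problem for a correction, prolong, correct, and post-smooth. Dense NumPy matrices and a damped Jacobi smoother are used purely for brevity.

```python
import numpy as np

def jacobi(A, u, f, nsweeps=3, omega=0.8):
    """Damped Jacobi smoother for A u = f."""
    D = np.diag(A)
    for _ in range(nsweeps):
        u = u + omega * (f - A @ u) / D
    return u

def two_grid_cycle(A, u, f, R, P):
    """One correction cycle for A u = f.

    R : restriction operator (coarse x fine), P : prolongation operator (fine x coarse).
    """
    u = jacobi(A, u, f)               # pre-smoothing kills high-frequency error
    r = f - A @ u                     # fine-grid residual
    Ac = R @ A @ P                    # Galerkin coarse-grid operator
    ec = np.linalg.solve(Ac, R @ r)   # coarse problem: remaining low frequencies now look high-frequency
    u = u + P @ ec                    # prolongate and apply the correction
    return jacobi(A, u, f)            # post-smoothing
```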

13 Multigrid for Unstructured Meshes: generate fine and coarse meshes; interpolate between non-nested meshes. Finest grid: 804,000 points, 4.5M tetrahedra; four-level multigrid sequence.

14 Geometric Multigrid: order-of-magnitude increase in convergence speed; convergence rate equivalent to structured grid schemes; independent of grid size: O(N).

15 Agglomeration vs. Geometric Multigrid. Multigrid methods – time-step on coarse grids to accelerate the solution on the fine grid. Geometric multigrid – coarse grid levels constructed manually; cumbersome in a production environment. Agglomeration multigrid – automates coarse-level construction; algebraic in nature (summing fine grid equations); graph-based algorithm.

16 Agglomeration Multigrid: agglomeration multigrid solvers for unstructured meshes – coarse-level meshes constructed by agglomerating fine grid cells/equations.
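
A minimal sketch of one way to agglomerate fine grid cells/equations: a greedy pass over the fine-grid graph in which each unassigned vertex seeds a new coarse "super-cell" and absorbs its unassigned neighbors. Seed ordering and the quality controls a production agglomerator would use are omitted; function and variable names are illustrative.

```python
def agglomerate(adjacency):
    """Greedy agglomeration of a fine-grid graph.

    adjacency : list of neighbor lists, one per fine vertex/cell
    Returns (coarse_id, ncoarse): a fine-to-coarse index map and the coarse size.
    """
    n = len(adjacency)
    coarse_id = [-1] * n
    ncoarse = 0
    for v in range(n):
        if coarse_id[v] != -1:
            continue
        coarse_id[v] = ncoarse          # v seeds a new agglomerate
        for w in adjacency[v]:          # absorb its unassigned neighbors
            if coarse_id[w] == -1:
                coarse_id[w] = ncoarse
        ncoarse += 1
    return coarse_id, ncoarse
```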

17 Agglomeration Multigrid: automated graph-based coarsening algorithm; coarse levels are graphs; coarse-level operator by Galerkin projection; grid-independent convergence rates (order-of-magnitude improvement).
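
To illustrate the Galerkin coarse-level operator in the agglomeration setting: with a piecewise-constant restriction R built from the fine-to-coarse map (so restriction simply sums the fine grid equations inside each agglomerate) and prolongation P = R^T, the coarse operator is A_c = R A P. Dense arrays are used for brevity; this is a sketch, not the production implementation.

```python
import numpy as np

def galerkin_coarse_operator(A, coarse_id, ncoarse):
    """Build the agglomeration coarse operator A_c = R A P.

    coarse_id : fine-to-coarse index map (e.g. from the agglomeration pass above).
    R sums the fine equations within each agglomerate; P = R^T injects corrections back.
    """
    nfine = len(coarse_id)
    R = np.zeros((ncoarse, nfine))
    for v, c in enumerate(coarse_id):
        R[c, v] = 1.0
    P = R.T
    return R @ A @ P

# Usage with the greedy agglomeration sketch:
#   Ac = galerkin_coarse_operator(A, *agglomerate(adjacency))
```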

18 Agglomeration MG for the Euler Equations: convergence rate similar to geometric MG; completely automatic.

19 Anisotropy-Induced Stiffness. Convergence rates for RANS (viscous) problems are much slower than for inviscid flows – mainly due to grid stretching in thin boundary-layer and wake regions, and mixed-element (prism-tet) grids. Use a directional solver to relieve the stiffness – line solver in anisotropic regions.

20 Directional Solver for Navier-Stokes Problems. Line solvers for anisotropic problems – lines constructed in the mesh using a weighted-graph algorithm; strong connections assigned a large graph weight; (block) tridiagonal line solver, similar to structured grids.
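
A sketch of the tridiagonal solve applied along one implicit line, in scalar form (the Thomas algorithm). The production solver works with 5x5 blocks per point, but the forward-elimination/back-substitution recurrence is the same.

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system along one implicit line (scalar Thomas algorithm).

    lower[i] couples point i to i-1, upper[i] couples point i to i+1
    (lower[0] and upper[-1] are unused).
    """
    n = len(diag)
    d = diag.astype(float).copy()
    b = rhs.astype(float).copy()
    for i in range(1, n):                   # forward elimination
        w = lower[i] / d[i - 1]
        d[i] -= w * upper[i - 1]
        b[i] -= w * b[i - 1]
    x = np.zeros(n)
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):          # back substitution
        x[i] = (b[i] - upper[i] * x[i + 1]) / d[i]
    return x
```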

21 Implementation on Parallel Computers. Edges intersected by partition boundaries are resolved by ghost vertices, which generates communication between each original vertex and its ghost copies – handled using MPI and/or OpenMP; portable to distributed- and shared-memory architectures; local reordering within each partition for cache locality.
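
A hedged sketch of one halo exchange between partitions using mpi4py's pickle-based sendrecv: each rank sends the current values of the owned vertices its neighbor needs, and overwrites its own ghost copies with the values it receives. The send/receive maps are assumed to have been built at partitioning time; a production code would typically use non-blocking, buffer-based MPI calls instead.

```python
from mpi4py import MPI

def exchange_ghost_values(u, send_map, recv_map, comm=MPI.COMM_WORLD):
    """One halo exchange of vertex values between mesh partitions.

    u        : mutable sequence of local vertex values (owned + ghost entries)
    send_map : {neighbor_rank: [local indices of owned vertices that rank ghosts]}
    recv_map : {neighbor_rank: [local indices of our ghost vertices owned by that rank]}
    """
    for nbr in sorted(set(send_map) | set(recv_map)):
        outgoing = [u[i] for i in send_map.get(nbr, [])]
        # Combined send/receive with the same neighbor avoids deadlock
        incoming = comm.sendrecv(outgoing, dest=nbr, source=nbr)
        for idx, val in zip(recv_map.get(nbr, []), incoming):
            u[idx] = val   # overwrite ghost copies with the owning rank's values
    return u
```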

22 Partitioning. Graph partitioning must minimize the number of cut edges to minimize communication. Standard graph-based partitioners (Metis, Chaco, Jostle) require only a weighted-graph description of the grid, with edge and vertex weights taken as unity – ideal for the edge data structure. The line solver is inherently sequential – partition around the lines using weighted graphs.
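
A small sketch of the "weighted graph description" such partitioners consume: the grid's edge list converted to compressed sparse row (xadj/adjncy) adjacency arrays, with all weights left at unity. The call into an actual partitioning library is omitted here, since its exact signature depends on the wrapper being used.

```python
def mesh_to_csr(edges, nvert):
    """Convert an edge list into the CSR (xadj/adjncy) graph description
    expected by partitioners such as METIS. Edge/vertex weights default to unity."""
    neighbors = [[] for _ in range(nvert)]
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    xadj, adjncy = [0], []
    for nbrs in neighbors:
        adjncy.extend(sorted(nbrs))
        xadj.append(len(adjncy))
    return xadj, adjncy

# adjncy[xadj[v]:xadj[v+1]] is the neighbor list of vertex v; these two arrays
# (plus unit weights) fully describe the grid to the partitioner.
```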

23 Partitioning: contract the graph along the implicit lines; weight edges and vertices; partition the contracted graph; decontract the graph. Lines are guaranteed never to be broken, at the cost of a possible small increase in imbalance/cut edges.
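
A minimal sketch of the contraction step: every implicit line collapses to a single vertex whose weight is the number of points in the line, and fine-grid edges joining different lines are merged into weighted edges of the contracted graph. Partitioning this contracted graph and then decontracting guarantees lines are never split. Function and variable names are illustrative.

```python
from collections import defaultdict

def contract_along_lines(edges, line_of):
    """Collapse each implicit line to a single weighted vertex.

    edges   : iterable of (i, j) fine-graph edges
    line_of : maps each fine vertex to its line id (isolated vertices get their own id)
    Returns (edge_weights, vertex_weights) of the contracted graph.
    """
    vweight = defaultdict(int)
    for v, line in enumerate(line_of):
        vweight[line] += 1                        # vertex weight = points in the line
    eweight = defaultdict(int)
    for i, j in edges:
        a, b = line_of[i], line_of[j]
        if a != b:
            eweight[(min(a, b), max(a, b))] += 1  # merge parallel edges into one weighted edge
    return dict(eweight), dict(vweight)
```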

24 Partitioning Example: 32-way partition of a 30,562-point 2D grid. Unweighted partition: 2.6% of edges cut, 2.7% of lines cut. Weighted partition: 3.2% of edges cut, 0% of lines cut.

25 Sample Calculations and Validation. Subsonic high-lift case – geometrically complex; large case (25 million points, 1450 processors); research-environment demonstration case. Transonic wing-body – smaller grid sizes; full matrix of Mach and CL conditions; typical of production runs in a design environment.

26 NASA Langley Energy Efficient Transport. Complex geometry – wing-body, slat, double-slotted flaps, cutouts. Experimental data from the Langley 14x22 ft wind tunnel – Mach = 0.2, Reynolds number = 1.6 million; range of incidences: -4 to 24 degrees.

27 VGRID Tetrahedral Mesh: 3.1 million vertices, 18.2 million tetrahedra, 115,489 surface points. Normal spacing: 1.35E-06 chords; growth factor = 1.3.

28 Computed Pressure Contours on the Coarse Grid: Mach = 0.2, incidence = 10 degrees, Re = 1.6M.

29 Spanwise Stations for Cp Data. Experimental data at 10 degrees incidence.

30 Comparison of Surface Cp at Middle Station

31 Computed Versus Experimental Results: good drag prediction; discrepancies near stall.

32 Multigrid Convergence History: demonstrates the mesh-independent convergence property of multigrid.

33 Parallel Scalability. Good overall multigrid scalability – increased communication due to the coarse grid levels; single-grid solution impractical (>100 times slower). 1-hour solution time on 1450 PEs.

34 AIAA Drag Prediction Workshop (2001): transonic wing-body configuration; typical cases required for a design study – matrix of Mach and CL values; grid resolution study. Follow-on with engine effects (2003).

35 Cases Run. Baseline grid (1.6 million points) – full drag polars for Mach = 0.5, 0.6, 0.7, 0.75, 0.76, 0.77, 0.78, 0.8; total = 72 cases. Medium grid (3 million points) – full drag polar for each Mach number; total = 48 cases. Fine grid (13 million points) – drag polar at Mach = 0.75; total = 7 cases.

36 Sample Solution (1.65M points): Mach = 0.75, CL = 0.6, Re = 3M; 2.5 hours on 16 Pentium IV 1.7 GHz processors.

37 Drag Polar at Mach = 0.75: grid resolution study; good comparison with experimental data.

38 Comparison with Experiment: grid drag values; incidence offset for the same CL.

39 Drag Polars at Other Mach Numbers: grid resolution study; discrepancies at higher Mach/CL conditions.

40 Drag Rise Curves: grid resolution study; discrepancies at higher Mach/CL conditions.

41 Cases Run on the ICASE Cluster: 120 cases (excluding the finest grid); about 1 week to compute all cases.

42 Timings on Various Architectures

43 Adaptive Meshing. Potential for large savings through optimized mesh resolution – well suited for problems with a large range of scales; possibility of error estimation/control; requires tight CAD coupling (surface points). Mechanics of mesh adaptation; refinement criteria and error estimation.

44 Mechanics of Adaptive Meshing. Various well-known isotropic mesh methods – mesh movement (spring analogy, linear elasticity); local remeshing; Delaunay point insertion/retriangulation; edge-face swapping; element subdivision. Mixed elements (non-simplicial) require anisotropic refinement in transition regions.
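
A hedged Python sketch of the spring-analogy mesh-movement option mentioned above: each edge is modeled as a linear spring with stiffness inversely proportional to its length, prescribed boundary displacements are held fixed, and interior vertices are relaxed toward the stiffness-weighted average of their neighbors. The Jacobi relaxation and the stiffness law are illustrative choices, and a connected mesh is assumed.

```python
import numpy as np

def spring_analogy_move(xyz, edges, boundary_ids, boundary_disp, nsweeps=200):
    """Propagate a boundary displacement into the interior with the spring analogy.

    xyz           : (nvert, dim) vertex coordinates
    edges         : iterable of (i, j) vertex pairs
    boundary_ids  : indices of vertices with prescribed displacement
    boundary_disp : (len(boundary_ids), dim) prescribed displacements
    """
    disp = np.zeros_like(xyz)
    disp[boundary_ids] = boundary_disp
    is_bnd = np.zeros(len(xyz), dtype=bool)
    is_bnd[boundary_ids] = True
    # Stiffness = inverse edge length, so short (boundary-layer) edges stay nearly rigid
    k = np.array([1.0 / np.linalg.norm(xyz[i] - xyz[j]) for i, j in edges])
    for _ in range(nsweeps):
        num = np.zeros_like(xyz)
        den = np.zeros(len(xyz))
        for (i, j), kij in zip(edges, k):
            num[i] += kij * disp[j]; den[i] += kij
            num[j] += kij * disp[i]; den[j] += kij
        new = num / den[:, None]
        disp[~is_bnd] = new[~is_bnd]      # boundary displacements stay fixed
    return xyz + disp
```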

45 Subdivision Types for Tetrahedra

46 Subdivision Types for Prisms

47 Subdivision Types for Pyramids

48 Subdivision Types for Hexahedra

49 Adaptive Tetrahedral Mesh by Subdivision

50 Adaptive Hexahedral Mesh by Subdivision

51 Adaptive Hybrid Mesh by Subdivision

52 Overlapping Unstructured Meshes: an alternative to moving meshes for large-scale relative geometry motion. Multiple overlapping meshes treated as a single data structure – dynamic determination of active/inactive/ghost cells. Advantages for parallel computing – obviates the dynamic load rebalancing required with mesh-motion techniques, although intergrid communication must be dynamically recomputed and rebalanced. Concept of a rendezvous grid (Plimpton and Hendrickson).
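
A deliberately crude sketch of the active/inactive/fringe (ghost) classification step for overlapping meshes: background cells whose centroids fall inside an axis-aligned "hole" box are deactivated, and active cells adjacent to inactive ones are flagged as fringe cells, which would receive their data by interpolation from the overlapping mesh. Real overset hole-cutting and donor searches are far more involved; the box test and all names here are illustrative assumptions, not the method used in the talk.

```python
import numpy as np

def classify_overset_cells(centroids, hole_min, hole_max, adjacency):
    """Flag background-mesh cells as active, inactive, or fringe.

    centroids : (ncell, dim) cell centroids of the background mesh
    hole_min, hole_max : corners of an axis-aligned box approximating the hole region
    adjacency : list of neighbor-cell lists, one per cell
    """
    inside = np.all((centroids > hole_min) & (centroids < hole_max), axis=1)
    status = np.where(inside, "inactive", "active").astype(object)
    for c, nbrs in enumerate(adjacency):
        # Active cells touching the hole become fringe cells (interpolation receivers)
        if status[c] == "active" and any(status[n] == "inactive" for n in nbrs):
            status[c] = "fringe"
    return status
```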

53 Overlapping Unstructured Meshes: simple 2D transient example.

54 Conclusions. Unstructured mesh technology is an enabling technology for computational aerodynamics – complex geometry handling facilitated; efficient steady-state solvers; highly effective parallelization. Accurate solutions possible for on-design conditions – mostly attached flow; grid resolution always an issue. Adaptive meshing potential not fully exploited – refinement criteria require more research. Future work to include more physics – turbulence, transition, unsteady flows, moving meshes.

