
1 Space-Filling Curves and Hierarchical Basis (JASS 2004)

2 Agenda
1. Motivation
2. Space-filling curves
   a. Definition
   b. Hilbert's space-filling curve
   c. Peano's space-filling curve
   d. Usage in numerical simulations
3. Hierarchical basis and generating systems
   a. Standard nodal basis for FEM analysis
   b. Hierarchical basis
   c. Generating systems

3 1. Motivation
The time needed to simulate a phenomenon depends both on performing the corresponding calculation steps and on transferring data to and from the processor(s).
- The latter can be optimised by arranging the data in a way that avoids cache misses and supports optimisation techniques such as pre-fetching. The standard organisation in matrices/arrays leads to bad cache behaviour. -> space-filling curves
- Use (basis) functions for the FEM analysis with which LSE solvers converge faster. -> hierarchical basis, generating systems (for higher-dimensional problems)

4 2.a Definition of space-filling curves
We can construct a mapping from a 1-D interval to an n-D domain for finite n, i.e. a mapping f: I = [0,1] -> Q = [0,1]^n. If the curve of this mapping passes through every point of the target space, we call it a "space-filling curve". We are interested in a continuous mapping; such a mapping cannot be bijective, only surjective.

5 2.a Geometric interpretation
First we partition the interval I into n subintervals and the square Q into n subsquares. Now one can easily establish a continuous mapping between these areas. This idea carries over to any finite-dimensional target space, and it is not necessary to partition into equally sized areas. Remark: when we move a certain distance along the unit interval, we can easily bound the possible position in the target space (a radius around the current point).

6 2.b Hilbert's space-filling curve
The German mathematician David Hilbert (1862-1943) was the first to give the so far only algebraically described space-filling curves a geometric interpretation. In the 2-dimensional case he split I into four subintervals and Q into four subsquares. This splitting is recursively repeated for the emerging subspaces, leading to the curve shown on the slide. An important property is the so-called full inclusion: whenever we further split a subinterval, the image of the finer subintervals stays inside the subsquare belonging to the former interval. Repeating the partitioning symmetrically (for all subspaces) again and again leads to 2^(2n) subintervals / subsquares in the 2-dimensional case (n: number of partitioning steps).
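The recursive construction can also be expressed as an index computation. The following sketch (not from the slides; the function name d2xy and the bit-manipulation formulation are one common way to phrase Hilbert's recursion) maps a curve index d to the cell coordinates (x, y) on an n x n grid, n a power of two:

```python
def d2xy(n, d):
    """Map curve index d (0 <= d < n*n) to cell (x, y) on an n x n grid,
    n a power of two, following Hilbert's recursive construction."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                 # rotate/flip the quadrant as in the recursion
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Walking d from 0 to n*n - 1 visits every cell exactly once, and consecutive cells are always edge neighbours, which is exactly the locality the later slides exploit.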

7 2.c Peano's space-filling curve
The Italian mathematician Giuseppe Peano (1858-1932) was the first to construct a space-filling curve (only algebraically) in 1890. Following his construction, we split the starting domain Q into 3^m subareas, where m = dim(Q). The Peano curve always runs from 0 to 1 in all dimensions. The order in which we run through the target space is up to us; only the first and the last subspace are mapped in a fixed way. Recursive partitioning leads to finer curves (compare with Hilbert's curve). Symmetrical partitioning with n partitioning steps leads to 3^(mn) subareas.
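The recursive partitioning can be sketched as a small generator (our own illustration, using one fixed traversal order of the 3 x 3 blocks): each refinement step replaces every subsquare by a reflected copy of the coarser curve, so that consecutive blocks join up.

```python
def peano(level):
    """Cell-visit order of the 2-D Peano curve after `level` partitioning steps.

    Returns the 9**level cells of a 3**level x 3**level grid in curve order,
    running from (0, 0) to (3**level - 1, 3**level - 1).
    """
    if level == 0:
        return [(0, 0)]
    prev = peano(level - 1)
    m = 3 ** (level - 1)
    pts = []
    for i in range(3):                       # x-blocks, left to right
        cols = range(3) if i % 2 == 0 else range(2, -1, -1)
        for j in cols:                       # y-blocks, serpentine
            for x, y in prev:
                # reflect the sub-curve so consecutive blocks join up:
                # flip x when the y-block index is odd,
                # flip y when the x-block index is odd
                xx = x if j % 2 == 0 else m - 1 - x
                yy = y if i % 2 == 0 else m - 1 - y
                pts.append((i * m + xx, j * m + yy))
    return pts
```

Note that, unlike Hilbert's curve, only reflections are needed (no rotations), which is one reason the Peano construction generalises cleanly to higher dimensions.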

8 2.c Peano curve in 3-D
We begin with the unit cube. The Peano curve begins in (0,0,0) and ends in (1,1,1). Now we split this cube into finer subcubes; we can freely select the dimension we want to begin with.

9 2.c Peano curve in 3-D (continued)
Next, the emerging subspaces are split in one of the two remaining dimensions. After splitting the subspaces in the last dimension we end up with 27 equally sized cubes in the symmetrical case (the intermediate step shown here consists of cuboids).

10 2.d Usage in numerical simulations
So far we have only spoken about the generation of space-filling curves. But how can this be of advantage for the data transfer in numerical simulations? To answer this question, consider a simple grid. Within each square we need the values at the corner points of that square. When we traverse the squares following e.g. Peano's space-filling curve, the data points form inner lines that are processed in either forward or backward order. This access scheme corresponds to a stack, on which we can only access the topmost element (forward: writing to the stack, backward: reading from the stack). In our example we need 2 stacks (blue and red in the figure).

11 2.d Example for stack usage
Let's examine what happens in the squares (a 4 x 4 grid of points, numbered row by row from 1 to 16). In the first square we need the data points 1, 2, 5, 6. Stacks after processing the first cell: stack 1: 5 6 / stack 2: 1 2. In the second square the points 2, 3, 6, 7 are needed. Stacks after processing the second cell: stack 1: 5 6 7 / stack 2: 1 2 3. Now we jump to the fifth square, where the points 6, 7, 10, 11 have to be accessed. Stacks after processing the fifth cell: stack 1: 5 6 / stack 2: 1 2 3 4 8 12 11 10.
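The essential property being exploited here is that a boustrophedon-type traversal reads the data it handed to the next row back in exactly reversed order, i.e. LIFO. A minimal sketch of this principle (our own simplification: a plain serpentine cell order instead of the full Peano recursion, with the roles of the two stacks swapped explicitly at each row change):

```python
def traverse(n):
    """Process an n x n cell grid in serpentine order using two stacks.

    Each cell pushes the edge it shares with the row above onto the write
    stack; when the row above is processed (in reversed column order), that
    edge surfaces on top of the read stack exactly when it is needed.
    """
    read_stack, write_stack = [], []
    for row in range(n):
        cols = range(n) if row % 2 == 0 else range(n - 1, -1, -1)
        for col in cols:
            if row > 0:
                # the shared edge is on top of the stack exactly when needed
                assert read_stack.pop() == ("edge", row, col)
            write_stack.append(("edge", row + 1, col))
        read_stack, write_stack = write_stack, read_stack  # swap roles per row
    return True
```

The assertion never fires: every datum is found on top of a stack at the moment it is needed, which is what makes the cache behaviour of such traversals so predictable.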

12 2.d Usage with adaptive grids
To make this idea usable for complex computations it has to work for adaptive grids, too. We start with the first square on the coarsest grid and recursively refine where needed (remark: full inclusion). This leads to a depth-first approach: we first process the cells on the deeper level before finishing the cells on the coarser level. -> top-down depth-first (when entering cell 6 we first process cell 6, then cells 7 to 15 are finished, and afterwards we return to the coarse level and proceed with cell 16).

13 2.d Stacks for adaptive grids
Points 1 and 2 are used to calculate cells on both the coarse and the fine level. When working with a hierarchical grid we only have the points 1a and 2a. If we used only two stacks, we would have to store the points 3 and 4 on the same stack as the points 1a and 2a. When finishing fine cell 9 (fine level), point 4 would then lie on top of the stack containing point 2a, so cell 3 (coarse level) could not access point 2a. We handle this problem by introducing two new stacks: we use 2 point stacks (0-D) and 2 line stacks (1-D). When working with a generating system we additionally have different point values on all levels (points 1a, 1b and 2a, 2b).

14 2.d Stacks in adaptive grids & generating systems
As already mentioned, with generating systems we have point values on different levels, and the number of stacks used so far is insufficient. One can show that the required number of stacks can be computed as follows. We number the points according to their coordinates in the Cartesian coordinate system (in 2-D: 00, 01, 10, 11). A connecting line, e.g. the one between the points 00 and 01, is numbered element-wise: if the corresponding point coordinates are equal, take their value; otherwise write a 2 (in 2-D this gives the lines 02, 12, 20, 21 and the plane 22). This leads to the following formula for the number of stacks: 3^dim.
dim = 2 -> 9 stacks (4 point stacks, 4 line stacks, 1 plane stack = in/output)
dim = 3 -> 27 stacks (8 point stacks, 12 line stacks, 6 plane stacks, 1 volume stack = in/output)
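The counts above follow from classifying all 3^dim labels over {0, 1, 2} by their number of 2-entries: a label with k twos names a k-dimensional face of the cell and hence one stack. A small sketch of this counting argument:

```python
from itertools import product

def stack_counts(dim):
    """Count the labels in {0,1,2}**dim by their number of 2-entries.

    k twos -> a k-dimensional face (point, line, plane, ...) and hence
    one stack; the total is always 3**dim.
    """
    counts = {}
    for label in product((0, 1, 2), repeat=dim):
        k = label.count(2)          # number of 'free' coordinates
        counts[k] = counts.get(k, 0) + 1
    return counts
```

For example, stack_counts(2) gives 4 point stacks, 4 line stacks and 1 plane stack, and stack_counts(3) gives the 8 / 12 / 6 / 1 split stated on the slide.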

15 2.d Conclusion
Peano curves allow efficient stack usage for n-dimensional problems. (Hilbert curves show this advantage only in the 2-dimensional case, as the higher-dimensional Hilbert curves do not show exploitable behaviour.) The number of needed stacks is independent of the problem size. -> computation speed-up due to efficient cache usage (the CPU's pre-fetching unit can efficiently load the needed data when it is organised in a simple way). The only drawback is the recursive splitting into 3^n subintervals when using Peano curves, which leads to more points, but we gain more than we lose here.

16 3.a Standard nodal basis for FEM analysis
FEM analysis uses a set of linearly independent piecewise linear functions to describe the solution. We can write the function u(x) as u(x) = Σ_i u_i φ_i(x), where u_i is the function value at position i and φ_i is the hat function which has the value one at position i, zero at the neighbouring positions (i-1, i+1), and is linear in between. A set of linearly independent hat functions forms a basis of the solution space (boundaries not treated).
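As a sketch (our own illustration, assuming n inner nodes x_i = i/(n+1) on [0, 1] with zero boundary values), a hat function and the resulting piecewise linear interpolant look like this:

```python
def hat(i, n, x):
    """Hat function for node x_i = i/(n+1): 1 at x_i, 0 at x_(i-1) and x_(i+1),
    linear in between, zero outside."""
    h = 1.0 / (n + 1)
    return max(0.0, 1.0 - abs(x / h - i))

def u(coeffs, x):
    """Piecewise linear function with nodal values coeffs at the n inner nodes."""
    n = len(coeffs)
    return sum(c * hat(i + 1, n, x) for i, c in enumerate(coeffs))
```

At a node x_i all other hat functions vanish, so the sum reduces to the nodal value u_i; between nodes the representation interpolates linearly.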

17 3.a Resulting LSE (Linear System of Equations)
We now take a short look at the one-dimensional Poisson equation using the standard discretisation approach. The condition of the problem depends on the ratio λ_max / λ_min. λ_max goes towards the row sum (= 2) while λ_min tends towards zero, therefore the resulting condition is rather poor. This causes slow convergence.
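A small numeric check of this claim (our own sketch, assuming the row-scaled 1-D Poisson matrix tridiag(-1/2, 1, -1/2), whose eigenvalues have the known closed form 1 - cos(k π / (n+1)) and whose λ_max tends to 2 as stated above):

```python
import math

def poisson_eigs(n):
    """Closed-form eigenvalues of tridiag(-1/2, 1, -1/2) with n inner nodes."""
    return [1 - math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]

def condition(n):
    """Condition number lambda_max / lambda_min of the 1-D Poisson matrix."""
    w = poisson_eigs(n)
    return max(w) / min(w)     # lambda_max -> 2, lambda_min -> 0
```

Doubling the resolution roughly quadruples the condition number (it grows like n^2), which is the size-dependence the next slide complains about.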

18 3.a Drawbacks of the standard approach
Slow convergence rate, as already shown, due to bad eigenvalues; moreover, the condition depends on the problem size. In addition, when we have to examine a certain area in more detail, i.e. when we locally need a higher resolution, we cannot reuse the already performed work: we have to calculate new hat functions. -> Use a hierarchical basis to compensate both effects.

19 3.b Hierarchical basis
A possibility to deal with the mentioned drawbacks is a hierarchical basis. Unlike in the standard approach, we now keep all the hat functions on the coarser levels when refining; on finer levels, hat functions are only created at points that do not already carry functions on coarser levels. These hat functions still form a basis: one can uniquely describe the function u(x) by summing up the hat functions of the different levels (standard basis vs. hierarchical basis on the slide).

20 3.b Interpretation of the hierarchical basis
The slide shows a simple interpolation of a parabola using the standard basis (left) and the hierarchical basis (right). The coefficients of the standard approach are the function values at the corresponding positions (coefficients 3/4, 1, 3/4); this is also the reason why one cannot locally refine and reuse older values. For the hierarchical basis only the first (coarsest) coefficient equals the corresponding function value (= 1 in the example); the coefficients on the finer levels describe the difference between the real function value and the value described by all previous hat functions (coefficients 1/4, 1, 1/4). This supports reuse when locally refining.

21 3.b Representation in the hierarchical basis
Recall the representation of the function u(x) in the nodal basis: u(x) = Σ_i u_i φ_i(x). The same function u(x) can be represented in a hierarchical basis with coefficients v_i. Both representations describe the same function, and we can define a mapping transforming the nodal representation into the hierarchical one.

22 3.b Transformation into the hierarchical basis
When transforming the coefficients of the standard description into the hierarchical basis we use the formula v_(2k+1) = u_(2k+1) - (u_(2k) + u_(2k+2)) / 2: the hierarchical difference between the fine-level value and the interpolation from the coarser level is the coefficient for the hierarchical basis. So we find the entries (-1/2, 1, -1/2) in each row of the transformation matrix H. (We have to apply this formula recursively to calculate the coefficients on all levels.)
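This level-wise transformation can be sketched as follows (our own implementation, assuming n = 2^L - 1 inner nodes on [0, 1] with zero boundary values): every node's hierarchical coefficient is its nodal value minus the average of its two hierarchical parents.

```python
def hierarchize(u):
    """Nodal coefficients -> hierarchical coefficients, n = 2**L - 1 inner nodes.

    A node whose 1-based index contains j factors of two lives on level L - j;
    its hierarchical parents sit at distance 2**(L - level), and missing
    parents are the (zero-valued) boundary.
    """
    n = len(u)
    v = [0.0] * n
    step = 2
    while step <= n + 1:
        d = step // 2                      # distance to the parents
        for i in range(d - 1, n, step):    # 0-based indices of this level's nodes
            left = u[i - d] if i - d >= 0 else 0.0
            right = u[i + d] if i + d < n else 0.0
            v[i] = u[i] - 0.5 * (left + right)
        step *= 2
    return v
```

Sampling the parabola through (0,0), (1,1), (2,0) at 7 equally spaced inner points, the hierarchical coefficients come out as 1 on the coarsest level, 1/4 on the middle level and 1/16 on the finest, matching the pattern of the earlier parabola slide.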

23 3.b Transformation into the hierarchical basis (cont.)
As an example, take the parabola defined by the points (0,0), (1,1) and (2,0), using 7 equally distributed inner points. The transformation matrix H maps the coefficients for the standard basis (u_1, ..., u_7)' to the coefficients for the hierarchical basis (v_1, ..., v_7)'.

24 3.b Usage in numerical schemes
We want to solve the linear system A u = b more effectively. To do so we replace the standard representation of u by its hierarchical one, u = T v. When we then multiply the equation from the left by T', we end up with a new system (T' A T) v = T' b for the hierarchical coefficients v, and this will be a very nice one. The transformation back into u is then easily performed.

25 3.b Transforming v(x) into u(x)
Again we consider the transformation for our parabola example: the matrix T maps the coefficients for the hierarchical basis (v_1, ..., v_7)' back to the coefficients for the standard basis (u_1, ..., u_7)'.

26 3.b Calculating T'AT
Again we consider the transformation for our parabola example. The resulting matrix T'AT is diagonal. After multiplying the old right-hand side of the equation by T' we can directly read off the result with respect to the hierarchical system v. Finally we only have to transform this back to the standard basis u.
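The 1-D diagonality can be checked numerically. The sketch below (our own construction) builds T column by column from the hierarchical hat functions evaluated at the nodal points, takes the 1-D Poisson stiffness matrix A = tridiag(-1, 2, -1), and verifies that the off-diagonal entries of T'AT vanish:

```python
def hat_val(x, centre, h):
    """Hat function with support [centre - h, centre + h] and peak 1 at centre."""
    return max(0.0, 1.0 - abs(x - centre) / h)

def build_T(L):
    """T for 2**L - 1 inner nodes: column i holds the hierarchical hat
    belonging to node i, evaluated at all nodal points."""
    n = 2 ** L - 1
    cols = []
    for i in range(1, n + 1):
        j, level = i, L
        while j % 2 == 0:            # each factor of 2 moves the node up a level
            j //= 2
            level -= 1
        h = 2.0 ** (-level)          # support radius of that level's hat
        cols.append([hat_val((r + 1) / (n + 1), i / (n + 1), h) for r in range(n)])
    return [[cols[c][r] for c in range(n)] for r in range(n)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def t_at(L):
    """Compute T' A T for the 1-D Poisson matrix A = tridiag(-1, 2, -1)."""
    n = 2 ** L - 1
    A = [[2.0 if r == c else -1.0 if abs(r - c) == 1 else 0.0
          for c in range(n)] for r in range(n)]
    T = build_T(L)
    Tt = [list(row) for row in zip(*T)]
    return mat_mul(mat_mul(Tt, A), T)
```

The underlying reason is that in 1-D the hierarchical hats are orthogonal in the energy inner product: a coarse hat is linear across the support of any finer hat, whose derivative integrates to zero there.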

27 3.b Hierarchical basis in 2-D
We can transfer this idea to higher-dimensional problems. Higher-dimensional hat functions are the product of one-dimensional hat functions: define a hat function on the coarsest level, refine in one direction, then refine in the other direction. There are different possibilities for the refinement order: first refine one dimension completely before working on the other, or always refine both dimensions before going to a finer level.

28 3.b Matrix T'AT for the n-dimensional case
For the one-dimensional case the matrix T'AT is diagonal. For higher-dimensional problems:
- T'AT is no longer diagonal
- T'AT is still "nicer" than the matrix A of the nodal basis: better condition -> faster convergence

29 3.b Advantages of the hierarchical basis
It is possible to locally refine the system and reuse older values, and convergence is faster than with the standard approach (optimal for 1-dimensional problems). (We need more sophisticated algorithms to perform the necessary matrix multiplications efficiently.) Remark: although a hierarchical basis is always much more effective than the standard representation, it can be beaten when dealing with higher-dimensional problems. -> This leads us to generating systems.

30 3.c Generating system
So far we restricted ourselves to either nodal or hierarchical hat functions forming a basis to describe the function u(x). We can also build up u(x) from the necessary linearly independent functions plus as many additional, linearly dependent functions as we like. This representation is no longer unique, but we do not need uniqueness; we are only interested in describing the function u(x). Example: we obtain the same vector a either from the unique representation in a basis or from one of various possible representations in a generating system.
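A tiny sketch of this non-uniqueness (our own example, not from the slide): extend the basis e1, e2 of R^2 by the redundant function e1 + e2; different coefficient vectors then reproduce the same vector a.

```python
def combine(coeffs):
    """Evaluate c1*e1 + c2*e2 + c3*(e1 + e2) in R^2."""
    c1, c2, c3 = coeffs
    return (c1 + c3, c2 + c3)

# two different representations of the same vector a = (2, 3):
a_basis_only = combine((2.0, 3.0, 0.0))   # unique part only
a_redundant = combine((1.0, 2.0, 1.0))    # using the redundant function
```

Any multiple of the null-space direction (1, 1, -1) can be added to the coefficients without changing the represented vector, which is exactly the freedom the iterative solver on the later slide sees.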

31 3.c Generating system (cont.)
The plain hierarchical basis only allows refinement at nodes that do not already carry hat functions on coarser levels. A generating system additionally refines at the already described nodes (red graphs on the slide). It can also be thought of as the sum of all nodal hat functions on all levels of observation (hierarchical basis vs. generating system).

32 3.c Generating system and condition
As already mentioned, the condition of a matrix depends on the ratio λ_max / λ_min; to be precise, λ_min here means the smallest non-zero eigenvalue. When using a generating system we introduce many linearly dependent rows in the matrix and therefore get a corresponding number of eigenvalues equal to zero. The gain lies in the smallest non-zero eigenvalue: it no longer depends on the problem size but is a constant (independent of the dimension). The solution with respect to the generating system is not unique: depending on the start value of the iterative solver we get a different solution. But when transforming these solutions back into a unique representation we end up with the same value, i.e. the solutions computed with the generating-system ansatz from different initial values can only vary within the composition of the linearly dependent functions.

