JASS 2004: Space-Filling Curves and Hierarchical Bases



Agenda
1. Motivation
2. Space-filling curves
   a. Definition
   b. Hilbert's space-filling curve
   c. Peano's space-filling curve
   d. Usage in numerical simulations
3. Hierarchical basis and generating systems
   a. Standard nodal basis for FEM analysis
   b. Hierarchical basis
   c. Generating systems

1. Motivation
The time needed for simulating phenomena depends both on performing the corresponding calculation steps and on transferring data to and from the processor(s).
– The latter can be optimised by arranging the data in an effective way, avoiding cache misses and supporting optimisation techniques such as pre-fetching. Standard organisation in matrices / arrays leads to bad cache behaviour. → Space-filling curves
– Use (basis) functions for FEM analysis with which LSE solvers show faster convergence. → Hierarchical basis → Generating systems (for higher-dimensional problems)

2.a Definition of Space-Filling Curves
We can construct a mapping from a 1-D interval onto an n-D domain, where n is finite, i.e. we can construct mappings of the form f : I = [0,1] → Q ⊂ ℝ^n. If the curve of this mapping passes through every point of the target space, we call it a "space-filling curve". We are interested in a continuous mapping. Such a mapping cannot be bijective, only surjective.

2.a Geometric Interpretation
First we partition the interval I into n subintervals and the square Q into n sub-squares. Now one can easily establish a continuous mapping between these areas. This idea holds for any finite-dimensional target space. It is not necessary to partition into equally sized areas!
Remark: When moving a certain distance along the unit interval we can easily estimate the possible position in the target space (within a certain radius).
[Figure: unit interval with marks at 1/9, 2/9, ... mapped onto sub-squares of the unit square (0,0)–(1,1)]

2.b Hilbert's Space-Filling Curve
The German mathematician David Hilbert (1862–1943) was the first to give the space-filling curves, which until then had only been described algebraically, a geometric interpretation. In the 2-dimensional case he split I into four subintervals and Q into four sub-squares. This splitting is recursively repeated for the emerging subspaces, leading to the curve shown on the slide. An important fact is the so-called full inclusion: whenever we further split a subinterval, the mapping to the corresponding finer sub-squares stays within the former sub-square. Repeating the partitioning symmetrically (for all subspaces) again and again leads to 2^(2n) subintervals / sub-squares in the case of a 2-dimensional target space (n: number of partitioning steps).
[Figure: first two refinement levels of the Hilbert curve; interval marks at 1/4, ..., 4/4 and 1/16, ..., 16/16]
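
As a sketch of how such a mapping is realised in code: the following Python function (our own addition, not from the talk) uses the well-known bit-manipulation scheme to convert a distance d along the Hilbert curve into grid coordinates after a given number of partitioning steps.

```python
def hilbert_d2xy(order, d):
    """Map a distance d along the Hilbert curve onto (x, y) coordinates
    of a 2^order x 2^order grid (the order-th partitioning step)."""
    x = y = 0
    s = 1            # current sub-square size
    t = d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/flip the quadrant so the sub-curves connect
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# the four sub-squares of the first partitioning step, in curve order:
print([hilbert_d2xy(1, d) for d in range(4)])  # [(0, 0), (0, 1), (1, 1), (1, 0)]
```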

2.c Peano's Space-Filling Curve
The Italian mathematician Giuseppe Peano (1858–1932) was the first to construct (only algebraically) a space-filling curve, in 1890. Following him, we split the starting domain Q into 3^m sub-areas. The Peano curve always runs from 0 to 1 in all dimensions. The order in which we run through the target space is up to us; two different orders are marked with blue / red numbers on the slide. Only the first and the last subspaces are mapped in a fixed way. Recursive partitioning leads to finer curves (compare with Hilbert's curve). Symmetrical partitioning leads to 3^(mn) sub-areas (m = dim(Q), n: number of partitioning steps).
[Figure: unit interval with marks at 1/9, 2/9, ..., 8/9 mapped onto a 3×3 partition of the unit square]
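
The traversal order can also be generated recursively. The sketch below (our own Python, implementing one possible serpentine variant of the 2-D Peano order, not necessarily the variant drawn on the slide) builds the cell sequence of the n-th partitioning step; the reflection rule is what keeps consecutive cells adjacent.

```python
def peano(level):
    """Return the (x, y) cells of a 3^level x 3^level grid in Peano order;
    the curve runs from (0, 0) to (3^level - 1, 3^level - 1)."""
    if level == 0:
        return [(0, 0)]
    sub = peano(level - 1)      # pattern of the next finer level
    size = 3 ** (level - 1)     # side length of one sub-block
    # serpentine, column-major order of the nine sub-blocks
    blocks = [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1),
              (1, 0), (2, 0), (2, 1), (2, 2)]
    out = []
    for bx, by in blocks:
        for x, y in sub:
            xx = size - 1 - x if by % 2 else x  # reflect x in odd rows
            yy = size - 1 - y if bx % 2 else y  # reflect y in odd columns
            out.append((bx * size + xx, by * size + yy))
    return out

print(peano(1))  # the nine sub-squares of the first partitioning step
```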

2.c Peano Curve in 3-D
We begin with the unit cube. The Peano curve begins in (0,0,0) and ends in (1,1,1). Now we begin to split this cube into finer sub-cubes. We can select the dimension we want to begin with.
[Figure: unit cube with the first split; marked points (0,0,0), (1,1,1), (2/3,0,0), (1/3,1,1)]

2.c Peano Curve in 3-D (continued)
Now the emerging subspaces are split in one of the two remaining dimensions. After splitting the subspaces in the last dimension we end up with 27 equally sized cubes (symmetrical case; here: 7 cuboids).
[Figure: successive splits of the cube; marked points such as (2/3,0,0), (1,1/3,1), (1,1/3,1/3), (2/3,0,2/3), (1/3,1,1)]

2.d Usage in Numerical Simulations
So far we have only spoken about the generation of space-filling curves. But how can this be of advantage for the data transfer in numerical simulations? To answer this question, let's take a short look at a simple grid. Within a square we need the values at the corner points of that square. When we go through this grid following e.g. Peano's space-filling curve, we can build up inner lines of data points which are processed in a forward or backward order. This scheme corresponds to a stack, on which we can only access the topmost element (forward: writing to the stack, backward: reading from the stack). In our example we would need 2 stacks (blue and red on the slide).
[Figure: unit square grid (0,0)–(1,1) with the two stacks marked blue and red]

2.d Example for Stack Usage
Let's examine what happens in the first squares (grid points numbered row-wise, cells traversed along the curve). In the first square we need the data points 1, 2, 5, 6; the resulting stack contents after processing the first cell are shown on the slide. In the second square the points 2, 3, 6, 7 are needed; again the stack contents after processing the second cell are shown on the slide. Now we jump to the 5th square; points 6, 7, 10, 11 have to be accessed, and the slide shows the stacks after processing the fifth cell.
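
A minimal toy sketch (our own, using a plain row-wise sweep instead of the full Peano scheme) of the stack principle: points written while sweeping one row of cells forwards come off the stack in exactly the order in which the backward sweep over the next row needs them.

```python
# Points on the edge shared by two rows of cells, indexed 0..4 (hypothetical data).
shared = [f"p{i}" for i in range(5)]

stack = []
for cell in range(4):              # forward sweep over the lower row of cells
    if cell == 0:
        stack.append(shared[0])    # leftmost shared point, pushed once
    stack.append(shared[cell + 1]) # each cell pushes its new right-hand point

left = None
for cell in reversed(range(4)):    # backward sweep over the upper row of cells
    # each cell needs its right and left shared point; the right one is the
    # previous cell's left point, so only one new pop per cell is required
    right = stack.pop() if cell == 3 else left
    left = stack.pop()
    print(cell, left, right)       # pops p4, p3, ..., p0 in exactly this order
```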

2.d Usage with Adaptive Grids
To guarantee that this idea can be used for complex computations, it has to hold for adaptively refined grids, too. We start with the first square on the coarsest grid and recursively refine where needed (remark: full inclusion). This leads to a top-down approach where we first process the cells on the deeper level before finishing the cells on the coarser level. → top-down depth-first (When entering cell 6 we first process cell 6, then cells 7 to 15 are finished, and afterwards we return to the coarse level and proceed with cell 16.) A traversal sketch follows below.
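
A minimal sketch of this top-down depth-first traversal (our own Python; the Cell class and its child ordering are hypothetical):

```python
class Cell:
    """Hypothetical grid cell; 'children' holds the sub-cells in curve order,
    and is empty if the cell is not refined."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def traverse(cell):
    if cell.children:                 # refined cell: descend first,
        for child in cell.children:   # visiting children in curve order,
            traverse(child)           # before returning to the coarser level
    else:
        print("process", cell.name)   # leaf cell: do the actual work

# one coarse cell refined once more (Peano refines into 3^m sub-cells)
root = Cell("root", [Cell("a"),
                     Cell("b", [Cell("b1"), Cell("b2"), Cell("b3")]),
                     Cell("c")])
traverse(root)
```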

2.d Stacks for Adaptive Grids
Points 1 and 2 are used to calculate cells on both the coarse and the fine level. When working with a hierarchical grid we additionally have the points 1a and 2a. If we used only two stacks, we would have to store the points 3 and 4 on the same stack as the points 1a and 2a. When finishing fine cell 9 (fine level), point 4 would lie on top of the stack containing point 2a, so cell 3 (coarse level) couldn't access point 2a. We handle this problem by introducing two new stacks: we use 2 point stacks (0-D) and 2 line stacks (1-D). When working with a generating system we have different point values on all levels (points 1a, 1b and 2a, 2b).

2.d Stacks in Adaptive Grids & Generating Systems
As already mentioned, with generating systems we have point values on different levels, and the number of stacks used so far is insufficient to deal with them. One can show that the required number of stacks can be calculated as follows. We number the points according to their position in the Cartesian coordinate system (e.g. 00, 01, 10, 11 in 2-D). The connecting lines are numbered element-wise, e.g. the line 02 between the points 00 and 01: if the corresponding point coordinates are equal, take their value; else write a 2. This leads to the following formula for the number of stacks: 3^dim.
dim = 2 → 9 stacks (4 point stacks, 4 line stacks, 1 plane stack = in/output)
dim = 3 → 27 stacks (8 point stacks, 12 line stacks, 6 plane stacks, 1 volume stack = in/output)
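
The count 3^dim can be broken down by face type: each stack label is a vector over {0, 1, 2}, and the number of 2-entries equals the dimension of the face. A short sketch (our own) reproducing the numbers above:

```python
from math import comb

def stack_counts(dim):
    """Stacks per face dimension k: choose which k coordinates carry a '2'
    (comb(dim, k)) times the 0/1 choices for the remaining coordinates."""
    return {k: comb(dim, k) * 2 ** (dim - k) for k in range(dim + 1)}

for d in (2, 3):
    counts = stack_counts(d)
    print(d, counts, sum(counts.values()) == 3 ** d)
# 2 {0: 4, 1: 4, 2: 1} True        -> 4 point, 4 line, 1 plane stack
# 3 {0: 8, 1: 12, 2: 6, 3: 1} True -> 8 point, 12 line, 6 plane, 1 volume stack
```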

2.d Conclusion
Peano curves allow efficient stack usage for n-dimensional problems. (Hilbert curves show this advantage only in the 2-dimensional case, as the higher-dimensional curves don't show exploitable behaviour.) The number of needed stacks is independent of the problem size. → computation speed-up due to efficient cache usage (the CPU's pre-fetching unit can efficiently load the needed data when it is organised in a simple way). The only drawback of Peano curves is the recursive splitting into 3^n subintervals, which leads to more points, but we gain more than we lose here.

3.a Standard Nodal Basis for FEM Analysis
FEM analysis uses a set of independent piecewise linear functions to describe the solution. We can write the function u(x) as u(x) = Σ_i u_i φ_i(x), where
u_i: function value at position i
φ_i: hat function, which has the value one at position i, zero at the neighbouring positions (i−1, i+1), and is linear in between.
A set of linearly independent hat functions forms a basis of the solution space (boundaries not treated).
[Figure: the i-th hat function on the nodes x_(i−1), x_i, x_(i+1) of the unit interval]
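
A small sketch (our own Python; a uniform grid on (0, 1) with zero boundary values is assumed) of evaluating such a nodal representation:

```python
import numpy as np

def hat(i, n, x):
    """i-th hat function on a uniform grid with n inner points on (0, 1):
    value 1 at x_i = i/(n+1), 0 at x_(i-1) and x_(i+1), linear in between."""
    h = 1.0 / (n + 1)
    return np.maximum(0.0, 1.0 - np.abs(x / h - i))

def u_nodal(coeffs, x):
    """u(x) = sum_i u_i * phi_i(x); the coefficients are the nodal values."""
    n = len(coeffs)
    return sum(u_i * hat(i, n, x) for i, u_i in enumerate(coeffs, start=1))

x = np.linspace(0.0, 1.0, 9)
print(u_nodal([0.75, 1.0, 0.75], x))  # piecewise linear interpolant of a parabola
```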

3.a Resulting LSE (Linear System of Equations)
We now take a short look at the one-dimensional Poisson equation using the standard discretization approach. The condition of the problem depends on the ratio λ_max / λ_min. λ_max goes towards the row sum (= 2) and λ_min tends towards zero, therefore the resulting condition is rather poor. This causes slow convergence.
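
The system matrix itself did not survive in the transcript; assuming the usual 1-D Poisson stencil scaled to a unit diagonal, tridiag(−1/2, 1, −1/2), a quick numerical check (our own) reproduces the claim: λ_max approaches 2 while λ_min vanishes like h², so the condition grows with the problem size.

```python
import numpy as np

for n in (7, 15, 31, 63):  # number of inner grid points
    # 1-D Poisson stencil scaled to a unit diagonal: tridiag(-1/2, 1, -1/2)
    A = np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)
    lam = np.linalg.eigvalsh(A)
    print(n, lam.max(), lam.min(), lam.max() / lam.min())
# lambda_max -> 2, lambda_min -> 0 like h^2: the condition deteriorates
```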

3.a Drawbacks of the Standard Approach
Slow convergence rate, as already shown, due to bad eigenvalues; moreover, the condition depends on the problem size! When we have to examine a certain area in more detail, i.e. when we locally need a higher resolution, we cannot reuse the already performed work: we have to calculate new hat functions. → Use a hierarchical basis to compensate for these two effects.

3.b Hierarchical Basis
One possibility to deal with the mentioned drawbacks is to use a hierarchical basis. Unlike in the standard approach, we now keep all the hat functions of the coarser levels when refining. On finer levels, hat functions are only created at points which don't already carry functions on coarser levels. These hat functions still build up a basis. One can uniquely describe the function u(x) by summing up the different hat functions on the different levels.
[Figure: standard basis vs. hierarchical basis]

3.b Interpretation of the Hierarchical Basis
The picture on the slide shows a simple interpolation of a parabola using the standard basis (left side, coefficients ¾, 1, ¾) and the hierarchical basis (right side, coefficients ¼, 1, ¼). The coefficients needed for the normal approach are the function values at the corresponding positions. (This is also the reason why one cannot locally refine and reuse older values.) For the hierarchical basis, only the first coefficient equals the corresponding function value (= 1 in the example). The coefficients for the finer levels just describe the difference between the real function value and the value described by all previous hat functions. This supports reuse when locally refining.

3.b Representation in the Hierarchical Basis
Let's recall the representation of the function u(x) in the nodal basis: u(x) = Σ_i u_i φ_i(x). The same function u(x) can be represented in the hierarchical basis with coefficients v_i. Both representations describe the same function, and we can define a mapping that transforms the nodal representation into the hierarchical one.

3.b Transformation into the Hierarchical Basis
When transforming the coefficients of the standard description into the hierarchical basis we use the following formula: the hierarchical difference between the fine-level value and the coarse-level interpolant becomes the coefficient for the hierarchical basis,

v_(2k+1) = u_(2k+1) − (u_(2k) + u_(2k+2)) / 2.

So we find the entries (−1/2, 1, −1/2) in each row of the transformation matrix H. (We have to apply this formula recursively to calculate the coefficients on all levels.)

3.b Transformation into the Hierarchical Basis (cont.)
As an example, let's examine the parabola defined by the points (0,0), (1,1) and (2,0), using 7 equally distributed inner points. The transformation matrix H maps the coefficients of the standard basis, (u_1, …, u_7)', to the coefficients of the hierarchical basis, (v_1, …, v_7)'.
[Figure: nodal values u_1, …, u_7 of the parabola and their hierarchical counterparts v_1, …, v_7]
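
A runnable sketch (our own Python; the matrix H is applied level by level rather than set up explicitly) of this transformation, checked against the parabola example:

```python
import numpy as np

def hierarchize(u):
    """Nodal coefficients u_1..u_n (n = 2^L - 1, zero boundary values) ->
    hierarchical coefficients via v_(2k+1) = u_(2k+1) - (u_(2k) + u_(2k+2))/2,
    applied on every level (the neighbour distance doubles per level)."""
    u = np.concatenate(([0.0], np.asarray(u, float), [0.0]))
    n = len(u) - 2
    v = u.copy()
    step = 2
    while step <= n + 1:
        half = step // 2
        for i in range(half, n + 1, step):  # nodes that are new on this level
            v[i] = u[i] - 0.5 * (u[i - half] + u[i + half])
        step *= 2
    return v[1:-1]

# parabola through (0,0), (1,1), (2,0), sampled at the 7 inner points x = k/4:
u = np.array([k * (8 - k) / 16 for k in range(1, 8)])
print(hierarchize(u))  # [1/16, 1/4, 1/16, 1, 1/16, 1/4, 1/16]
```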

3.b Usage in Numerical Schemes
We want to solve the system Au = b more effectively. To do so, we replace the standard representation of u by its hierarchical one, u = Tv. When we then multiply the equation from the left by T', we end up with a new equation for v, T'AT v = T'b (and this will be a very nice one). The transformation back into u(x) is then easily performed.

3.b Transforming v(x) into u(x)
Again we consider the transformation for our parabola example: the transformation matrix T maps the coefficients of the hierarchical basis, (v_1, …, v_7)', back to the coefficients of the standard basis, (u_1, …, u_7)'.
[Figure: hierarchical coefficients v_1, …, v_7 and the resulting nodal values u_1, …, u_7]

3.b Calculating T'AT
Again we consider the transformation for our parabola example. The resulting matrix T'AT is diagonal. After multiplying the old right-hand side of the equation by T' we can directly read off the result with respect to the hierarchical system v(x). Finally, we only have to transform this back to the standard basis u(x).
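
A numerical check (our own sketch; A is the scaled Poisson matrix assumed earlier, T is built column-wise from the inverse of the hierarchize sketch above) that T'AT is indeed diagonal in the 1-D case:

```python
import numpy as np

def dehierarchize(v):
    """Inverse transformation: u_(2k+1) = v_(2k+1) + (u_(2k) + u_(2k+2))/2,
    applied from the coarsest level downwards."""
    v = np.concatenate(([0.0], np.asarray(v, float), [0.0]))
    n = len(v) - 2
    u = v.copy()
    step = n + 1
    while step >= 2:
        half = step // 2
        for i in range(half, n + 1, step):
            u[i] = v[i] + 0.5 * (u[i - half] + u[i + half])
        step //= 2
    return u[1:-1]

n = 7
A = np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)
# column j of T = nodal values of the j-th hierarchical basis function
T = np.column_stack([dehierarchize(np.eye(n)[j]) for j in range(n)])
print(np.round(T.T @ A @ T, 12))  # diagonal: hierarchical hats are A-orthogonal in 1-D
```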

3.b Hierarchical Basis in 2-D
We can transfer this idea to higher-dimensional problems. Higher-dimensional hat functions are the product of one-dimensional hat functions. One can split the dimensions differently; possible refinement orders are:
– first refine one dimension completely before working on the other
– always refine both dimensions before going to a finer level
[Figure: define the hat function on the coarsest level, refine in one direction, then refine in the other direction]

3.b The Matrix T'AT in the n-Dimensional Case
For the one-dimensional case the matrix T'AT is diagonal. For higher-dimensional problems:
– T'AT is no longer diagonal
– T'AT is still "nicer" than A in the nodal basis: better condition → faster convergence

3.b Advantages of the Hierarchical Basis
Possibility to locally refine the system and reuse older values. Faster convergence than with the standard approach (optimal for 1-dimensional problems). (We need more sophisticated algorithms to perform the necessary matrix multiplications efficiently.) Remark: although hierarchical bases are always much more effective than the standard representation, they can be beaten when dealing with higher-dimensional problems. → This leads us to generating systems.

3.c Generating Systems
So far we restricted ourselves to either nodal or hierarchical hat functions forming a basis to describe the function u(x). We can also build up u(x) using the necessary linearly independent functions and, additionally, as many linearly dependent functions as we like. This representation is no longer unique, but we don't need uniqueness; we are only interested in describing the function u(x). Example: we get the same vector a either by using the unique representation in a basis or by one of various possible representations in a generating system.
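
The concrete vectors of the slide's example did not survive in the transcript; a hypothetical stand-in (our own) illustrating the non-uniqueness:

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
g = e1 + e2                    # redundant generator: {e1, e2, g} is dependent

a = 2 * e1 + 1 * e2            # unique representation in the basis {e1, e2}
alt = 1 * e1 + 0 * e2 + 1 * g  # one of infinitely many in the generating system
print(np.allclose(a, alt))     # True: both describe the same vector a
```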

3.c Generating Systems (cont.)
The normal hierarchical basis only allows refinement at nodes which aren't already covered by hat functions on coarser levels. A generating system additionally refines at already described nodes (the red graphs on the slide). It can also be thought of as the sum of all nodal hat functions on all levels of observation.
[Figure: hierarchical basis vs. generating system]

3.c Generating Systems (cont.)
As already mentioned, the condition of a matrix depends on the ratio λ_max / λ_min; to be precise, λ_min means the smallest non-zero eigenvalue. When using a generating system we introduce many linearly dependent rows in the matrix and therefore get a corresponding number of eigenvalues equal to zero. The gain lies in the smallest non-zero eigenvalue: it is no longer dependent on the problem size but a constant (independent of the dimension). The solution with respect to the generating system is not unique; depending on the start value for the iterative solver we get a different solution. But when transforming these solutions back into a unique representation we end up with the same value, i.e. the solutions calculated from the generating-system ansatz for different initial values can only vary within the span of the linearly dependent functions.