
1 The application of Conformal Computing techniques to problems in computational physics: The Fast Fourier Transform
James E. Raynolds, College of Nanoscale Science and Engineering
Lenore Mullin, College of Computing and Information
University at Albany, State University of New York, Albany, NY 12309

2 Premise
Most software is inefficient and error prone.
Large systems require years to develop and are difficult to maintain.
Hundreds of millions of dollars are lost annually (NIST survey).
There is a critical need for software and hardware efficiency and reliability (e.g., embedded systems).
The need for a new discipline is recognized: NSF Science of Design.

3 Solution: Math-Based Design Methodology
The current problems were foreseen decades ago.
Solution: "A Mathematics of Arrays" and an index calculus (Mullin 1988).
All array-based computations are handled with an array algebra and index calculus to produce the most efficient implementation.
This gives the ability to avoid array-valued temporaries.

4 Matrix Example
In Fortran 90, an array expression is evaluated pairwise: first one temporary is computed, then a second temporary, and then the last operation produces the result. (The slide's concrete expressions appeared as figures.)
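As a hedged illustration (the slide's actual Fortran expressions are not captured in the transcript), here is a NumPy analog assuming a stand-in expression A = B + C + D. Like Fortran 90, NumPy evaluates the expression pairwise and materializes each intermediate array:

```python
import numpy as np

n = 1024
B = np.random.rand(n, n)
C = np.random.rand(n, n)
D = np.random.rand(n, n)

# Evaluated pairwise, just as the slide describes for Fortran 90:
T1 = B + C    # first temporary computed: a full n-by-n intermediate
T2 = T1 + D   # second temporary
A = T2        # last operation: assign the result
```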

5 Matrix Example (cont)
Intermediate temporaries consume memory and add processing operations.
Solution: compose the index operations and loop over i, j so that no temporaries are created (see the sketch below).
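Continuing the sketch above (same B, C, D), composing the index operations fuses everything into one loop nest with no array-valued temporaries:

```python
# One pass over (i, j); no intermediate arrays are materialized.
A = np.empty((n, n))
for i in range(n):
    for j in range(n):
        A[i, j] = B[i, j] + C[i, j] + D[i, j]
```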

6 Need for formalism
Few problems are as simple as the example above.
The formalism is designed to handle extremely complicated situations systematically.
Goal: composition of algorithms. For example, radar is the composition of numerous algorithms: QR(FFT(X)).
Optimizations are classically done sequentially, even when parallel processors and nodes are used: first the FFT (or DFT), then the QR.
With this formalism, optimizations can be carried out across algorithms, processors, and memories.
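A minimal sketch of such a composition in NumPy, assuming a hypothetical complex data matrix X; here each stage runs as a separate, independently optimized pass, which is exactly the limitation the formalism addresses:

```python
import numpy as np

X = np.random.rand(64, 64) + 1j * np.random.rand(64, 64)
# QR(FFT(X)): two monolithic stages, each optimized in isolation.
Q, R = np.linalg.qr(np.fft.fft(X, axis=0))
```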

7 MoA and Psi Calculus
Basic properties:
An index calculus: the psi function.
Shape-polymorphic functions and operators: operations are defined using shapes and psi.
MoA defines some useful operations and functions; as long as shapes define functions and operations, any new function or operation may be defined and reduced.
The fundamental type is the array: scalars are 0-dimensional arrays.
Denotational Normal Form (DNF) = the reduced form in Cartesian coordinates, independent of data layout (row major, column major, regular sparse, ...).
Operational Normal Form (ONF) = the reduced form for 1-d memory layout(s); it defines how to build the code on processor/memory hierarchies. The ONF reveals loops and control.

8 Historical Background
Universal algebra: Joseph Sylvester, late 19th century.
Algebra of arrays, APL: Ken Iverson, 1950s. Languages: interpreters.
Phil Abrams: An APL Machine (1972), with Harold Stone. Indexing operations on shapes; open questions; not an algebraically closed system. Furthered by Hassett and Lyon, Guibas and Wyatt. Used in Fortran.
Alan Perlis: explored Abrams's optimizations in compilers for APL. Furthered by Miller, Minter, Budd. ZPL uses these ideas, but not a full theory of arrays.
Susan Gerhart: anomalies in the APL algebra; correctness cannot be verified.
Alan Perlis with Tu: an array calculator and the lambda calculus, mid 1980s.

9 Historical Background (continued)
MoA and Psi Calculus: Mullin 1988; full closure on an algebra of arrays and an index calculus based on shapes, used with the lambda calculus.
Built prototype compilers: output C, F77, F90, HPF. Modified the Portland Group's HPF; HPF research partner.
Introduced the theory to the functional-language community: Bird-Meertens, SAC, ...
Applied to hardware design and verification: Pottinger (ASICs), IBM (patent, sparse arrays), Sevaria (hierarchical-bus parallel machines).
Introduced the theory to the OO community: expression templates and C++, optimizations for scientific libraries, Matlab-to-VHDL optimizations (sabbatical, MIT Lincoln Laboratory).

10 Levels of Processor/Memory Hierarchy
Levels of the processor/memory hierarchy can be modeled by increasing the dimensionality of the data array: one additional dimension for each level of the hierarchy.
Envision the data as reshaped/transposed to reflect the mapping to the increased dimensionality.
An index calculus automatically transforms the algorithm to reflect the restructured data array.
Data, layout, data movement, and scalarization are automatically generated based on MoA descriptions and Psi Calculus definitions of array operations, functions, and their compositions.
Arrays are of any dimension, even 0, i.e., scalars.
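A minimal sketch of raising dimensionality, assuming a hypothetical two-level hierarchy with p = 4 processors and cache blocks of c = 64 elements (all names and sizes here are illustrative):

```python
import numpy as np

n, p, c = 4096, 4, 64
x = np.arange(n)
# One added dimension per hierarchy level: (processor, block, cache line).
x3 = x.reshape(p, n // (p * c), c)
# x3[pid, b] is the b-th cache-resident block assigned to processor pid.
```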

11 Processor/Memory Hierarchy (continued)
The Mathematics of Arrays connects intricate math (e.g., y = conv(x)) with intricate memory accesses (indexing): math and indexing operations appear in the same expression.
This gives a framework for design-space search that is rigorous, provably correct, and extensible to complex architectures.
(Figure: the map approach, "raising" array dimensionality; a vector x is mapped across the memory hierarchy, L1 cache, L2 cache, and main memory, and across processors P0, P1, P2 for parallelism.)

12 Application Domain: Signal Processing
3-d radar data processing is a composition of monolithic array operations (convolution, matrix multiply): pulse compression, Doppler filtering, beamforming, detection.
Both the algorithm and the architectural information (memory, processor) are inputs.
Change the algorithm to better match hardware/memory/communication by lifting the dimension algebraically: adding processors makes the array 4-d, time 5-d, cache 6-d; block-cyclic decomposition requires further restructuring.

13 Manipulation of an array
Given a 3 by 5 by 4 array, the shape vector is <3 5 4>.
An index vector (e.g. <1 2 3>) is used to select an element or, if partial, a subarray. (The slide's worked selection appeared as a figure.)
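A minimal sketch of the psi indexing function (an illustration, not the authors' implementation) applied to the 3 by 5 by 4 array:

```python
import numpy as np

def psi(idx, A):
    """Select the subarray of A addressed by the (possibly partial) index vector idx."""
    return A[tuple(idx)]

A = np.arange(3 * 5 * 4).reshape(3, 5, 4)  # shape vector <3 5 4>
print(psi([1], A).shape)     # (5, 4): a partial index selects a subarray
print(psi([1, 2, 3], A))     # a full index selects a single element: 31
```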

14 Other Definitions
Ravel: to flatten an array. Given an array B, ravel produces the vector of its elements in layout order; the result can then be re-shaped to a new shape. (The slide's concrete B was a figure.)
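A short NumPy sketch; since B's actual contents are not captured in the transcript, a 2 by 3 by 4 stand-in is assumed:

```python
import numpy as np

B = np.arange(24).reshape(2, 3, 4)
v = B.ravel()          # flatten: a vector of all 24 elements in layout order
B2 = v.reshape(4, 6)   # now re-shape the ravelled data to a new shape
```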

15 More Definitions
Reverse: given an array, the reversal is defined through indexing: element i of the result is element (n - 1 - i) of the argument. (The slide's worked examples were figures.)
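A sketch of reversal defined purely through indexing, with an assumed stand-in vector:

```python
import numpy as np

A = np.arange(6)
n = A.shape[0]
rev = np.array([A[n - 1 - i] for i in range(n)])  # element i <- element n-1-i
# The same index mapping written as a single slicing operation: A[::-1]
```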

16 Some Psi Calculus Operations Built Using ψ and Shapes
take (Vector A, int N): forms a vector of the first N elements of A.
drop (Vector A, int N): forms a vector of the last (A.size - N) elements of A.
rotate (Vector A, int N): forms a vector of the last N elements of A concatenated with the other elements of A.
cat (Vector A, Vector B): forms a vector that is the concatenation of A and B.
unaryOmega (Operation Op, dimension D, Array A): applies unary operator Op to the D-dimensional components of A (like a for-all loop).
binaryOmega (Operation Op, Dimension Adim, Array A, Dimension Bdim, Array B): applies binary operator Op to the Adim-dimensional components of A and the Bdim-dimensional components of B (like a for-all loop).
reshape (Vector A, Vector B): reshapes B into an array having A.size dimensions, where the length in each dimension is given by the corresponding element of A.
iota (int N): forms a vector of size N containing the values 0 through N-1.
(On the slide these operations are color-coded as index permutation, operators, restructuring, and index generation.)
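A minimal Python sketch of the vector operations in the table; the omega operators are omitted (they behave like NumPy's apply-along-axis loops). These one-liners are illustrations, not the MoA definitions themselves:

```python
import numpy as np

def iota(n):                    # index generation: <0 1 ... n-1>
    return np.arange(n)

def take(a, n):                 # first n elements of a
    return a[:n]

def drop(a, n):                 # last (a.size - n) elements of a
    return a[n:]

def rotate(a, n):               # last n elements moved to the front
    n = n % len(a)
    return np.concatenate((a[len(a) - n:], a[:len(a) - n]))

def cat(a, b):                  # concatenation of a and b
    return np.concatenate((a, b))

print(rotate(iota(5), 2))       # [3 4 0 1 2]
```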

17 ONF has minimum number of reads/writes
Psi reduction of A = cat(rev(B), rev(C)) yields:
A[i] = B[size(B) - 1 - i]             if 0 ≤ i < size(B)
A[i] = C[(size(C) + size(B)) - 1 - i] if size(B) ≤ i < size(B) + size(C)
The ONF has the minimum number of reads and writes: the Psi Calculus rules are applied mechanically to produce the ONF, which translates easily to an optimal loop implementation (see the sketch below).
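A direct rendering of that ONF as a loop, checked against the obvious temporary-producing formulation; the array sizes are arbitrary stand-ins:

```python
import numpy as np

B = np.arange(5)
C = np.arange(5, 12)
nb, nc = B.size, C.size

# ONF: one pass, one write per element, no reversed temporaries.
A = np.empty(nb + nc, dtype=B.dtype)
for i in range(nb + nc):
    A[i] = B[nb - 1 - i] if i < nb else C[nb + nc - 1 - i]

assert np.array_equal(A, np.concatenate((B[::-1], C[::-1])))
```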

18 Psi Reduction: 2-d Example, DNF to ONF
Assume A, B, and C are n by n arrays; let n = 4. Reduce C = (shift(A,1) + shift(B,2))^T.

I. Get the shape:
shape((shift(A,1) + shift(B,2))^T)
= reverse(shape(shift(A,1) + shift(B,2)))
= reverse(shape(shift(A,1)))
= reverse(shape(A)) = reverse(<n n>) = <n n>

II. Psi-reduce: for all i, j such that <0 0> ≤ <i j> < <n n>:
<i j> psi (shift(A,1) + shift(B,2))^T
= <j i> psi (shift(A,1) + shift(B,2))
= (<j i> psi shift(A,1)) + (<j i> psi shift(B,2))
= (<i> psi (<j+1> psi A)) + (<i> psi (<j+2> psi B))   [DNF]
= Avec[(gamma(j+1, n) * n) + iota(n)][i] + Bvec[(gamma(j+2, n) * n) + iota(n)][i]
= Avec[(gamma(j+1, n) * n) + i] + Bvec[(gamma(j+2, n) * n) + i]
= Avec[((j+1) * n) + i] + Bvec[((j+2) * n) + i]       [ONF]

Once we have the ONF we know how to build the code. Substituting j = 0, i = 0: Avec[(1*4) + 0] + Bvec[(2*4) + 0] = Avec[4] + Bvec[8]. (The slide showed concrete A and B matrices.)
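The ONF translated directly into loops (a sketch; the slide elides boundary handling for the shifts, so the inner loop here simply stops at n - 2):

```python
import numpy as np

n = 4
A = np.arange(n * n).reshape(n, n)
B = np.arange(n * n, 2 * n * n).reshape(n, n)
Avec, Bvec = A.ravel(), B.ravel()   # row-major 1-d layouts

# ONF: two loops, direct reads from the flat layouts, no temporaries.
C = np.zeros((n, n))
for i in range(n):
    for j in range(n - 2):          # keep the shifted reads in bounds
        C[i, j] = Avec[(j + 1) * n + i] + Bvec[(j + 2) * n + i]

# The slide's spot check at j = 0, i = 0: Avec[4] + Bvec[8].
assert C[0, 0] == Avec[4] + Bvec[8]
```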

19 New FFT algorithm: record speed
Maximize in-cache operations through the use of repeated transpose-reshape operations, similar to partitioning for a parallel implementation.
Do as many operations in cache as possible, re-materialize the array to achieve locality, then continue processing in cache and repeat the process.

20 Example
Assume cache size c = 4, input vector length n = 32, and number of rows r = n/c = 8.
Generate the vector of indices with iota, then use the reshape operator ρ to generate an r by c matrix (see the sketch below).
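In NumPy terms, a sketch of this setup:

```python
import numpy as np

c = 4                # cache size
n = 32               # input vector length
r = n // c           # number of rows: 8

v = np.arange(n)     # iota(32): the vector of indices 0 .. 31
M = v.reshape(r, c)  # reshape to an 8-by-4 matrix; each row fits in cache
```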

21 Starting Matrix
Each row is of length equal to the cache size c.
The standard butterfly is applied to each row, entirely in cache.

22 Next transpose
To continue further would induce cache misses, so transpose and reshape.
The transpose-reshape operation is composed over the indices: only the result is materialized (see the sketch below).
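A sketch of the index-level composition for the c = 4, r = 8 example above; in the MoA derivation the composed indexing is produced mechanically, whereas here it is written out by hand:

```python
import numpy as np

c, r = 4, 8
v = np.arange(r * c)    # the index vector from the previous sketch
M = v.reshape(r, c)

B = M.T.reshape(r, c)   # the materialized transpose-reshape of M

# Index-level composition: read B[i, j] directly from the original
# vector v; no intermediate transpose is ever built.
i, j = 2, 3
f = i * c + j           # flat position in the transpose-reshaped array
assert B[i, j] == v[(f % r) * c + f // r]
```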

23 Resulting Transpose-Reshape
Materialize the transpose-reshaped array B and carry out the butterfly operation on each row.
The weights are re-ordered, but the access patterns are standard.

24 Transpose-Reshape again
As before, proceeding further would induce cache misses, so do the transpose-reshape again (composing the indices).

25 Last step (in this example)
Materialize the composed transpose-reshaped array C and carry out the last step of the FFT.
This last step corresponds to cycles of length 2 involving elements 0 and 16, 1 and 17, etc.

26 Final Transpose
The data has been permuted numerous times by the multiple reshape-transposes. We could reverse the transformations, but that would take multiple steps and multiple writes.
Viewing the problem as an n-cube (a hypercube for radix 2) lets us use the number of reshape-transposes as the argument to a rotate (or shift) of a vector generated from the dimension of the hypercube. This rotated vector is then the argument to a binary transpose, which permutes everything at once.
Express this algebraically and Psi-reduce to DNF, then ONF, for a generic design. The ONF has only two loops no matter what dimension hypercube (or n-cube for radix n) we start with (see the sketch below).
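As an end-to-end sketch of the transpose-reshape idea, here is the classic four-step FFT factorization in NumPy. It is in the same spirit as slides 20 through 26 (row-sized FFT work units separated by transpose-reshapes, with a single final permutation), but it is not the authors' MoA-derived ONF; n1 and n2 play the roles of r and c:

```python
import numpy as np

def four_step_fft(x, n1, n2):
    """Length n1*n2 FFT via row FFTs and transpose-reshapes (a sketch)."""
    n = n1 * n2
    a = x.reshape(n1, n2).T        # transpose-reshape: row-sized work units
    y = np.fft.fft(a, axis=1)      # "in-cache" FFT on each row
    y = y * np.exp(-2j * np.pi *   # re-ordered weights (twiddle factors)
                   np.outer(np.arange(n2), np.arange(n1)) / n)
    z = np.fft.fft(y.T, axis=1)    # transpose again, FFT each row
    return z.T.ravel()             # final transpose: one permutation

x = np.random.rand(32) + 1j * np.random.rand(32)
assert np.allclose(four_step_fft(x, 8, 4), np.fft.fft(x))
```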

27-28 (Figure slides: graphical content not captured in the transcript.)

29 Summary
All operations have been carried out in cache, at the price of re-arranging the data.
Data blocks can be of any size (powers of the radix); they need not equal the cache size.
Optimum performance is a tradeoff between the reduction of cache misses and the cost of the transpose-reshape operations; the number of transpose-reshape operations is determined by the data block size (cache size).
Record performance: up to a factor of 4 better than libraries.

30 Benefits of Using MoA and Psi Calculus
The processor/memory hierarchy can be modeled by reshaping data, using an extra dimension for each level.
The composition of monolithic operations can be re-expressed as a composition of operations on smaller data granularities, which matches the memory-hierarchy levels and avoids materialization of intermediate arrays.
The algorithm can be automatically (algebraically) transformed to reflect the array reshapings above.
This facilitates programming expressed at a high level, intentional program design and analysis, and portability.
The approach is applicable to array-based computation in general.

31 Current Research
Using MoA and Psi Calculus to describe abstract processor/memory hierarchies and instruction sets.
Working with library developers to mechanize some or all of the MoA algebra with the Psi Calculus: Andrew Lumsdaine (Matrix Template Library, etc.); Bob Bond (expression templates to optimize scientific libraries; exploring the connection to Matlab-to-VHDL).
Developing designs for hardware: identify how to easily program and optimize FPGAs in conjunction with new machines and their cooperative OSs, e.g. the Cray XD1.
The FFT is presently radix n and m-dimensional, supports a cache loop, and runs across multiple processors, both shared and distributed. QR, time-domain convolution, LU, matrix multiply (inner product), and Kronecker product (outer product) have been previously designed and include processor/cache loops.

32 Current Research (continued)
Identifying bottlenecks in first-principles scientific simulations: the FFT (PDE solvers have similar memory accesses). These are the large grid problems that require deterministic resource allocation; high speed and huge data sizes are essential, and real-time is the goal.
Scientific simulations include weather, materials deposition, earthquake prediction, oil location, ground-water contamination, etc. All require the same algorithms.
A monograph documenting the theory and applications, and a comprehensive introduction to both in the DSP Journal, are to appear.
Continue to teach and assist those who may benefit.

