High Performance Parallel Programming
Dirk van der Knijff
Advanced Research Computing, Information Division

Assignment 1
What a load of ... person answered the question; 4 people gave me a load of b......

Assignment 1: The question
Build a web page briefly describing the application, the use of parallelism, and a frank assessment of its successes, weaknesses and challenges.
I didn’t want a copy of somebody else’s web page describing their application; I wanted you to give me your opinion!!!!

Assignment 1: The results
Rao’s suggestion
– give you the choice: I’ll mark it, I’ll mark the second assignment, and we’ll give you the best combination
– (Rao’s kind: he’d pass you all)
My choice
– fail most of you (I’ll give you 4 as a goodwill gesture)
– I can’t do it, but 7 is the most anyone will get
Don’t do it again.....

Lecture 10: Matrix Multiplication

First, performance
– look at a single processor, then think about parallel
– this lecture will look at everything in the context of matrix multiplies
– lessons learned will apply to other situations

Idealized Uniprocessor Model
Processor can name objects in a simple, flat address space
– these represent integers, floats, pointers, structures, arrays, etc.
– they exist in the program stack, static region, or heap
Operations include
– read and write from memory (given an address/pointer)
– arithmetic and other logical operations
Order specified by program
– a read returns the most recently written data
– compiler and architecture may reorder operations to optimize performance, as long as the programmer cannot see any reordering
Performance
– each operation has roughly the same cost (read, write, multiply, etc.)

Uniprocessor Reality
Modern processors use many techniques for performance
– caches: a small amount of fast memory where values are “cached” in the hope of reusing recently used or nearby data; different memory ops can have very different costs
– parallelism: superscalar processors have multiple “functional units” that can run in parallel; different orders and instruction mixes have different costs
– pipelining: a form of parallelism, like an assembly line in a factory
Why is this your problem? In theory, compilers understand all of this and can optimize your program; in practice they don’t.

Matrix-multiply, optimized several ways
[Figure: speed of n-by-n matrix multiply on a Sun Ultra-1/170, peak = 330 MFlops]

Memory Hierarchy
Most programs have a high degree of locality in their accesses
– spatial locality: accessing things nearby previous accesses
– temporal locality: reusing an item that was previously accessed
The memory hierarchy tries to exploit locality.
[Figure: memory hierarchy from the processor (control, datapath, registers) and on-chip cache, through second-level cache (SRAM), main memory (DRAM) and secondary storage (disk), to tertiary storage (tape); speeds range from about 1 ns to tens of seconds, sizes from hundreds of bytes to terabytes]

Reminder: Cache Basics
Consider a tiny cache with eight lines, X000 ... X111 (for illustration only)
– Cache hit: a memory access that is found in the cache (cheap)
– Cache miss: a memory access that is not in the cache (expensive, because we need to get the data from elsewhere)
– Cache line length: the number of bytes loaded together in one entry
– Direct mapped: only one address (line) in a given range can be in the cache
– Associative: 2 or more lines with different addresses can exist
– An address is divided into tag, line, and offset fields
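As a rough illustration of the tag/line/offset split, here is a minimal C sketch for a hypothetical direct-mapped cache with 8 lines; the 32-byte line size is an assumption made for the example, not a value from the slide.

    /* Illustrative only: decompose an address into offset, line (index) and tag
       for a hypothetical direct-mapped cache with 8 lines of 32 bytes each. */
    #include <stdio.h>

    #define LINE_BYTES 32u   /* assumed cache-line length            */
    #define NUM_LINES   8u   /* the slide's tiny 8-entry cache       */

    int main(void) {
        unsigned addr = 0x12345678u;

        unsigned offset = addr % LINE_BYTES;                 /* byte within the line */
        unsigned line   = (addr / LINE_BYTES) % NUM_LINES;   /* which cache line     */
        unsigned tag    = addr / (LINE_BYTES * NUM_LINES);   /* identifies the block */

        printf("addr=0x%08x  tag=0x%x  line=%u  offset=%u\n", addr, tag, line, offset);
        return 0;
    }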

Experimental Study of Memory
Microbenchmark for memory system performance: time the following program for each array size(A) and stride s (repeat to obtain confidence and mitigate timer resolution)

  for array A of size from 4 KB to 8 MB by 2x
    for stride s from 8 bytes (1 word) to size(A)/2 by 2x
      for i from 0 to size by s
        load A[i] from memory (8 bytes)
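A minimal C sketch of such a stride microbenchmark (the timing method, repetition count and output format are illustrative assumptions, not the original membench code, which also subtracts loop overhead):

    /* Sketch of a membench-style stride experiment (illustrative only). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const size_t max_bytes = 8u << 20;                 /* 8 MB */
        size_t max_n = max_bytes / sizeof(double);
        double *A = malloc(max_bytes);
        if (!A) return 1;
        for (size_t i = 0; i < max_n; i++) A[i] = 1.0;     /* touch every element once */

        for (size_t bytes = 4u << 10; bytes <= max_bytes; bytes *= 2) {   /* 4 KB .. 8 MB */
            size_t n = bytes / sizeof(double);
            for (size_t stride = 1; stride <= n / 2; stride *= 2) {       /* 1 word .. size/2 */
                volatile double sink = 0.0;
                clock_t t0 = clock();
                for (int rep = 0; rep < 100; rep++)                       /* repeat for timer resolution */
                    for (size_t i = 0; i < n; i += stride)
                        sink += A[i];                                     /* one 8-byte load */
                double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
                size_t loads = 100u * ((n + stride - 1) / stride);
                printf("size=%7zu B  stride=%7zu B  %.1f ns/load\n",
                       bytes, stride * sizeof(double), 1e9 * secs / (double)loads);
            }
        }
        free(A);
        return 0;
    }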

Observing a Memory Hierarchy
Measured on a DEC Alpha 21064, 150 MHz clock, 32-byte cache lines, 8 KB pages:
– L1: 8 KB, 6.7 ns (1 cycle)
– L2: 512 KB, 52 ns (8 cycles)
– Memory: 300 ns (45 cycles)

Lessons
– The actual performance of a simple program can be a complicated function of the architecture
– Slight changes in the architecture or the program can change the performance significantly
– Since we want to write fast programs, we must take the architecture into account, even on uniprocessors
– Since the actual performance is so complicated, we need simple models to help us design efficient algorithms
– We will illustrate with a common technique for improving cache performance, called blocking

Optimizing Matrix Addition for Caches
Dimension A(n,n), B(n,n), C(n,n); A, B, C stored by column (as in Fortran)
Algorithm 1:
  for i=1:n, for j=1:n, A(i,j) = B(i,j) + C(i,j)
Algorithm 2:
  for j=1:n, for i=1:n, A(i,j) = B(i,j) + C(i,j)
What is the memory access pattern for Algorithms 1 and 2? Which is faster?
What if A, B, C are stored by row (as in C)?
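For row-major storage (as in C) the situation is reversed: the i-outer/j-inner order walks memory contiguously. A minimal C sketch of the two orders (the matrix size is an arbitrary assumption for the example):

    /* The two loop orders for matrix addition on row-major (C) arrays.
       add_rowwise strides through memory contiguously and is typically much
       faster; add_colwise jumps by a whole row between consecutive accesses. */
    #define N 1024

    void add_rowwise(double A[N][N], double B[N][N], double C[N][N]) {
        for (int i = 0; i < N; i++)           /* outer loop over rows             */
            for (int j = 0; j < N; j++)       /* inner loop walks a row: stride 1 */
                A[i][j] = B[i][j] + C[i][j];
    }

    void add_colwise(double A[N][N], double B[N][N], double C[N][N]) {
        for (int j = 0; j < N; j++)           /* outer loop over columns          */
            for (int i = 0; i < N; i++)       /* inner loop strides by N doubles  */
                A[i][j] = B[i][j] + C[i][j];
    }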

Using a Simple Model of Memory to Optimize
Assume just 2 levels in the hierarchy, fast and slow, with all data initially in slow memory
– m = number of memory elements (words) moved between fast and slow memory
– tm = time per slow memory operation
– f = number of arithmetic operations
– tf = time per arithmetic operation, with tf < tm
– q = f/m, the average number of flops per slow-memory access
Minimum possible time = f*tf, when all data is in fast memory
Actual time = f*tf + m*tm = f*tf*(1 + (tm/tf)*(1/q))
Larger q means the time is closer to the minimum f*tf

Simple example using the memory model
To see the effect of changing q, consider a simple computation:
  s = 0
  for i = 1, n
    s = s + h(X[i])
Assume tf = 1 (fast memory runs at 1 Mflop/s), moving one word of data costs tm = 10, h takes q flops, and the array X is in slow memory
So m = n and f = q*n
Time = read X + compute = 10*n + q*n
Mflop/s = f/time = q/(10 + q)
As q increases, this approaches the “peak” speed of 1 Mflop/s
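As a quick worked check of the formula (the values of q are chosen only for illustration): with q = 10 the rate is 10/(10 + 10) = 0.5 Mflop/s, half of peak; with q = 90 it is 90/100 = 0.9 Mflop/s, already close to the 1 Mflop/s peak.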

Simple Example (continued)
Algorithm 1:
  s1 = 0; s2 = 0
  for j = 1 to n
    s1 = s1 + h1(X(j))
    s2 = s2 + h2(X(j))
Algorithm 2:
  s1 = 0; s2 = 0
  for j = 1 to n
    s1 = s1 + h1(X(j))
  for j = 1 to n
    s2 = s2 + h2(X(j))
Which is faster?
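A C sketch of the two variants (h1 and h2 below are placeholders standing in for whatever per-element work the slide intends); in the memory model above, the fused loop reads X from slow memory once while the split version reads it twice, so the fused loop has roughly twice the q:

    /* Fused vs. split loops over the same array. */
    #include <math.h>
    #include <stddef.h>

    static double h1(double x) { return sin(x); }   /* placeholder work */
    static double h2(double x) { return cos(x); }   /* placeholder work */

    void fused(const double *X, size_t n, double *s1, double *s2) {
        double a = 0.0, b = 0.0;
        for (size_t j = 0; j < n; j++) {             /* one pass over X */
            a += h1(X[j]);
            b += h2(X[j]);
        }
        *s1 = a; *s2 = b;
    }

    void split(const double *X, size_t n, double *s1, double *s2) {
        double a = 0.0, b = 0.0;
        for (size_t j = 0; j < n; j++) a += h1(X[j]);   /* first pass over X  */
        for (size_t j = 0; j < n; j++) b += h2(X[j]);   /* second pass over X */
        *s1 = a; *s2 = b;
    }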

Optimizing Matrix Multiply for Caches
Several techniques exist for making this faster on modern processors; they have been heavily studied
Some optimizations are done automatically by the compiler, but you can do much better
In general, you should use optimized libraries (often supplied by the vendor) for this and other very common linear algebra operations
– BLAS = Basic Linear Algebra Subroutines
Other algorithms you may want are not going to be supplied by the vendor, so you need to know these techniques

Matrix-vector multiplication y = y + A*x
  for i = 1:n
    for j = 1:n
      y(i) = y(i) + A(i,j)*x(j)
[Diagram: y(i) = y(i) + A(i,:) * x(:), i.e. each y(i) is updated from row i of A and all of x]
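The same kernel written in C for a row-major n-by-n matrix (a minimal sketch; row i of A starts at offset i*n):

    /* y = y + A*x for an n-by-n row-major matrix A.
       Each y[i] accumulates the dot product of row i of A with x. */
    #include <stddef.h>

    void matvec(int n, const double *A, const double *x, double *y) {
        for (int i = 0; i < n; i++) {
            double yi = y[i];
            for (int j = 0; j < n; j++)
                yi += A[(size_t)i * n + j] * x[j];   /* row i of A, element j */
            y[i] = yi;
        }
    }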

Matrix-vector multiplication y = y + A*x
  {read x(1:n) into fast memory}
  {read y(1:n) into fast memory}
  for i = 1:n
    {read row i of A into fast memory}
    for j = 1:n
      y(i) = y(i) + A(i,j)*x(j)
  {write y(1:n) back to slow memory}
m = number of slow memory refs = 3*n + n^2
f = number of arithmetic operations = 2*n^2
q = f/m ~= 2
Matrix-vector multiplication is limited by slow memory speed

Matrix Multiply C = C + A*B
  for i = 1 to n
    for j = 1 to n
      for k = 1 to n
        C(i,j) = C(i,j) + A(i,k) * B(k,j)
[Diagram: C(i,j) is updated from row A(i,:) and column B(:,j)]
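The same triple loop in C for row-major storage (a minimal sketch of the untiled algorithm; no claim is made about the loop order used in the original course code):

    /* Naive (untiled) matrix multiply C = C + A*B, row-major, n-by-n.
       The innermost loop walks row i of A (stride 1) and column j of B (stride n). */
    #include <stddef.h>

    void matmul_naive(int n, const double *A, const double *B, double *C) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double cij = C[(size_t)i * n + j];
                for (int k = 0; k < n; k++)
                    cij += A[(size_t)i * n + k] * B[(size_t)k * n + j];
                C[(size_t)i * n + j] = cij;
            }
    }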

Matrix Multiply C = C + A*B (unblocked, or untiled)
  for i = 1 to n
    {read row i of A into fast memory}
    for j = 1 to n
      {read C(i,j) into fast memory}
      {read column j of B into fast memory}
      for k = 1 to n
        C(i,j) = C(i,j) + A(i,k) * B(k,j)
      {write C(i,j) back to slow memory}

Matrix Multiply (unblocked, or untiled)
Number of slow memory references on unblocked matrix multiply:
  m =   n^3     read each column of B n times
      + n^2     read each row of A once
      + 2*n^2   read and write each element of C once
    = n^3 + 3*n^2
So q = f/m = (2*n^3)/(n^3 + 3*n^2) ~= 2 for large n: no improvement over matrix-vector multiply

Matrix Multiply (blocked, or tiled)
Consider A, B, C to be N-by-N matrices of b-by-b subblocks, where b = n/N is called the blocksize
  for i = 1 to N
    for j = 1 to N
      {read block C(i,j) into fast memory}
      for k = 1 to N
        {read block A(i,k) into fast memory}
        {read block B(k,j) into fast memory}
        C(i,j) = C(i,j) + A(i,k) * B(k,j)   {do a matrix multiply on blocks}
      {write block C(i,j) back to slow memory}
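A C sketch of the tiled loop nest (BLOCK is an assumed compile-time blocksize, and the min() bounds handle n not divisible by BLOCK; a tuned implementation would choose the blocksize from the cache size, as discussed below):

    /* Blocked (tiled) matrix multiply C = C + A*B, row-major, n-by-n.
       Each (ii,jj,kk) iteration multiplies one b-by-b block of A by one block
       of B and accumulates into a block of C, so all three blocks stay in cache. */
    #include <stddef.h>

    #define BLOCK 64   /* assumed blocksize b; tune so 3*b*b doubles fit in cache */

    static int min(int a, int b) { return a < b ? a : b; }

    void matmul_blocked(int n, const double *A, const double *B, double *C) {
        for (int ii = 0; ii < n; ii += BLOCK)
            for (int jj = 0; jj < n; jj += BLOCK)
                for (int kk = 0; kk < n; kk += BLOCK)
                    /* multiply the (ii,kk) block of A by the (kk,jj) block of B */
                    for (int i = ii; i < min(ii + BLOCK, n); i++)
                        for (int j = jj; j < min(jj + BLOCK, n); j++) {
                            double cij = C[(size_t)i * n + j];
                            for (int k = kk; k < min(kk + BLOCK, n); k++)
                                cij += A[(size_t)i * n + k] * B[(size_t)k * n + j];
                            C[(size_t)i * n + j] = cij;
                        }
    }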

Matrix Multiply (blocked or tiled)
Why is this algorithm correct?
Number of slow memory references on blocked matrix multiply:
  m =   N*n^2   blocks of B are read N^3 times in total (N^3 * (n/N) * (n/N) words)
      + N*n^2   blocks of A are read N^3 times in total
      + 2*n^2   each block of C is read and written once
    = (2*N + 2)*n^2
So q = f/m = 2*n^3 / ((2*N + 2)*n^2) ~= n/N = b for large n

Matrix Multiply (blocked or tiled)
So we can improve performance by increasing the blocksize b
This can be much faster than matrix-vector multiply (q = 2)
Limit: all three blocks from A, B, C must fit in fast memory (cache), so we cannot make these blocks arbitrarily large:
  3*b^2 <= M, so q ~= b <= sqrt(M/3)
Theorem (Hong & Kung, 1981): any reorganization of this algorithm (that uses only associativity) is limited to q = O(sqrt(M))
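As a rough worked example (the cache size is chosen only for illustration): if the fast memory holds M = 65,536 double-precision words (a 512 KB cache), then b <= sqrt(65536/3) ≈ 147, so tiles of around 128 x 128 would be a reasonable starting point before tuning.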

BLAS (Basic Linear Algebra Subroutines)
Industry standard interface (evolving); vendors and others supply optimized implementations
History
– BLAS1 (1970s): vector operations: dot product, saxpy (y = a*x + y), etc.; m = 2*n, f = 2*n, q ~ 1 or less
– BLAS2 (mid 1980s): matrix-vector operations: matrix-vector multiply, etc.; m = n^2, f = 2*n^2, q ~ 2; less overhead, somewhat faster than BLAS1
– BLAS3 (late 1980s): matrix-matrix operations: matrix-matrix multiply, etc.; m >= 4*n^2, f = O(n^3), so q can possibly be as large as n, so BLAS3 is potentially much faster than BLAS2
Good algorithms use BLAS3 when possible (LAPACK)
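For example, a BLAS3 call through the standard C interface (CBLAS) computes C = alpha*A*B + beta*C in one call. This sketch assumes some CBLAS implementation (reference BLAS, OpenBLAS, a vendor library, ...) is installed; the library and header names vary by platform:

    /* Calling the BLAS3 routine DGEMM via CBLAS: C = 1.0*A*B + 1.0*C, row-major, n-by-n. */
    #include <cblas.h>

    void matmul_blas(int n, const double *A, const double *B, double *C) {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n,        /* M, N, K                */
                    1.0, A, n,      /* alpha, A, leading dim  */
                    B, n,           /* B, leading dim         */
                    1.0, C, n);     /* beta, C, leading dim   */
    }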

BLAS speeds on an IBM RS6000/590
[Figure: BLAS 3 (n-by-n matrix-matrix multiply) vs BLAS 2 (n-by-n matrix-vector multiply) vs BLAS 1 (saxpy of n vectors); peak speed = 266 Mflops]

Optimizing in practice
Tiling for registers
– loop unrolling, use of named “register” variables
Tiling for multiple levels of cache
Exploiting fine-grained parallelism within the processor
– superscalar execution
– pipelining
Complicated compiler interactions
Hard to do by hand (but you’ll try)
Automatic optimization is an active research area
– PHiPAC
– ATLAS

PHiPAC: Portable High Performance ANSI C
[Figure: speed of n-by-n matrix multiply on a Sun Ultra-1/170, peak = 330 MFlops]

Strassen’s Matrix Multiply
The traditional algorithm (with or without tiling) has O(n^3) flops
Strassen discovered an algorithm with asymptotically fewer flops: O(n^2.81)
Consider a 2x2 matrix multiply, which normally takes 8 multiplies:
  M = [m11 m12] = [a11 a12] * [b11 b12]
      [m21 m22]   [a21 a22]   [b21 b22]
Let
  p1 = (a12 - a22) * (b21 + b22)
  p2 = (a11 + a22) * (b11 + b22)
  p3 = (a11 - a21) * (b11 + b12)
  p4 = (a11 + a12) * b22
  p5 = a11 * (b12 - b22)
  p6 = a22 * (b21 - b11)
  p7 = (a21 + a22) * b11
Then
  m11 = p1 + p2 - p4 + p6
  m12 = p4 + p5
  m21 = p6 + p7
  m22 = p2 - p3 + p5 - p7
Only 7 multiplies are needed, at the cost of extra additions
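A minimal C check of the seven-product formulas on one 2x2 example (the numeric values are arbitrary; a real Strassen implementation applies these formulas recursively to b-by-b blocks rather than scalars):

    /* Verify Strassen's 7-multiply 2x2 formulas against the direct 8-multiply product. */
    #include <stdio.h>

    int main(void) {
        double a11 = 1, a12 = 2, a21 = 3, a22 = 4;   /* arbitrary test values */
        double b11 = 5, b12 = 6, b21 = 7, b22 = 8;

        double p1 = (a12 - a22) * (b21 + b22);
        double p2 = (a11 + a22) * (b11 + b22);
        double p3 = (a11 - a21) * (b11 + b12);
        double p4 = (a11 + a12) * b22;
        double p5 = a11 * (b12 - b22);
        double p6 = a22 * (b21 - b11);
        double p7 = (a21 + a22) * b11;

        double m11 = p1 + p2 - p4 + p6;      /* should equal a11*b11 + a12*b21 */
        double m12 = p4 + p5;                /* should equal a11*b12 + a12*b22 */
        double m21 = p6 + p7;                /* should equal a21*b11 + a22*b21 */
        double m22 = p2 - p3 + p5 - p7;      /* should equal a21*b12 + a22*b22 */

        printf("Strassen: [%g %g; %g %g]\n", m11, m12, m21, m22);
        printf("Direct:   [%g %g; %g %g]\n",
               a11*b11 + a12*b21, a11*b12 + a12*b22,
               a21*b11 + a22*b21, a21*b12 + a22*b22);
        return 0;
    }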

Strassen (continued)
T(n) = cost of multiplying n-by-n matrices = 7*T(n/2) + 18*(n/2)^2 = O(n^(log2 7)) = O(n^2.81)
– Available in several libraries
– Up to several times faster if n is large enough (100s)
– Needs more memory than the standard algorithm
– Can be less accurate because of roundoff error
– Current world’s record is O(n^ )

Locality in Other Algorithms
The performance of any algorithm is limited by q
In matrix multiply, we increase q by changing the computation order
– increased temporal locality
For other algorithms and data structures, even hand-transformations are still an open problem
– sparse matrices (reordering, blocking)
– trees (B-Trees are for the disk level of the hierarchy)
– linked lists (some work done here)

Summary
Performance programming on uniprocessors requires
– understanding of memory system levels, costs, and sizes
– understanding of fine-grained parallelism in the processor, to produce a good instruction mix
Blocking (tiling) is a basic approach that can be applied to many matrix algorithms
It applies to uniprocessors and parallel processors
– the technique works for any architecture, but choosing the blocksize b and other details depends on the architecture
Similar techniques are possible on other data structures
You will get to try this in Assignment 2

Next week: Parallel algorithms