Parallel Programming in C with MPI and OpenMP Michael J. Quinn

Chapter 11 Matrix Multiplication

Outline
Sequential algorithms
- Iterative, row-oriented
- Recursive, block-oriented
Parallel algorithms
- Rowwise block striped decomposition
- Cannon’s algorithm

Iterative, Row-oriented Algorithm: computes C = A × B as a series of inner product (dot product) operations.
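As a rough illustration (not the book's code), the C sketch below computes C = A × B with the row-oriented triply nested loop; each element of C is one dot product. The matrices are stored row-major in one-dimensional arrays, and the function and variable names are made up for this example.

/* Row-oriented iterative matrix multiplication: c = a * b.
   Each element c[i][j] is the dot product of row i of a and column j of b.
   Matrices are n x n, stored contiguously in row-major order. */
void matmul_iterative(const double *a, const double *b, double *c, int n)
{
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            double sum = 0.0;               /* inner (dot) product accumulator */
            for (int k = 0; k < n; k++)
                sum += a[i*n + k] * b[k*n + j];
            c[i*n + j] = sum;
        }
    }
}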

Performance as n Increases

Reason: Matrix B Gets Too Big for Cache. Computing a row of C requires accessing every element of B.

Block Matrix Multiplication: partition the matrices into blocks; replace scalar multiplication with matrix multiplication and scalar addition with matrix addition.

Recurse Until B Small Enough
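To make the block-oriented idea concrete, here is a hedged C sketch of a recursive multiply: it splits each n × n operand into four quadrants, recurses on the eight block products, and falls back to the iterative kernel once a block is small enough to stay in cache. The threshold, the power-of-two assumption on n, and all names are illustrative, not the book's implementation; the caller is expected to zero C first, since the routine accumulates into it.

#define BLOCK_THRESHOLD 64   /* illustrative: recurse until blocks are this small */

/* c += a * b for an n x n block embedded in matrices with leading dimension ld.
   Top-level call (after zeroing c): matmul_recursive(a, b, c, n, n). */
void matmul_recursive(const double *a, const double *b, double *c, int n, int ld)
{
    if (n <= BLOCK_THRESHOLD) {
        /* base case: iterative multiply on a block small enough to stay in cache */
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    c[i*ld + j] += a[i*ld + k] * b[k*ld + j];
        return;
    }
    int h = n / 2;                           /* assume n is a power of two */
    const double *a00 = a,        *a01 = a + h,
                 *a10 = a + h*ld, *a11 = a + h*ld + h;
    const double *b00 = b,        *b01 = b + h,
                 *b10 = b + h*ld, *b11 = b + h*ld + h;
    double       *c00 = c,        *c01 = c + h,
                 *c10 = c + h*ld, *c11 = c + h*ld + h;
    /* block formula: Cij += Ai0*B0j + Ai1*B1j, with matrix ops replacing scalar ops */
    matmul_recursive(a00, b00, c00, h, ld);  matmul_recursive(a01, b10, c00, h, ld);
    matmul_recursive(a00, b01, c01, h, ld);  matmul_recursive(a01, b11, c01, h, ld);
    matmul_recursive(a10, b00, c10, h, ld);  matmul_recursive(a11, b10, c10, h, ld);
    matmul_recursive(a10, b01, c11, h, ld);  matmul_recursive(a11, b11, c11, h, ld);
}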

Comparing Sequential Performance

First Parallel Algorithm
Partitioning
- Divide matrices into rows
- Each primitive task has corresponding rows of the three matrices
Communication
- Each task must eventually see every row of B
- Organize tasks into a ring

First Parallel Algorithm (cont.)
Agglomeration and mapping
- Fixed number of tasks, each requiring the same amount of computation
- Regular communication among tasks
- Strategy: assign each process a contiguous group of rows

Communication of B (figure: four steps showing each process's block of B being passed around the ring, so that every process's rows of A eventually meet every row of B).
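A hedged MPI sketch of this rowwise block-striped algorithm follows. It assumes p divides n, that each process already holds its n/p rows of A, B, and C in row-major order, and uses MPI_Sendrecv_replace to pass the local block of B around the ring; all identifiers are illustrative rather than the book's code.

#include <mpi.h>
#include <string.h>

/* Each process owns rows = n/p rows of A, B, and C (row-major).
   Blocks of B circulate around a ring; after p iterations every process
   has multiplied its rows of A against every row of B, and B is restored. */
void rowwise_matmul(double *a, double *b, double *c, int n, MPI_Comm comm)
{
    int id, p;
    MPI_Comm_size(comm, &p);
    MPI_Comm_rank(comm, &id);

    int rows = n / p;                        /* assume p divides n */
    int next = (id + 1) % p;
    int prev = (id + p - 1) % p;

    memset(c, 0, rows * n * sizeof(double));

    for (int step = 0; step < p; step++) {
        int owner = (id + step) % p;         /* original owner of the B block in hand */
        int kofs  = owner * rows;            /* global row offset of that B block */

        for (int i = 0; i < rows; i++)
            for (int k = 0; k < rows; k++)
                for (int j = 0; j < n; j++)
                    c[i*n + j] += a[i*n + (kofs + k)] * b[k*n + j];

        /* pass the current block of B backward around the ring,
           receiving the next block from the successor */
        MPI_Sendrecv_replace(b, rows * n, MPI_DOUBLE,
                             prev, 0, next, 0, comm, MPI_STATUS_IGNORE);
    }
}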

Complexity Analysis
- The algorithm has p iterations
- During each iteration a process multiplies an (n/p) × (n/p) block of A by an (n/p) × n block of B: Θ(n³/p²)
- Total computation time: Θ(n³/p)
- Each process ends up passing (p − 1)n²/p = Θ(n²) elements of B

Isoefficiency Analysis
- Sequential algorithm: Θ(n³)
- Parallel overhead: Θ(pn²)
- Isoefficiency relation: n³ ≥ Cpn² ⇒ n ≥ Cp
- This system does not have good scalability
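As a brief worked step (following the usual isoefficiency treatment and assuming the memory requirement is M(n) = n² for an n × n matrix, an assumption not stated on this slide), the relation above implies that per-process memory must grow linearly with p, which is why scalability is poor:

% Isoefficiency sketch for the rowwise algorithm, assuming M(n) = n^2.
\begin{align*}
  n^3 &\ge C\,p\,n^2 \;\Rightarrow\; n \ge C\,p \\
  \frac{M(Cp)}{p} &= \frac{C^2 p^2}{p} = C^2 p
  \qquad\text{(per-process memory grows with $p$: poor scalability)}
\end{align*}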

Weakness of Algorithm 1
- Blocks of B being manipulated have p times more columns than rows
- Each process must access every element of matrix B
- Ratio of computations per communication is poor: only 2n/p

Parallel Algorithm 2 (Cannon’s Algorithm)
- Associate a primitive task with each matrix element
- Agglomerate tasks responsible for a square (or nearly square) block of C
- Computation-to-communication ratio rises to n/√p

Elements of A and B Needed to Compute a Process’s Portion of C (figure: side-by-side comparison of Algorithm 1 and Cannon’s Algorithm).

Blocks Must Be Aligned (figure: block positions before and after alignment).

Blocks Need to Be Aligned (figure: a 4 × 4 grid of blocks A00/B00 through A33/B33; each triangle represents a matrix block, and only same-color triangles should be multiplied together).

Rearrange Blocks (figure: the same 4 × 4 grid after alignment): block Aij cycles left i positions and block Bij cycles up j positions.

Consider Process P1,2 (figure: four steps; after alignment, P1,2 multiplies A13 by B32 in step 1, A10 by B02 in step 2, A11 by B12 in step 3, and A12 by B22 in step 4, as the blocks of A shift left and the blocks of B shift up between steps).
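Putting the alignment and shifting steps together, here is a hedged MPI sketch of Cannon's algorithm. It assumes p is a perfect square, that √p divides n, and that each process holds one m × m block of A, B, and C with m = n/√p; the communicator handling and names are illustrative, and on return the A and B buffers hold the post-alignment blocks rather than the originals.

#include <mpi.h>
#include <string.h>

/* Cannon's algorithm sketch: each process owns one m x m block of A, B, C,
   where m = n / sqrt(p).  Assumes p is a perfect square and sqrt(p) divides n. */
void cannon_matmul(double *a, double *b, double *c, int m, MPI_Comm comm)
{
    int p, id, dims[2] = {0, 0}, periods[2] = {1, 1}, coords[2];
    MPI_Comm grid;
    MPI_Comm_size(comm, &p);
    MPI_Dims_create(p, 2, dims);                    /* sqrt(p) x sqrt(p) grid */
    MPI_Cart_create(comm, 2, dims, periods, 1, &grid);
    MPI_Comm_rank(grid, &id);
    MPI_Cart_coords(grid, id, 2, coords);

    int left, right, up, down, src, dst;
    MPI_Cart_shift(grid, 1, 1, &left, &right);      /* neighbors within a row    */
    MPI_Cart_shift(grid, 0, 1, &up, &down);         /* neighbors within a column */

    int nblk = m * m;
    memset(c, 0, nblk * sizeof(double));

    /* initial alignment: block A(i,j) cycles left i positions, B(i,j) cycles up j positions */
    MPI_Cart_shift(grid, 1, -coords[0], &src, &dst);
    MPI_Sendrecv_replace(a, nblk, MPI_DOUBLE, dst, 0, src, 0, grid, MPI_STATUS_IGNORE);
    MPI_Cart_shift(grid, 0, -coords[1], &src, &dst);
    MPI_Sendrecv_replace(b, nblk, MPI_DOUBLE, dst, 0, src, 0, grid, MPI_STATUS_IGNORE);

    for (int step = 0; step < dims[0]; step++) {    /* sqrt(p) iterations */
        /* multiply the blocks currently in hand and accumulate into C */
        for (int i = 0; i < m; i++)
            for (int k = 0; k < m; k++)
                for (int j = 0; j < m; j++)
                    c[i*m + j] += a[i*m + k] * b[k*m + j];

        /* shift A one block left and B one block up for the next step */
        MPI_Sendrecv_replace(a, nblk, MPI_DOUBLE, left, 0, right, 0, grid, MPI_STATUS_IGNORE);
        MPI_Sendrecv_replace(b, nblk, MPI_DOUBLE, up, 0, down, 0, grid, MPI_STATUS_IGNORE);
    }
    MPI_Comm_free(&grid);
}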

Complexity Analysis
- The algorithm has √p iterations
- During each iteration a process multiplies two (n/√p) × (n/√p) matrices: Θ(n³/p^(3/2))
- Computational complexity: Θ(n³/p)
- During each iteration a process sends and receives two blocks of size (n/√p) × (n/√p)
- Communication complexity: Θ(n²/√p)

Isoefficiency Analysis
- Sequential algorithm: Θ(n³)
- Parallel overhead: Θ(√p · n²)
- Isoefficiency relation: n³ ≥ C√p · n² ⇒ n ≥ C√p
- This system is highly scalable
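Again as a worked step (with the same M(n) = n² memory assumption as before), the relation shows that per-process memory stays constant as p grows, which is what "highly scalable" means here:

% Isoefficiency sketch for Cannon's algorithm, assuming M(n) = n^2.
\begin{align*}
  n^3 &\ge C\sqrt{p}\,n^2 \;\Rightarrow\; n \ge C\sqrt{p} \\
  \frac{M(C\sqrt{p})}{p} &= \frac{C^2 p}{p} = C^2
  \qquad\text{(per-process memory is constant: highly scalable)}
\end{align*}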

Summary
Considered two sequential algorithms
- Iterative, row-oriented algorithm
- Recursive, block-oriented algorithm
- The second has a better cache hit rate as n increases
Developed two parallel algorithms
- First based on rowwise block striped decomposition
- Second (Cannon’s) based on checkerboard block decomposition
- The second algorithm is scalable, while the first is not