Assignment Solving System of Linear Equations Using MPI Phạm Trần Vũ.

Presentation transcript:

Assignment Solving System of Linear Equations Using MPI Phạm Trần Vũ

-2- Assignment (1)
 Develop an MPI program to solve systems of linear equations
 Requirements:
–The program must be able to solve systems with different numbers of variables
–The parallelization strategy must be able to run on different numbers of processors
 Due date: 31 May 2010

-3- Assignment 1 (2)
 Submission:
–Report on:
 Parallelization strategy used in the program
 Theoretical speedup of the strategy used in the program (ignoring the cost of message passing)
 Practical speedup measured by experiments on the MPI program and the sequential version
 Calculation of theoretical and practical efficiency
–Source code of the program
–Demonstration of the program in the lab

-4- System of Linear Equations
 A general linear system has m equations and n unknown variables
 Usually expressed as Ax = b, where A is the m×n matrix of coefficients a_ij, x is the vector of unknowns (x_1, …, x_n), and b is the right-hand-side vector (b_1, …, b_m)
 We are interested in systems with n equations and n unknown variables (m = n)

-5- Solving Systems of Linear Equations
 A solution of a linear system is an assignment of values to the variables x_1, x_2, …, x_n that satisfies every equation in the system
 Two classes of methods for solving linear systems
–Direct
 Backward substitution
 Gaussian elimination algorithm
–Indirect (iterative)
 By successive approximation
 Jacobi algorithm

-6- Backward Substitution
 Used to solve a system Ax = b where A is an upper triangular matrix
 Example
1x_1 + 1x_2 – 1x_3 + 4x_4 = 8
–2x_2 – 3x_3 + 1x_4 = 5
2x_3 – 3x_4 = 0
2x_4 = 4
 Solving from the last equation upward gives x_4 = 2, then x_3 = 3, x_2 = –6, and x_1 = 9
 The time to solve a linear system using backward substitution is O(n^2)

-7- Backward Substitution Algorithm
n: size of system
a[1..n][1..n]: matrix A
b[1..n]: vector b
x[1..n]: vector x
begin
  for i = n down to 1 do
    x[i] = b[i]/a[i][i]
    for j = 1 to i – 1 do
      b[j] = b[j] – x[i]*a[j][i]
    end for
  end for
end

-8- Parallelizing Backward Substitution (1)
1x_1 + 1x_2 – 1x_3 + 4x_4 = 8
–2x_2 – 3x_3 + 1x_4 = 5
2x_3 – 3x_4 = 0
2x_4 = 4
begin
  for i = n down to 1 do
    x[i] = b[i]/a[i][i]
    for j = 1 to i – 1 do
      b[j] = b[j] – x[i]*a[j][i]
    end for
  end for
end

-9- Parallelizing Backward Substitution (2)
 Each processor can be assigned a number of equations
 Once a variable is solved, it is broadcast to the other processors so they can update their unsolved variables
 A good parallelization strategy divides the load equally among processors while keeping the overhead of message passing low

-10- Gaussian Elimination (1)
 Reduces a general Ax = b system to a Tx = c system, where T is an upper triangular matrix
 Uses the principle that a row can be replaced by the sum of that row and a nonzero multiple of any other row of the system
 The row selected for multiplication is called the pivot row
 Then apply the backward substitution algorithm to solve the system
 Example:

-11- Gaussian Elimination (2)
 Original system
 Step 1
 Step 2

-12- Gaussian Elimination (3)
 Complexity of Gaussian elimination is O(n^3)
 For good numerical stability, partial pivoting is used
–At step i (driving to zero all nonzero values of column i in the rows below row i):
–Select the row, from row i down to row n, that has the largest absolute value in column i
–Swap the selected row with row i

-13- Gaussian Elimination Sequential Algorithm
i := 1
j := 1
while (i ≤ n and j ≤ n) do
  Find pivot in column j, starting in row i:
  maxi := i
  for k := i+1 to n do
    if abs(A[k,j]) > abs(A[maxi,j]) then
      maxi := k
    end if
  end for
  if A[maxi,j] ≠ 0 then
    swap rows i and maxi, but do not change the value of i
    divide each entry in row i by A[i,j]
    for u := i+1 to n do
      subtract A[u,j] * row i from row u
    end for
    i := i + 1
  end if
  j := j + 1
end while

-14- Parallelizing Gaussian Elimination
 Each processor can be assigned a number of rows of the system
 If partial pivoting is used
–The selection of the pivot row has to be done across processors
–The pivot row needs to be broadcast to all other processors
 Rows should be assigned to processors in a way that lets the backward substitution algorithm run straight away without re-allocating the work

-15- Jacobi Algorithm
 An iterative method that estimates the values of the variables over a number of iterations
 At iteration t + 1, variable x_i is estimated from the iteration-t values by the following equation
x_i^(t+1) = (b_i – Σ_{j≠i} a_ij x_j^(t)) / a_ii
 Stop iterating when the greatest difference between the newly estimated values of the variables and the old values is smaller than some threshold
 If the calculation does not converge, no solution is found

-16- Sequential Implementation of Jacobi Algorithm (1)
 Input
n: size of the system
epsilon: convergence threshold
a[1..n][1..n]: matrix A
b[1..n]: vector b
 Output
x[1..n]: old estimate of solution vector
newx[1..n]: new estimate of solution vector
diff: maximum difference after one iteration

-17- Sequential Implementation of Jacobi Algorithm (2)
begin
  for i = 1 to n do
    x[i] = b[i]/a[i][i]  //initial estimation
  end for
  do
    diff = 0
    for i = 1 to n do
      newx[i] = b[i]
      for j = 1 to n do
        if j != i then
          newx[i] = newx[i] – a[i][j]*x[j]
        end if
      end for
      newx[i] = newx[i]/a[i][i]
    end for
    for i = 1 to n do
      diff = max(diff, abs(x[i] – newx[i]))
      x[i] = newx[i]
    end for
  while diff > epsilon
end

-18- Parallelizing Jacobi Algorithm
 Each processor can be assigned a number of variables for estimation
 After each iteration, the newly estimated values need to be broadcast to all processors

-19- Conclusion
 For the assignment, either a direct or an iterative method can be implemented
 The corresponding sequential algorithm has to be implemented to calculate speedup and efficiency
 Read Chapter 9, "Solving Linear Systems", of Michael J. Quinn's "Parallel Computing: Theory and Practice" for more detail