
1 Monica Garika, Chandana Guduru

2 METHODS TO SOLVE LINEAR SYSTEMS
Direct methods:
- Gaussian elimination method
- LU factorization method
- Simplex method of linear programming
Iterative methods:
- Jacobi method
- Gauss-Seidel method
- Multigrid method
- Conjugate gradient method

3 Conjugate Gradient Method
CG is an algorithm for the numerical solution of a particular class of linear systems Ax = b, where A is symmetric (A = A^T) and positive definite (x^T A x > 0 for all nonzero vectors x). If A is symmetric and positive definite, then minimizing the quadratic function Q(x) = ½ x^T A x − x^T b + c is equivalent to solving Ax = b.
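Why the two problems are equivalent, written out (a standard identity, added here for completeness; it is not on the slide):

    \nabla Q(x) = \tfrac{1}{2}(A + A^{T})\,x - b = Ax - b \qquad (A = A^{T})

so \nabla Q(x^*) = 0 exactly when Ax^* = b, and positive definiteness of the Hessian A makes x^* the unique minimizer.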

4 Conjugate Gradient Method
The conjugate gradient method builds up a solution x* ∈ R^n in at most n steps in the absence of round-off errors. With round-off errors, more than n steps may be needed to obtain a good approximation of the exact solution x*. For sparse matrices, a good approximation of the exact solution can often be achieved in fewer than n steps, even in the presence of round-off errors.

5 Practical Example
In oil reservoir simulation:
- the number of linear equations corresponds to the number of grid cells of the reservoir;
- the unknown vector x is the oil pressure of the reservoir;
- each element of the vector x is the oil pressure of a specific grid cell.

6 Linear System
Ax = b, where A is a square matrix, x is the unknown vector (what we want to find), and b is the known vector.

7 Matrix Multiplication [figure]

8 Positive Definite Matrix
x^T A x > 0 for every nonzero vector x = [x_1 x_2 … x_n]^T
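A quick way to test both properties numerically (a minimal NumPy sketch; the helper name is ours, not from the slides):

    import numpy as np

    def is_spd(A, tol=1e-12):
        # Symmetric? Compare A with its transpose up to rounding.
        if not np.allclose(A, A.T, atol=tol):
            return False
        # Positive definite? Cholesky factorization succeeds iff it is.
        try:
            np.linalg.cholesky(A)
            return True
        except np.linalg.LinAlgError:
            return False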

9 Procedure
- Find an initial guess x_0 for the solution
- Generate successive approximations x_k to the solution
- Generate residuals r_k
- Generate search directions p_k

10 Conjugate Gradient Iteration
x_0 = 0, r_0 = b, p_0 = r_0
for k = 1, 2, 3, ...
    α_k = (r_{k-1}^T r_{k-1}) / (p_{k-1}^T A p_{k-1})    -- step length
    x_k = x_{k-1} + α_k p_{k-1}                          -- approximate solution
    r_k = r_{k-1} − α_k A p_{k-1}                        -- residual
    β_k = (r_k^T r_k) / (r_{k-1}^T r_{k-1})              -- improvement
    p_k = r_k + β_k p_{k-1}                              -- search direction
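A minimal runnable sketch of this iteration (NumPy; the function name, tolerance, and iteration cap are our choices, not from the slide):

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
        # Minimal CG sketch for symmetric positive definite A (dense, for clarity).
        n = len(b)
        if max_iter is None:
            max_iter = n                 # at most n steps in exact arithmetic
        x = np.zeros(n)                  # x_0 = 0
        r = b.astype(float)              # r_0 = b - A x_0 = b
        p = r.copy()                     # p_0 = r_0
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)    # step length
            x += alpha * p               # approximate solution
            r -= alpha * Ap              # residual
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:    # converged
                break
            beta = rs_new / rs_old       # improvement
            p = r + beta * p             # search direction
            rs_old = rs_new
        return x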

11 Each iteration of the conjugate gradient method has the form
x(t) = x(t−1) + s(t) d(t)
where x(t−1) is the old value of the vector x, s(t) is a scalar step size, and d(t) is the direction vector. Before the first iteration, the values of x(0), d(0) and g(0) must be set.

12 Steps of the Conjugate Gradient Method
Every iteration t computes x(t) in four steps:
Step 1: Compute the gradient: g(t) = A x(t−1) − b
Step 2: Compute the direction vector: d(t) = −g(t) + [g(t)^T g(t) / g(t−1)^T g(t−1)] d(t−1)
Step 3: Compute the step size: s(t) = −d(t)^T g(t) / (d(t)^T A d(t))
Step 4: Compute the new approximation: x(t) = x(t−1) + s(t) d(t)
Note that g(t) = −r(t−1) in the notation of slide 10, so this is the same iteration written in terms of gradients.

13 Sequential Algorithm
1) x_0 := 0
2) r_0 := b − A x_0
3) p_0 := r_0
4) k := 0
5) maxit := maximum number of iterations to be done
6) if k < maxit then perform steps 8 to 16
7) if k = maxit then exit
8) calculate v := A p_k
9) α_k := (r_k^T r_k) / (p_k^T v)
10) x_{k+1} := x_k + α_k p_k
11) r_{k+1} := r_k − α_k v
12) if r_{k+1} is sufficiently small then go to 16
13) β_k := (r_{k+1}^T r_{k+1}) / (r_k^T r_k)
14) p_{k+1} := r_{k+1} + β_k p_k
15) k := k + 1; go to 6
16) result x := x_{k+1}
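A usage sketch for the conjugate_gradient routine sketched after slide 10 (sizes and tolerance are illustrative):

    import numpy as np

    n = 100
    M = np.random.rand(n, n)
    A = M @ M.T + n * np.eye(n)      # symmetric positive definite by construction
    b = np.random.rand(n)

    x = conjugate_gradient(A, b)     # routine from the earlier sketch
    print(np.allclose(A @ x, b, atol=1e-6))   # expected: True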

14 Complexity Analysis
- Identify the data dependencies
- Identify the eventual communications
- CG requires a large number of arithmetic operations, dominated by the matrix-vector product
- As the number of equations increases, the complexity increases as well
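To make this concrete (our own back-of-the-envelope figures, assuming a dense matrix): one matrix-vector product costs about 2n² floating-point operations, so k CG iterations cost roughly 2kn². For n = 10,000 unknowns and k = 100 iterations that is already about 2 × 10¹⁰ operations, which is why sparse storage and parallelization matter.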

15 Why Parallelize?
- Parallelizing the conjugate gradient method is a way to increase its performance.
- It saves memory, because each processor stores only the portions of the rows of matrix A that contain non-zero elements.
- It executes faster, because the matrix is divided into portions that are processed concurrently.

16 How to Parallelize?
For example, choose a row-wise block-striped decomposition of A and replicate all vectors. The multiplication of A by a vector can then be performed without any communication, but an all-gather communication is needed to replicate the result vector (see the sketch below). The overall time complexity of the parallel algorithm is Θ(n²w/p + n log p).
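A minimal mpi4py sketch of this decomposition (our illustration, not the presentation's code; assumes n is divisible by the number of processes):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    p = comm.Get_size()

    n = 8                              # global size; assumed divisible by p
    rows = n // p                      # rows of A owned by this process

    A_local = np.random.rand(rows, n)  # this process's block of rows of A
    x = np.empty(n)
    if rank == 0:
        x = np.random.rand(n)
    comm.Bcast(x, root=0)              # replicate the vector on every process

    y_local = A_local @ x              # local multiply: no communication

    y = np.empty(n)
    comm.Allgather(y_local, y)         # all-gather replicates the result vector
    # run with, e.g.: mpiexec -n 4 python matvec.py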

17 Row-wise Block-Striped Decomposition of a Symmetrically Banded Matrix [figure]

18 Dependency Graph in CG [figure]

19 Algorithm of a Parallel CG on each Computing Worker (cw)
1) Receive A_cw (the local rows of A), b_cw, x_0
2) r_0 := b_cw
3) k := 0
4) maxit := maximum number of iterations to be done
5) if k < maxit then perform steps 7 to 22
6) if k = maxit then exit
7) v_cw := A_cw p_k
8) compute the partial scalar products r_k^T r_k and p_k^T v_cw
9) (the partial results are combined into α_k)
10) Send the partial scalar products, v_cw
11) Receive α_k
12) x_{k+1} := x_k + α_k p_k
13) compute the partial result of r_{k+1}: r_{k+1} := r_k − α_k v
14) Send r_{k+1}, x_{k+1}
15) Receive the convergence status
16) if the result is reached then go to 23
17) compute the partial scalar product r_{k+1}^T r_{k+1}
18) (the partial results are combined into β_k)
19) Send the partial scalar product
20) Receive β_k
21) p_{k+1} := r_{k+1} + β_k p_k
22) k := k + 1
23) Result reached

20 Speedup of Parallel CG on Grid Versus Sequential CG on Intel [figure]

21 Communication and Waiting Time of the Parallel CG on Grid [figure]

22 We consider the difference between Q at the solution x* and at any other vector p. With b = A x*, a direct expansion gives
Q(p) − Q(x*) = ½ (p − x*)^T A (p − x*) ≥ 0,
with equality only for p = x*, since A is positive definite. Hence the solution of the linear system is the unique minimizer of Q.

23 Parallel Computation Design
- Iterations of the conjugate gradient method can be executed only in sequence, so the most advisable approach is to parallelize the computations carried out within each iteration.
- The most time-consuming computation is the multiplication of matrix A by the vectors x and d.
- Additional operations of lower computational complexity are the various vector operations: inner product, addition and subtraction, and multiplication by a scalar (see the sketch below for the inner product).
- When implementing the parallel conjugate gradient method, parallel algorithms for matrix-vector multiplication can be reused.
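Besides the matrix-vector product, the inner products are the other step that requires communication; a minimal mpi4py sketch (our illustration, with illustrative names and sizes):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    r_local = np.random.rand(4)                 # this worker's slice of r
    partial = np.array([r_local @ r_local])     # local contribution to r^T r
    total = np.empty(1)
    comm.Allreduce(partial, total, op=MPI.SUM)  # every worker now holds r^T r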

24 "Pure" Conjugate Gradient Method (Quadratic Case)
0 - Starting at any x_0, define d_0 = −g_0 = b − Q x_0, where g_k is the gradient of the objective function at the point x_k.
1 - Using d_k, calculate the new point x_{k+1} = x_k + α_k d_k, where
    α_k = −(g_k^T d_k) / (d_k^T Q d_k)
2 - Calculate the new conjugate gradient direction d_{k+1} = −g_{k+1} + β_k d_k, where
    β_k = (g_{k+1}^T Q d_k) / (d_k^T Q d_k)

25 Advantages
1) The gradient is always nonzero and linearly independent of all previous direction vectors.
2) A simple formula determines the new direction; it is only slightly more complicated than steepest descent.
3) The process makes good progress because it is based on gradients.

26 Advantages (continued)
- The formulae for updating the direction vector are attractively simple.
- The method is slightly more complicated than steepest descent, but converges faster.
- The conjugate gradient method is an indirect (iterative) solver.
- It is used to solve large systems.
- It requires a small amount of memory.
- It does not require explicit matrix factorizations.

27 Conclusion
The conjugate gradient method is a linear solver used in a wide range of engineering and science applications. However, it has a complexity drawback due to the high number of arithmetic operations involved in matrix-vector and vector-vector multiplications. Our implementation reveals that, despite the communication cost involved in a parallel CG, a performance improvement over the sequential algorithm is still possible.

28 References
- Dimitri P. Bertsekas and John N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods.
- Thomas Rauber and Gudula Rünger, Parallel Programming for Multicore and Cluster Systems.
- Gene Golub and James M. Ortega, Scientific Computing: An Introduction with Parallel Computing.
- Michael J. Quinn, Parallel Programming in C with MPI and OpenMP.
- Jonathan Richard Shewchuk, "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain", Edition 1¼, School of Computer Science, Carnegie Mellon University.

