Models of Parallel Computation Advanced Algorithms & Data Structures Lecture Theme 12 Prof. Dr. Th. Ottmann Summer Semester 2006.


1 Models of Parallel Computation Advanced Algorithms & Data Structures Lecture Theme 12 Prof. Dr. Th. Ottmann Summer Semester 2006

2 Overview: Models of parallel computation. Characteristics of SIMD models. Design issues for network SIMD models. The mesh and the hypercube architectures. Classification of the PRAM model. Matrix multiplication on the EREW PRAM.

3 Models of parallel computation Parallel computational models can be broadly classified into two categories: Single Instruction Multiple Data (SIMD) and Multiple Instruction Multiple Data (MIMD).

4 Models of parallel computation SIMD models are used for solving problems that have regular structures; we will mainly study SIMD models in this course. MIMD models are more general and are used for solving problems that lack regular structures.

5 SIMD models An N-processor SIMD computer has the following characteristics: each processor can store both program and data in its local memory, and every processor holds an identical copy of the same program.

6 SIMD models At each clock cycle, each processor executes the same instruction from this program; however, the data are different in different processors. The processors communicate among themselves either through an interconnection network or through a shared memory.

7 Design issues for network SIMD models A network SIMD model is a graph: the nodes of the graph are the processors and the edges are the links between the processors. Since each processor solves only a small part of the overall problem, the processors must communicate with each other while solving it.

8 Design issues for network SIMD models The main design issues for network SIMD models are communication diameter, bisection width, and scalability. We will discuss the two most popular network models, the mesh and the hypercube, in this lecture.

9 Communication diameter The communication diameter is the diameter of the graph that represents the network model. The diameter of a graph is the maximum, over all pairs of nodes, of the shortest-path distance between them. If the diameter of a model is d, then Ω(d) is a lower bound for any computation on that model in which the two furthest nodes must exchange data.
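The definition above can be sketched in a few lines of Python (an illustration, not part of the original slides; the adjacency-list format and function names are my own): a BFS from each node gives shortest-path hop counts, and the diameter is the largest such count over all pairs.

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distance from `source` to every reachable node via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameter(adj):
    """Maximum over all node pairs of the shortest-path distance."""
    return max(max(bfs_distances(adj, s).values()) for s in adj)

# A 4-node ring: the two furthest nodes are 2 hops apart.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(diameter(ring))  # 2
```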

10 Communication diameter The data may be distributed in such a way that the two furthest nodes need to communicate.

11 Communication diameter Communication between the two furthest nodes takes Ω(d) time steps.

12 Bisection width The bisection width of a network model is the minimum number of links that must be removed to decompose the graph into two equal parts. If the bisection width is large, more information can be exchanged between the two halves of the graph in parallel, and hence problems can be solved faster.
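For small graphs the bisection width can be found by brute force, which makes the definition concrete (again a sketch with my own names and adjacency-list format; this enumeration is exponential and only feasible for toy sizes): try every split into two equal halves and count the crossing links.

```python
from itertools import combinations

def bisection_width(adj):
    """Minimum number of links crossing any split of the nodes into
    two equal halves (brute force over all equal-size partitions)."""
    nodes = list(adj)
    best = float("inf")
    for half in combinations(nodes, len(nodes) // 2):
        half = set(half)
        cut = sum(1 for u in half for v in adj[u] if v not in half)
        best = min(best, cut)
    return best

# A 2 x 2 mesh is a 4-cycle; any equal split cuts at least 2 links.
mesh2 = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bisection_width(mesh2))  # 2
```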

13 Bisection width Dividing the graph into two equal parts.

14 Scalability A network model must be scalable so that more processors can be added easily when new resources become available. The model should be regular, so that each processor has only a small number of links incident on it.

15 Scalability If the number of links per processor is large, it is difficult to add new processors, since too many new links have to be added. There is a trade-off: keeping the diameter small requires more links per processor, while keeping the model scalable requires fewer links per processor.

16 Diameter and Scalability The best model in terms of diameter is the complete graph: its diameter is 1. However, adding a new node to an n-processor machine requires n - 1 new links.

17 Diameter and Scalability The best model in terms of scalability is the linear array: we need to add only one link for a new processor. However, its diameter is n - 1 for a machine with n processors.

18 The mesh architecture Each internal processor of a 2-dimensional mesh is connected to 4 neighbors. When two meshes are combined, only the processors on the boundary need extra links; hence the mesh is highly scalable.

19 The mesh architecture Both the diameter and the bisection width of an n-processor, 2-dimensional mesh are O(√n). (Figure: a 4 x 4 mesh.)
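The √n diameter bound has a simple closed form, sketched below (function name mine; it assumes the mesh is square, i.e. n is a perfect square): walking from one corner to the opposite corner crosses √n - 1 links along a row and √n - 1 along a column.

```python
import math

def mesh_diameter(n):
    """Diameter of a √n x √n mesh: 2(√n - 1) links, corner to
    opposite corner. Assumes n is a perfect square."""
    side = math.isqrt(n)
    return 2 * (side - 1)

print(mesh_diameter(16))  # 6: corner to corner of a 4 x 4 mesh
```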

20 The hypercube architecture (Figure: hypercubes of 0, 1, 2 and 3 dimensions.)

21 The hypercube architecture Each node of a d-dimensional hypercube is numbered using d bits; hence there are 2^d processors in a d-dimensional hypercube. Two nodes are connected by a direct link if and only if their numbers differ in exactly one bit.
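The one-bit-difference rule makes the hypercube's link set easy to generate by XOR-ing each node label with every single-bit mask (a small sketch, function name mine):

```python
def hypercube_edges(d):
    """Links of a d-dimensional hypercube: nodes 0 .. 2^d - 1 are
    connected iff their binary labels differ in exactly one bit."""
    return [(u, u ^ (1 << b))
            for u in range(1 << d)
            for b in range(d)
            if u < u ^ (1 << b)]  # emit each undirected link once

edges = hypercube_edges(3)
print(len(edges))  # 12: a 3-cube has d * 2^(d-1) links
```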

22 The hypercube architecture The diameter of a d-dimensional hypercube is d, since we need to flip at most d bits (traverse at most d links) to reach one processor from another. The bisection width of a d-dimensional hypercube is 2^(d-1).

23 The hypercube architecture The hypercube is a highly scalable architecture: two d-dimensional hypercubes can easily be combined to form a (d+1)-dimensional hypercube. The hypercube has several variants, such as the butterfly, the shuffle-exchange network, and cube-connected cycles.

24 Adding n numbers on the mesh Adding n numbers in O(√n) steps.
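One standard way this O(√n) bound is achieved can be simulated sequentially (a sketch under my own naming; the slide's figure may use a different reduction order): each row folds its partial sums leftward in √n - 1 steps, then the leftmost column folds upward in another √n - 1 steps.

```python
import math

def mesh_sum(values):
    """Sum n numbers on a simulated √n x √n mesh.
    Phase 1 (√n - 1 steps): each row passes partial sums leftward.
    Phase 2 (√n - 1 steps): column 0 passes partial sums upward.
    Total: 2(√n - 1) = O(√n) communication steps."""
    side = math.isqrt(len(values))
    grid = [list(values[r * side:(r + 1) * side]) for r in range(side)]
    steps = 0
    for col in range(side - 1, 0, -1):   # phase 1: all rows in parallel
        for r in range(side):
            grid[r][col - 1] += grid[r][col]
        steps += 1
    for row in range(side - 1, 0, -1):   # phase 2: fold column 0 upward
        grid[row - 1][0] += grid[row][0]
        steps += 1
    return grid[0][0], steps

print(mesh_sum(list(range(16))))  # (120, 6): sum in 2(4 - 1) steps
```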

25 Adding n numbers on the hypercube Adding n numbers in O(log n) steps.
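The logarithmic bound comes from folding the hypercube one dimension at a time; a sequential simulation of that scheme (names mine, assuming n is a power of two) looks like this: in round b every node whose bit b is set sends its partial sum across dimension b, so node 0 holds the total after log2(n) rounds.

```python
def hypercube_sum(values):
    """Sum n = 2^d numbers on a simulated d-dimensional hypercube.
    Round b: each node with bit b set sends its partial sum to the
    neighbor across dimension b. After d = log2(n) rounds, node 0
    holds the total."""
    vals = list(values)
    d = len(vals).bit_length() - 1  # assumes len(vals) is a power of two
    for b in range(d):
        for u in range(len(vals)):
            if u & (1 << b):
                vals[u ^ (1 << b)] += vals[u]  # receiver has bit b clear
    return vals[0], d

print(hypercube_sum(list(range(8))))  # (28, 3): sum in log2(8) rounds
```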

