Types of Parallel Computers


1 Types of Parallel Computers
Two principal types:
- Shared memory multiprocessor
- Distributed memory multicomputer
ITCS 4/5145 Cluster Computing, UNC-Charlotte, B. Wilkinson, 2006.

2 Shared Memory Multiprocessor

3 Conventional Computer
Consists of a processor executing a program stored in a (main) memory. Each main memory location is identified by its address. Addresses start at 0 and extend to 2^b - 1 when there are b bits (binary digits) in the address; for example, with b = 32 there are 2^32 (about 4.3 billion) addressable locations.
[Figure: main memory supplying instructions to the processor and exchanging data with it.]

4 Shared Memory Multiprocessor System
Natural way to extend the single-processor model: have multiple processors connected to multiple memory modules, such that each processor can access any memory module. The modules form one address space.
[Figure: processors connected through processor-memory interconnections to memory modules forming a single address space.]

5 Simplistic view of a small shared memory multiprocessor
[Figure: processors connected to shared memory over a bus.]
Examples: dual Pentiums, quad Pentiums

6 Example: Quad Shared Memory Multiprocessor
Real computer systems have cache memory between the main memory and the processors: Level 1 (L1) cache and Level 2 (L2) cache.
[Figure: four processors, each with its own L1 cache, L2 cache, and bus interface, connected over a processor/memory bus to a memory controller, shared memory, and an I/O interface on an I/O bus.]

7 “Recent” innovation: dual-core and multi-core processors
Two or more independent processors in one package. Actually an old idea, but not put into wide practice until recently. Since the L1 cache is usually inside the package and the L2 cache outside it, dual-/multi-core processors usually share the L2 cache.

8 Examples
Dual-core Pentiums (Intel Core™2 Duo processors) -- two processors in one package sharing a common L2 cache. Introduced April (also hyper-threaded).
Xbox 360 game console -- triple-core PowerPC microprocessor.
PlayStation 3 Cell processor -- 9-core design.
References and more information:

9 Programming Shared Memory Multiprocessors
Several possible ways:
1. Use threads: the programmer decomposes the program into individual parallel sequences (threads), each able to access shared variables declared outside the threads. Example: Pthreads (see the sketch after this list).
2. Use library functions and preprocessor compiler directives with a sequential programming language to declare shared variables and specify parallelism. Example: OpenMP, an industry standard; it consists of library functions, compiler directives, and environment variables, and needs an OpenMP-aware compiler (also sketched after the list).

10
3. Use a modified sequential programming language with added syntax to declare shared variables and specify parallelism. Example: UPC (Unified Parallel C); needs a UPC compiler.
4. Use a specially designed parallel programming language with syntax to express parallelism; the compiler automatically creates executable code for each processor (not now common).
5. Use a regular sequential programming language such as C and ask a parallelizing compiler to convert it into parallel executable code (also not now common).
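A minimal Pthreads sketch of approach 1 (an assumed illustration, not taken from the original slides): two threads each sum half of a shared array declared outside the thread function.

    /* sum.c -- compile with: cc sum.c -o sum -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    #define N 8
    int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};  /* shared: visible to all threads */
    long partial[2];                          /* one result slot per thread */

    void *sum_half(void *arg) {
        long id = (long)arg;                  /* thread id: 0 or 1 */
        long s = 0;
        for (int i = (int)(id * N / 2); i < (int)((id + 1) * N / 2); i++)
            s += data[i];
        partial[id] = s;                      /* no race: each thread writes its own slot */
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        for (long id = 0; id < 2; id++)
            pthread_create(&t[id], NULL, sum_half, (void *)id);
        for (int id = 0; id < 2; id++)
            pthread_join(t[id], NULL);
        printf("sum = %ld\n", partial[0] + partial[1]);
        return 0;
    }

The OpenMP version of the same computation (approach 2) replaces the explicit decomposition with a compiler directive; compile with an OpenMP-aware compiler (e.g. cc -fopenmp):

    long sum = 0;
    #pragma omp parallel for reduction(+:sum)  /* threads sum chunks; results combined */
    for (int i = 0; i < N; i++)
        sum += data[i];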

11 Message-Passing Multicomputer
Complete computers connected through an interconnection network, communicating by passing messages.
[Figure: computers, each with a processor and local memory, connected by an interconnection network carrying messages.]

12 Interconnection Networks
Limited and exhaustive interconnections:
- 2- and 3-dimensional meshes
- Hypercube (not now common)
Using switches:
- Crossbar
- Trees
- Multistage interconnection networks

13 Two-dimensional array (mesh)
[Figure: computers/processors connected by links in a two-dimensional mesh.]
Also three-dimensional; used in some large high-performance systems.

14 Three-dimensional hypercube

15 Four-dimensional hypercube
Hypercubes were popular in the 1980s, but are not now.
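To make the connection pattern concrete, here is a tiny C sketch (an assumed illustration, not from the slides): in a d-dimensional hypercube, each node has a d-bit address and is linked to the d nodes whose addresses differ from its own in exactly one bit.

    #include <stdio.h>

    /* Print the neighbors of `node` in a d-dimensional hypercube
       by flipping each of its d address bits in turn. */
    void neighbors(int node, int d) {
        for (int k = 0; k < d; k++)
            printf("%d ", node ^ (1 << k));
        printf("\n");
    }

    int main(void) {
        neighbors(5, 4);  /* node 0101 in a 4-D cube: prints 4 7 1 13 */
        return 0;
    }

Every node therefore has exactly d links, and a message needs at most d hops between any two nodes.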

16 Crossbar switch
[Figure: processors connected to memories through a grid of switches.]

17 Tree
[Figure: processors at the leaves of a tree of switch elements, with links leading up to the root switch.]

18 Multistage Interconnection Network Example: Omega network
2 x 2 switch elements (straight-through or crossover connections).
[Figure: an 8-input, 8-output Omega network; inputs 000-111 on the left, outputs 000-111 on the right, connected through stages of 2 x 2 switches.]
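The switch settings can be derived from the destination address alone, using the standard destination-tag routing scheme (a sketch assumed here, not stated on the slide): at stage i, a switch sets its output according to bit i of the destination, taken most significant bit first. A small C trace for the 8-input network:

    #include <stdio.h>

    #define N 8       /* inputs/outputs */
    #define NBITS 3   /* log2(N) stages */

    /* Route from input src to output dst; returns the final line (== dst). */
    int omega_route(int src, int dst) {
        int p = src;
        for (int stage = 0; stage < NBITS; stage++) {
            /* perfect shuffle between stages: rotate the 3-bit line number left */
            p = ((p << 1) | (p >> (NBITS - 1))) & (N - 1);
            /* 2 x 2 switch: the low bit of the line number is set to the next
               destination bit (MSB first), selecting the upper (0) or lower (1)
               output of the switch */
            int bit = (dst >> (NBITS - 1 - stage)) & 1;
            p = (p & ~1) | bit;
            printf("  after stage %d: line %d\n", stage, p);
        }
        return p;
    }

    int main(void) {
        printf("route input 010 -> output 110:\n");
        omega_route(2, 6);   /* passes through lines 5, 3, then 6 */
        return 0;
    }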

19 Networked Computers as a Computing Platform
A network of computers became a very attractive alternative to expensive supercomputers and parallel computer systems for high-performance computing in the early 1990s. Several early projects; notable ones include the Berkeley NOW (network of workstations) project and the NASA Beowulf project.

20 Key advantages:
- Very high performance workstations and PCs are readily available at low cost.
- The latest processors can easily be incorporated into the system as they become available.
- Existing software can be used or modified.

21 Beowulf Clusters*
A group of interconnected “commodity” computers achieving high performance at low cost. Typically uses commodity interconnects (high-speed Ethernet) and the Linux OS.
* “Beowulf” comes from the name given to the NASA Goddard Space Flight Center cluster project.

22 Cluster Interconnects
Originally Fast Ethernet on low-cost clusters; Gigabit Ethernet offers an easy upgrade path.
More specialized/higher performance:
- Myrinet (Gbits/sec) -- disadvantage: single vendor
- cLan
- SCI (Scalable Coherent Interface)
- QNet
- InfiniBand -- may become important, as InfiniBand interfaces may be integrated on next-generation PCs

23 Dedicated cluster with a master node and compute nodes
[Figure: user computers reach the master node over an external network; the master node's Ethernet interface connects through a switch on a local network to the compute nodes of the dedicated cluster.]

24 Software Tools for Clusters
Based upon message-passing parallel programming:
- Parallel Virtual Machine (PVM), developed in the late 1980s; became very popular.
- Message-Passing Interface (MPI), a standard defined in the 1990s.
Both provide a set of user-level libraries for message passing, for use with regular programming languages (C, C++, ...); a minimal MPI sketch follows.
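A minimal MPI sketch of the message-passing style (an assumed illustration, not from the slides): process 0 sends one integer to process 1. Compile with mpicc and run with mpirun -np 2, as provided by implementations such as MPICH or Open MPI.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("process 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }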

