What is Parallel and Distributed Computing?


1 What is Parallel and Distributed Computing?
- Solving a single problem faster using multiple CPUs, e.g. matrix multiplication C = A × B (a code sketch follows this slide)
- Parallel = shared memory among all CPUs
- Distributed = local memory per CPU
- Common issues: partitioning, synchronization, dependencies, load balancing
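A minimal sketch of the matrix-multiplication example in C, assuming a shared-memory machine: each row of C depends only on A and B, never on other rows of C, so rows can be partitioned among CPUs with no synchronization. The OpenMP pragma, the 4×4 size, and the test values are illustrative assumptions, not from the slides.

```c
/* Hypothetical sketch: row-partitioned matrix multiply C = A x B.
   Compile with e.g. gcc -fopenmp matmul.c (the pragma is ignored,
   and the code still runs serially, without OpenMP support). */
#include <stdio.h>

#define N 4

int main(void) {
    double A[N][N], B[N][N], C[N][N];

    /* Fill A with sample values; make B the identity so the result
       is easy to check by eye. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = i + j;
            B[i][j] = (i == j) ? 1.0 : 0.0;
        }

    /* Partitioning: rows of C are independent, so they can be split
       among CPUs - the shared-memory case from the slide. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }

    printf("C[0][3] = %g (expected %g)\n", C[0][3], A[0][3]);
    return 0;
}
```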

2 Why Parallel and Distributed Computing?
Grand Challenge Problems:
- Weather forecasting; global warming
- Materials design: superconducting materials at room temperature; nano-devices; spaceships
- Organ modeling; drug discovery

3 Why Parallel and Distributed Computing?
Physical limitations of circuits:
- Heat and speed-of-light effects limit circuit speed
- Superconducting materials can counter the heat effect
- The speed-of-light effect has no solution!

4 Microprocessor Revolution
[Chart: processor speed (log scale) vs. time, following Moore's Law; micros overtake minis, mainframes, and supercomputers]

5 Why Parallel and Distributed Computing?
VLSI and the effect of integration:
- 1M transistors is enough for full functionality, e.g. DEC's Alpha (1990s); the rest must go into multiple CPUs per chip
- Cost: multitudes of average CPUs gave better FLOPS/$ than traditional supercomputers

6 Modern Parallel Computers
- Caltech's Cosmic Cube (Seitz and Fox)
- Commercial copy-cats:
  - nCUBE Corporation (512 CPUs)
  - Intel Supercomputer Systems: iPSC/1, iPSC/2, Intel Paragon (512 CPUs)
  - Lots more
- Thinking Machines Corporation:
  - CM-2 (65K 4-bit CPUs), a 12-dimensional hypercube, SIMD
  - CM-5, fat-tree interconnect, MIMD
- Roadrunner (Los Alamos NL): 116,640 cores, including ~12K IBM Cell processors
- Japan's K computer; China's Tianhe-1A

8 Instruction Cycle
The instruction cycle is broken into 4 stages (a toy walk-through follows this slide):
1. Instruction fetch: fetch and decode the instruction, obtain any operands, update the PC
2. Execute: execute arithmetic instructions, compute branch target addresses, compute memory addresses
3. Memory access: access memory for load or store instructions; fetch the instruction at the target of a branch
4. Store results: write instruction results back to the register file
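To make the four stages concrete, here is a toy walk of a single register-register add through them. The register file, the decoded fields, and the 4-byte PC increment are hypothetical details for illustration, not from the slides.

```c
/* Toy sketch: one "add r3, r1, r2" passing through the four stages
   of a made-up register machine. */
#include <stdio.h>

int main(void) {
    int reg[4] = {0, 7, 5, 0};   /* tiny register file: r0..r3 */
    int pc = 0;

    /* Stage 1 - instruction fetch: fetch & decode, obtain operands,
       update the PC. */
    int src1 = 1, src2 = 2, dst = 3;   /* decoded instruction fields */
    int op1 = reg[src1], op2 = reg[src2];
    pc += 4;                           /* next sequential instruction */

    /* Stage 2 - execute: perform the arithmetic. */
    int result = op1 + op2;

    /* Stage 3 - memory access: nothing to do for a
       register-register add. */

    /* Stage 4 - store results: write back to the register file. */
    reg[dst] = result;

    printf("pc = %d, r3 = %d\n", pc, reg[dst]);
    return 0;
}
```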

9 Pipelining SPARC is a RISC machine – want to complete one instruction per cycle Overlap stages of different instructions to achieve parallel execution Can obtain a speedup by a factor of 4 Hardware does not have to run 4 times faster – break h/w into 4 parts to run concurrently

10 Pipelining Sequential: each h/w stage idle 75% of the time. timeex = 4 * i Parallel: each h/w stage working after filling the pipeline. timeex = 3 + i

11 Why Parallel and Distributed Computing?
Everyday reasons:
- Available local networked workstations and Grid resources should be utilized
- Solve compute-intensive problems faster
- Make infeasible problems feasible
- Reduce design time
- Leverage large combined memory
- Solve larger problems in the same amount of time
- Improve the answer's precision
- Gain competitive advantage
- Exploit commodity multi-core and GPU chips

12 Why MPI/PVM?
- MPI = "Message Passing Interface"; PVM = "Parallel Virtual Machine"
- Standard specification for message-passing libraries
- Libraries available on virtually all parallel computers
- Free libraries also available for networks of workstations and commodity clusters, on Linux, Unix, and Windows platforms
- Can program in C, C++, and Fortran (a minimal example follows this slide)
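As a hedged illustration of the message-passing model, here is a minimal MPI program in C: each process has only local memory, and data moves between them as explicit messages. The rank numbers and integer payload are arbitrary, and the build/run commands assume an MPI implementation such as MPICH or Open MPI.

```c
/* Minimal MPI sketch: rank 1 sends an integer to rank 0.
   Build/run (implementation-dependent):
     mpicc hello.c -o hello && mpirun -np 2 ./hello */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        value = 42;   /* arbitrary payload */
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0 received %d from rank 1\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Every rank runs the same program but takes a different branch, which is the usual single-program, multiple-data style of MPI codes.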

13 Why Shared-Memory Programming?
- Easier conceptual environment
- Programmers are typically familiar with concurrent threads and processes sharing an address space
- CPUs within multi-core chips share memory
- OpenMP is an application programming interface (API) for shared-memory systems
- Supports higher-performance parallel programming of symmetric multiprocessors (a minimal example follows this slide)
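A minimal OpenMP sketch of this shared-memory style: one pragma splits a loop among threads that all see the same address space. The summed series is just an illustrative workload, and compilation assumes a flag such as gcc's -fopenmp.

```c
/* Minimal OpenMP sketch: the parallel-for pragma divides loop
   iterations among threads; the reduction clause avoids a data
   race on the shared accumulator. */
#include <stdio.h>
#include <omp.h>

int main(void) {
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 1000000; i++)
        sum += 1.0 / ((double)i * i);   /* converges to pi^2/6 */

    printf("sum = %.6f using up to %d threads\n",
           sum, omp_get_max_threads());
    return 0;
}
```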

