Parallel & Cluster Computing: Clusters & Distributed Parallelism

Presentation transcript:

Parallel & Cluster Computing Clusters & Distributed Parallelism Paul Gray, University of Northern Iowa David Joiner, Shodor Education Foundation Tom Murphy, Contra Costa College Henry Neeman, University of Oklahoma Charlie Peck, Earlham College National Computational Science Institute August

OU Supercomputing Center for Education & Research 2 Parallelism Less fish … More fish! Parallelism means doing multiple things at the same time: you can get more work done in the same time.

OU Supercomputing Center for Education & Research 3 What Is Parallel Computing? Parallel computing is the use of multiple processors to solve a problem, and in particular the use of multiple processors operating concurrently on different parts of a problem. The different parts could be different tasks, or the same task on different pieces of the problem’s data.

OU Supercomputing Center for Education & Research 4 Kinds of Parallelism Shared Memory Multithreading Distributed Memory Multiprocessing  this workshop! Hybrid Shared/Distributed Parallelism

OU Supercomputing Center for Education & Research 5 Why Parallelism Is Good The Trees: We like parallelism because, as the number of processors working on a problem grows, we can solve the same problem in less time. The Forest: We like parallelism because, as the number of processors working on a problem grows, we can solve bigger problems.

OU Supercomputing Center for Education & Research 6 Parallelism Jargon Threads: execution sequences that share a single memory area (“address space”) and can see each other’s memory Processes: execution sequences with their own independent, private memory areas  this workshop! … and thus: Multithreading: parallelism via multiple threads Multiprocessing: parallelism via multiple processes As a general rule, Shared Memory Parallelism is concerned with threads, and Distributed Parallelism (this workshop!) is concerned with processes.

OU Supercomputing Center for Education & Research 7 Jargon Alert In principle: “shared memory parallelism” means “multithreading,” and “distributed parallelism” means “multiprocessing.” In practice, the following terms are often used interchangeably: parallelism, concurrency (not as popular these days), multithreading, multiprocessing. Typically, you have to figure out what is meant based on the context.

Distributed Multiprocessing: The Desert Islands Analogy

OU Supercomputing Center for Education & Research 9 An Island Hut Imagine you’re on an island in a little hut. Inside the hut is a desk. On the desk is a phone, a pencil, a calculator, a piece of paper with numbers, and a piece of paper with instructions.

OU Supercomputing Center for Education & Research 10 Instructions The instructions are split into two kinds: Arithmetic/Logical: e.g., Add the 27th number to the 239th number Compare the 96th number to the 118th number to see whether they are equal Communication: e.g., dial a specific phone number and leave a voicemail containing the 962nd number; call your voicemail box, collect a voicemail, and put that number in the 715th slot

OU Supercomputing Center for Education & Research 11 Is There Anybody Out There? If you’re in a hut on an island, you aren’t specifically aware of anyone else. Especially, you don’t know whether anyone else is working on the same problem as you are, and you don’t know who’s at the other end of the phone line. All you know is what to do with the voicemails you get, and what phone numbers to send voicemails to.

OU Supercomputing Center for Education & Research 12 Someone Might Be Out There Now suppose that Dave is on another island somewhere, in the same kind of hut, with the same kind of equipment. Suppose that he has the same list of instructions as you, but a different set of numbers (both data and phone numbers). Like you, he doesn’t know whether there’s anyone else working on his problem.

OU Supercomputing Center for Education & Research 13 Even More People Out There Now suppose that Paul and Tom are also in huts on islands. Suppose that each of the four has the exact same list of instructions, but different lists of numbers. And suppose that the phone numbers that people call are each others’. That is, your instructions have you call Dave, Paul and Tom, Dave’s has him call Paul, Tom and you, and so on. Then you might all be working together on the same problem.

OU Supercomputing Center for Education & Research 14 All Data Are Private Notice that you can’t see Dave’s or Paul’s or Tom’s numbers, nor can they see yours or each other’s. Thus, everyone’s numbers are private: there’s no way for anyone to share numbers, except by leaving them in voicemails.

OU Supercomputing Center for Education & Research 15 Long Distance Calls: 2 Costs When you make a long distance phone call, you typically have to pay two costs: Connection charge: the fixed cost of connecting your phone to someone else’s, even if you’re only connected for a second Per-minute charge: the cost per minute of talking, once you’re connected If the connection charge is large, then you want to make as few calls as possible.
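
To see how quickly the connection charge adds up, here is a small sketch; it is not from the original slides, and the per-message cost, per-byte cost, and message sizes are made-up numbers for illustration only. It compares moving 1 MB as one big message versus as a thousand small ones.

PROGRAM call_costs
  !! A sketch with assumed numbers: the fixed cost per message (the
  !! "connection charge") dominates when data is split into many small messages.
  IMPLICIT NONE
  REAL, PARAMETER :: connection_cost = 10.0e-6  !! assumed: 10 microseconds per message
  REAL, PARAMETER :: cost_per_byte   = 1.0e-8   !! assumed: 10 nanoseconds per byte (100 MB/s)
  REAL, PARAMETER :: total_bytes     = 1.0e6    !! 1 MB of data to move
  REAL :: one_big_message, thousand_small_messages

  one_big_message = connection_cost + total_bytes * cost_per_byte
  thousand_small_messages = 1000.0 *                                   &
       (connection_cost + (total_bytes / 1000.0) * cost_per_byte)

  PRINT *, 'One 1 MB message:       ', one_big_message,         ' seconds'
  PRINT *, '1000 messages of 1 KB:  ', thousand_small_messages, ' seconds'
END PROGRAM call_costs

With these made-up rates, splitting the same 1 MB of data into a thousand small messages roughly doubles the total time, which is why you want to make as few "calls" as possible.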

OU Supercomputing Center for Education & Research 16 Like Desert Islands Distributed parallelism is very much like the Desert Islands analogy: Processors are independent of each other. All data are private. Processes communicate by passing messages (like voic s). The cost of passing a message is split into: latency (connection time: time to send a message of length 0 bytes); bandwidth (time per byte).
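
To make the mapping concrete, here is a minimal sketch, not part of the original slides, of two processes exchanging one number with MPI, a standard message-passing library; the program and variable names are made up for illustration. Process 0 "leaves a voicemail" by sending, and process 1 "collects it" by receiving.

PROGRAM island_voicemail
  !! A sketch: process 0 sends a number that only it can see, and
  !! process 1 receives that number into its own private memory.
  IMPLICIT NONE
  INCLUDE 'mpif.h'
  INTEGER :: my_rank, num_procs, mpi_error
  INTEGER :: status(MPI_STATUS_SIZE)
  REAL    :: value

  CALL MPI_Init(mpi_error)
  CALL MPI_Comm_rank(MPI_COMM_WORLD, my_rank, mpi_error)
  CALL MPI_Comm_size(MPI_COMM_WORLD, num_procs, mpi_error)

  IF (my_rank == 0) THEN
    value = 962.0                   !! a number private to process 0
    CALL MPI_Send(value, 1, MPI_REAL, 1, 0, MPI_COMM_WORLD, mpi_error)
  ELSE IF (my_rank == 1) THEN
    CALL MPI_Recv(value, 1, MPI_REAL, 0, 0, MPI_COMM_WORLD,            &
                  status, mpi_error)
    PRINT *, 'Process 1 received', value
  END IF

  CALL MPI_Finalize(mpi_error)
END PROGRAM island_voicemail

How you compile and launch such a program (for example with mpif77/mpif90 and mpirun) depends on the MPI installation on your cluster.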

What is a Cluster?

OU Supercomputing Center for Education & Research 18 What is a Cluster? “… [W]hat a ship is … It's not just a keel and hull and a deck and sails. That's what a ship needs. But what a ship is... is freedom.” – Captain Jack Sparrow “Pirates of the Caribbean” [1]

OU Supercomputing Center for Education & Research 19 What a Cluster is …. A cluster needs a collection of small computers, called nodes, hooked together by an interconnection network (or interconnect for short). It also needs software that allows the nodes to communicate over the interconnect. But what a cluster is … is the relationships among these components.

OU Supercomputing Center for Education & Research 20 What’s in a Cluster? These days, many clusters are made of nodes that are basically Linux PCs, made out of the same hardware components as you’d find in your own PC. As for interconnects, it’s quite common for clusters these days to use regular old Ethernet (typically Gigabit), but there are also several high performance interconnects on the market: Dolphin Wulfkit (SCI), InfiniBand, Myrinet, Quadrics, and 10 Gigabit Ethernet.

OU Supercomputing Center for Education & Research 21 An Actual Cluster: boomer.oscer.ou.edu [photo of the cluster, with the interconnect and the nodes labeled]

OU Supercomputing Center for Education & Research 22 Details about boomer Pentium4 Xeon 2 GHz CPUs, 270 GB RAM, 8700 GB disk OS: Red Hat Enterprise Linux 3.0 Peak speed: 1.08 TFLOP/s* Programming model: distributed multiprocessing * TFLOP/s: trillion floating point calculations per second

OU Supercomputing Center for Education & Research 23 More boomer Details Each node has: 2 Pentium4 XeonDP CPUs (2.0 GHz), 2 GB RAM, 1 or more hard drives (mostly EIDE, a few SCSI), 100 Mbps Ethernet (cheap backup interconnect), Myrinet2000 (high performance interconnect). Breakdown of nodes: 135 compute nodes, 8 storage nodes, 2 “head” nodes (where you log in, compile, etc.), 1 management node (batch system, LDAP, etc.).

Parallelism Issues

OU Supercomputing Center for Education & Research 25 Load Balancing Suppose you have a distributed parallel code, but one processor does 90% of the work, and all the other processors share 10% of the work. Is it a big win to run on 1000 processors? Now suppose that each processor gets exactly 1/N_p of the work, where N_p is the number of processors. Now is it a big win to run on 1000 processors?

OU Supercomputing Center for Education & Research 26 Load Balancing RECALL: load balancing means giving everyone roughly the same amount of work to do.

OU Supercomputing Center for Education & Research 27 Load Balancing Is Good When every processor gets the same amount of work, the job is load balanced. We like load balancing, because it means that our speedup can potentially be linear: if we run on N_p processors, it takes 1/N_p as much time as on one. For some codes, figuring out how to balance the load is trivial (e.g., breaking a big unchanging array into sub-arrays). For others, load balancing is very tricky (e.g., a dynamically evolving collection of arbitrarily many blocks of arbitrary size).
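
For the trivial case above (breaking a big unchanging array into sub-arrays), here is a sketch, not from the original slides, of how each process could work out its own share. The problem size, process count, and rank are stand-in values; a real distributed code would obtain the last two from its message-passing library at run time.

PROGRAM block_decomposition
  !! A sketch: split total_work items into contiguous chunks whose sizes
  !! differ by at most one, so every process gets nearly the same amount of work.
  IMPLICIT NONE
  INTEGER, PARAMETER :: total_work = 1000    !! stand-in problem size
  INTEGER :: num_procs, my_rank, chunk, leftover, my_start, my_end

  num_procs = 8      !! stand-in values: a real code would get these
  my_rank   = 3      !! from the message-passing library at run time

  chunk    = total_work / num_procs        !! base number of items per process
  leftover = MOD(total_work, num_procs)    !! items that don't divide evenly

  !! The first "leftover" processes each take one extra item.
  IF (my_rank < leftover) THEN
    my_start = my_rank * (chunk + 1) + 1
    my_end   = my_start + chunk
  ELSE
    my_start = leftover * (chunk + 1) + (my_rank - leftover) * chunk + 1
    my_end   = my_start + chunk - 1
  END IF

  PRINT *, 'Process', my_rank, 'handles items', my_start, 'through', my_end
END PROGRAM block_decomposition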

OU Supercomputing Center for Education & Research 28 Load Balancing Load balancing can be easy, if the problem splits up into chunks of roughly equal size, with one chunk per processor. Or load balancing can be very hard.

OU Supercomputing Center for Education & Research 29 Amdahl’s Law In 1967, Gene Amdahl came up with an idea so crucial to our understanding of parallelism that they named a Law for him: S = 1 / ((1 - F_p) + F_p / S_p) where S is the overall speedup achieved by parallelizing a code, F_p is the fraction of the code that’s parallelizable, and S_p is the speedup achieved in the parallel part. [2] Huh?

OU Supercomputing Center for Education & Research 30 Amdahl’s Law: Huh? What does Amdahl’s Law tell us? Well, imagine that you run your code on a zillion processors. The parallel part of the code could exhibit up to a factor of a zillion speedup. For sufficiently large values of a zillion, the parallel part would take virtually zero time! But, the serial (non-parallel) part would take the same amount of time as on a single processor. So running your code on infinitely many processors would still take at least as much time as it takes to run just the serial part.
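
To put numbers on the "zillion processors" picture, here is a small sketch, not from the original slides, that evaluates Amdahl’s Law for an assumed parallel fraction of 90%, taking the parallel part’s speedup S_p to equal the number of processors N_p. No matter how many processors you add, the speedup never exceeds 1 / (1 - 0.90) = 10.

PROGRAM amdahl_curve
  !! A sketch: evaluate S = 1 / ((1 - F_p) + F_p / S_p) with S_p = N_p,
  !! for an assumed parallelizable fraction of 90%.
  IMPLICIT NONE
  REAL, PARAMETER :: f_p = 0.90          !! assumed parallelizable fraction
  INTEGER :: k, n_p
  REAL :: speedup

  DO k = 0, 12
    n_p = 2**k                           !! 1, 2, 4, ..., 4096 processors
    speedup = 1.0 / ((1.0 - f_p) + f_p / REAL(n_p))
    PRINT '(A,I5,A,F7.2)', ' N_p = ', n_p, '   speedup = ', speedup
  END DO
END PROGRAM amdahl_curve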

OU Supercomputing Center for Education & Research 31 Max Speedup by Serial Fraction [plot: the maximum achievable speedup versus the serial fraction of the code]

OU Supercomputing Center for Education & Research 32 Amdahl’s Law Example

PROGRAM amdahl_test
  IMPLICIT NONE
  INTEGER,PARAMETER :: a_lot = 1000000   !! size assumed here so the example is complete
  REAL,DIMENSION(a_lot) :: array
  REAL :: scalar
  INTEGER :: index
  READ *, scalar              !! Serial part
  DO index = 1, a_lot         !! Parallel part
    array(index) = scalar * index
  END DO !! index = 1, a_lot
END PROGRAM amdahl_test

If we run this program on infinitely many CPUs, then the total run time will still be at least as much as the time it takes to perform the READ.

OU Supercomputing Center for Education & Research 33 The Point of Amdahl’s Law Rule of Thumb: When you write a parallel code, try to make as much of the code parallel as possible, because the serial part will be the limiting factor on parallel speedup. Note that this rule will not hold when the overhead cost of parallelizing exceeds the parallel speedup. More on this presently.

OU Supercomputing Center for Education & Research 34 Speedup The goal in parallelism is linear speedup: getting the speed of the job to increase by a factor equal to the number of processors. Very few programs actually exhibit linear speedup, but some come close.

OU Supercomputing Center for Education & Research 35 Scalability Scalable means “performs just as well regardless of how big the problem is.” A scalable code has near linear speedup. [Plot: scaling of ARPS, the Advanced Regional Prediction System (weather forecasting). Platinum = NCSA 1024-processor PIII/1 GHz Linux Cluster. Note: NCSA Origin timings are scaled from 19x19x53 domains.]

OU Supercomputing Center for Education & Research 36 Granularity Granularity is the size of the subproblem that each process works on, and in particular the size that it works on between communicating or synchronizing with the others. Some codes are coarse grain (a few very big parallel parts) and some are fine grain (many little parallel parts). Usually, coarse grain codes are more scalable than fine grain codes, because less time is spent managing the parallelism, so more is spent getting the work done.

OU Supercomputing Center for Education & Research 37 Parallel Overhead Parallelism isn’t free. Behind the scenes, the compiler and the hardware have to do a lot of work to make parallelism happen – and this work takes time. This time is called parallel overhead. The overhead typically includes: Managing the multiple processes Communication between processes Synchronization (described later)

OU Supercomputing Center for Education & Research 38 References [1] © Disney. Image from [2] Amdahl, G.M. “Validity of the single-processor approach to achieving large scale computing capabilities.” In AFIPS Conference Proceedings, vol. 30 (Atlantic City, N.J., Apr. 1967). AFIPS Press, Reston, Va., 1967.