Supercomputing and Science
An Introduction to High Performance Computing
Part I: Overview
Henry Neeman

OU Supercomputing Center for Education & Research
Goals of These Workshops
Introduce undergrads, grads, staff and faculty to supercomputing issues.
Provide a common language for discussing supercomputing issues when we meet to work on your research.
NOT: to teach everything you need to know about supercomputing – that can't be done in a handful of hour-long workshops!

OSCER History
Aug 2000: founding of OU High Performance Computing interest group
Nov 2000: first meeting of OUHPC and OU Chief Information Officer Dennis Aebersold
Feb 2001: meeting between OUHPC, CIO and VP for Research Lee Williams; draft white paper released
Apr 2001: Henry named Director of HPC for Department of Information Technology
July 2001: draft OSCER charter released

Friday, August: The OU Supercomputing Center for Education & Research is now open for business! Thanks to everyone who helped make this happen. Celebration at 5:30 today at Brothers (Buchanan, just north of Boyd).

What is Supercomputing?
Supercomputing is the biggest, fastest computing right this minute. Likewise, a supercomputer is the biggest, fastest computer right this minute. So, the definition of supercomputing is constantly changing. The technical term for supercomputing is High Performance Computing (HPC).

What is HPC Used For?
Numerical simulation of physical phenomena
Data mining: finding nuggets of information in a vast sea of data
Visualization: turning a vast sea of data into pictures that a scientist can understand
… and lots of other stuff

What Is HPC Used For at OU?
Astronomy, Biochemistry, Chemical Engineering, Chemistry, Civil Engineering, Computer Science, Electrical Engineering, Industrial Engineering, Geography, Geophysics, Mathematics, Mechanical Engineering, Meteorology, Microbiology, Physics, Zoology
Note: some of these aren't using HPC yet, but plan to.

HPC Issues
The tyranny of the storage hierarchy
Parallelism: do many things at the same time
Instruction-level parallelism: doing multiple operations at the same time (e.g., add, multiply, load and store simultaneously)
Multiprocessing: multiple CPUs working on different parts of a problem at the same time
Shared Memory Multiprocessing
Distributed Multiprocessing
Hybrid Multiprocessing

The Tyranny of the Storage Hierarchy

The Storage Hierarchy
From small, fast and expensive to big, slow and cheap:
Registers
Cache memory
Main memory (RAM)
Hard disk
Removable media (e.g., CDROM)
Internet

Henry's Laptop: Dell Inspiron 4000 [1]
Pentium III 700 MHz w/256 KB L2 Cache
256 MB RAM
30 GB Hard Drive
DVD/CD-RW Drive
10/100 Mbps Ethernet
56 Kbps Phone Modem

Typical Computer Hardware
Central Processing Unit
Primary storage
Secondary storage
Input devices
Output devices

Central Processing Unit
Also called CPU or processor: the "brain". Parts:
Control Unit: figures out what to do next – e.g., whether to load data from memory, or to add two values together, or to store data into memory, or to decide which of two possible actions to perform (branching)
Arithmetic/Logic Unit: performs calculations – e.g., adding, multiplying, checking whether two values are equal
Registers: where data reside that are being used right now

Primary Storage
Main Memory: also called RAM ("Random Access Memory"); where data reside when they're being used by a program that's currently running.
Cache: a small area of much faster memory; where data reside when they've been used recently and/or are about to be used.
Primary storage is volatile: values in primary storage disappear when the power is turned off.

Secondary Storage
Where data and programs reside that are going to be used in the future.
Secondary storage is non-volatile: values don't disappear when the power is turned off.
Examples: hard disk, CDROM, DVD, magnetic tape, Zip, Jaz.
Many are portable: you can pop out the CD/tape/Zip/floppy and take it with you.

Input/Output
Input devices – e.g., keyboard, mouse, touchpad, joystick, scanner
Output devices – e.g., monitor, printer, speakers

Why Does Cache Matter?
The speed of data transfer between Main Memory and the CPU is much slower than the speed of calculating, so the CPU spends most of its time waiting for data to come in or go out: this is the bottleneck.

Why Have Cache?
Cache is (typically) the same speed as the CPU, so the CPU doesn't have to wait nearly as long for stuff that's already in cache: it can do more operations per second!

Henry's Laptop, Again: Dell Inspiron 4000 [1]
Pentium III 700 MHz w/256 KB L2 Cache
256 MB RAM
30 GB Hard Drive
DVD/CD-RW Drive
10/100 Mbps Ethernet
56 Kbps Phone Modem

Storage Speed, Size, Cost (on Henry's laptop)

Storage level                       Speed (MB/sec, peak)      Size (MB)         Cost ($/MB)
Registers (Pentium III 700 MHz)     5340 [2] (700 MFLOPS*)    112 bytes** [7]   ???
Cache Memory (L2)                   11,200 [3]                0.25              $400 [8]
Main Memory (100 MHz RAM)           800 [4]                   256               $1.17 [8]
Hard Drive                          100 [5]                   30,000            $0.009 [8]
Ethernet (100 Mbps)                 12 [6]                    unlimited         charged per month (typically)
CD-RW                               —                         unlimited         $ [9]
Phone Modem (56 Kbps)               0.007                     unlimited         free (local call)

* MFLOPS: millions of floating point operations per second
** 8 32-bit integer registers, 8 80-bit floating point registers

Storage Use Strategies
Register reuse: do a lot of work on the same data before working on new data.
Cache reuse: the program is much more efficient if all of the data and instructions fit in cache; if not, try to use what's in cache a lot before using anything that isn't in cache.
Data locality: try to access data that are near each other in memory before data that are far.
I/O efficiency: do a bunch of I/O all at once rather than a little bit at a time; don't mix calculations and I/O.

Parallelism, Part I: Instruction-Level Parallelism DON’T PANIC!

Kinds of ILP
Superscalar: perform multiple operations at the same time
Pipeline: start performing an operation on one piece of data while finishing the same operation on another piece of data
Superpipeline: perform multiple pipelined operations at the same time
Vector: load a bunch of pieces of data into registers and operate on all of them

What's an Instruction?
Fetch a value from a specific address in main memory into a specific register
Store a value from a specific register into a specific address in main memory
Add (subtract, multiply, divide, square root, etc) two specific registers together and put the sum in a specific register
Determine whether two registers both contain nonzero values ("AND")
… and so on

DON'T PANIC!

Scalar Operation
z = a * b + c * d
How would this statement be executed?
1. Load a into register R0
2. Load b into R1
3. Multiply R2 = R0 * R1
4. Load c into R3
5. Load d into R4
6. Multiply R5 = R3 * R4
7. Add R6 = R2 + R5
8. Store R6 into z

Does Order Matter?
z = a * b + c * d
One ordering:
1. Load a into R0
2. Load b into R1
3. Multiply R2 = R0 * R1
4. Load c into R3
5. Load d into R4
6. Multiply R5 = R3 * R4
7. Add R6 = R2 + R5
8. Store R6 into z
Another ordering:
1. Load d into R4
2. Load c into R3
3. Multiply R5 = R3 * R4
4. Load a into R0
5. Load b into R1
6. Multiply R2 = R0 * R1
7. Add R6 = R2 + R5
8. Store R6 into z
In the cases where order doesn't matter, we say that the operations are independent of one another.

Superscalar Operation
z = a * b + c * d
1. Load a into R0 AND load b into R1
2. Multiply R2 = R0 * R1 AND load c into R3 AND load d into R4
3. Multiply R5 = R3 * R4
4. Add R6 = R2 + R5
5. Store R6 into z
So, we go from 8 operations down to 5.

Superscalar Loops
DO i = 1, n
  z(i) = a(i)*b(i) + c(i)*d(i)
END DO !! i = 1, n
Each of the iterations is completely independent of all of the other iterations; e.g.,
z(1) = a(1)*b(1) + c(1)*d(1)
has nothing to do with
z(2) = a(2)*b(2) + c(2)*d(2)
Operations that are independent of each other can be performed in parallel.

Superscalar Loops
for (i = 0; i < n; i++) {
  z[i] = a[i]*b[i] + c[i]*d[i];
} /* for i */
1. Load a[i] into R0 AND load b[i] into R1
2. Multiply R2 = R0 * R1 AND load c[i] into R3 AND load d[i] into R4
3. Multiply R5 = R3 * R4 AND load a[i+1] into R0 AND load b[i+1] into R1
4. Add R6 = R2 + R5 AND load c[i+1] into R3 AND load d[i+1] into R4
5. Store R6 into z[i] AND multiply R2 = R0 * R1
6. etc etc etc

Example: Sun UltraSPARC-III
4-way superscalar: can execute up to 4 operations at the same time [10]:
2 integer, memory and/or branch operations – up to 2 arithmetic or logical operations, and/or 1 memory access (load or store), and/or 1 branch
2 floating point operations (e.g., add, multiply)

DON'T PANIC!

Pipelining
Pipelining is like an assembly line or a bucket brigade. An operation consists of multiple stages. After a set of operands complete a particular stage, they move into the next stage. Then, another set of operands can move into the stage that was just abandoned.

Pipelining Example
[Figure: four loop iterations (i = 1 through i = 4) flowing through the five pipeline stages – Instruction Fetch, Instruction Decode, Operand Fetch, Instruction Execution, Result Writeback – over cycles t = 0 through t = 7.]
If each stage takes, say, one CPU cycle, then once the loop gets going, each iteration of the loop only increases the total time by one cycle. So a loop of length 1000 takes only 1004 cycles. [11]

Multiply Is Better Than Divide
In most (maybe all) CPU types, adds and subtracts execute very quickly. So do multiplies. But divides take much longer to execute, typically 5 to 10 times longer than multiplies. More complicated operations, like square root, exponentials, trigonometric functions and so on, take even longer. Also, on some CPU types, divides and other complicated operations aren't pipelined.

Superpipelining
Superpipelining is a combination of superscalar and pipelining. So, a superpipeline is a collection of multiple pipelines that can operate simultaneously. In other words, several different operations can execute simultaneously, and each of these operations can be broken into stages, each of which is filled all the time. So you can get multiple operations per CPU cycle. For example, a Compaq Alpha can have up to 80 different operations running at the same time. [12]

DON'T PANIC!

Why You Shouldn't Panic
In general, the compiler and the CPU will do most of the heavy lifting for instruction-level parallelism. BUT: You need to be aware of ILP, because how your code is structured affects how much ILP the compiler and the CPU can give you.

Parallelism, Part II: Multiprocessing

The Jigsaw Puzzle Analogy

Serial Computing
Suppose I want to do a jigsaw puzzle that has, say, a thousand pieces. We can imagine that it'll take me a certain amount of time. Let's say that I can put the puzzle together in an hour.

Shared Memory Parallelism
If Dan sits across the table from me, then he can work on his half of the puzzle and I can work on mine. Once in a while, we'll both reach into the pile of pieces at the same time (we'll contend for the same resource), which will cause a little bit of slowdown. And from time to time we'll have to work together (communicate) at the interface between his half and mine. But the speedup will be nearly 2-to-1: we might take 35 minutes instead of the ideal 30.

The More the Merrier?
Now let's put Loretta and May on the other two sides of the table. We can each work on a piece of the puzzle, but there'll be a lot more contention for the shared resource (the pile of puzzle pieces) and a lot more communication at the interfaces. So we'll get noticeably less than a 4-to-1 speedup, but we'll still have an improvement, maybe something like 3-to-1: we can get it done in 20 minutes instead of an hour.

Diminishing Returns
If we now put Courtney and Isabella and James and Nilesh on the corners of the table, there's going to be a whole lot of contention for the shared resource, and a lot of communication at the many interfaces. So the speedup we get will be much less than we'd like; we'll be lucky to get 5-to-1. So we can see that adding more and more workers onto a shared resource is eventually going to have a diminishing return.

Distributed Parallelism
Now let's try something a little different. Let's set up two tables, and let's put me at one of them and Dan at the other. Let's put half of the puzzle pieces on my table and the other half of the pieces on Dan's. Now we can work completely independently, without any contention for the shared resource. BUT, the cost of communicating is MUCH higher (we have to get up from our tables and meet), and we need the ability to split up the puzzle pieces reasonably easily (domain decomposition), which may be hard for some puzzles.

More Distributed Processors
It's a lot easier to add more processors in distributed parallelism. But, you always have to be aware of the need to decompose the problem and to communicate between the processors. Also, as you add more processors, it may be harder to load balance the amount of work that each processor gets.

OU Supercomputing Center for Education & Research47 Hybrid Parallelism

Hybrid Parallelism is Good
Some HPC platforms don't support shared memory parallelism, or not very well. When you run on those machines, you can turn the code's shared memory parallelism system off. Some HPC platforms don't support distributed parallelism, or not very well. When you run on those machines, you can turn the code's distributed parallelism system off. Some support both kinds well. So, when you want to use the newest, fastest supercomputer, you can target what it does well without having to rewrite your code.

Why Bother?

Why Bother with HPC at All?
It's clear that making effective use of HPC takes quite a bit of effort, both learning and programming. That seems like a lot of trouble to go to just to get your code to run faster. It's nice to have a code that used to take a day run in an hour. But if you can afford to wait a day, what's the point of HPC? Why go to all that trouble just to get your code to run faster?

Why HPC is Worth the Bother
What HPC gives you that you won't get elsewhere is the ability to do bigger, better, more exciting science. That is, if your code can run faster, that means that you can tackle much bigger problems in the same amount of time that you used to spend on the smaller problem.

Your Fantasy Problem
For those of you with current research projects:
1. Get together with your research group.
2. Imagine that you had an infinitely large, infinitely fast computer.
3. What problems would you run on it?

References
[1] model_inspn_2_inspn_4000.htm
[2]
[3]
[4]
[5] MK2018gas-Over.shtml
[6]
[7] ftp://download.intel.com/design/Pentium4/manuals/ pdf
[8] customer_id=19&keycode=6V944&view=1&order_code=40WX
[9]
[10] Ruud van der Pas, "The UltraSPARC-III Microprocessor: Architecture Overview," 2001, p. 23.
[11] Kevin Dowd and Charles Severance, High Performance Computing, 2nd ed. O'Reilly, 1998, p. 16.
[12] "Alpha Processor" (internal Compaq report), page 2.