
1 Copyright 2007, University of Alberta Introduction to High Performance Computing Jon Johansson Academic ICT University of Alberta

2 Copyright 2007, University of Alberta Agenda What is High Performance Computing? What is a “supercomputer”? is it a mainframe? Supercomputer architectures Who has the fastest computers? Speedup Programming for parallel computing The GRID??

3 Copyright 2007, University of Alberta High Performance Computing HPC is the field that concentrates on developing supercomputers and the software that runs on them. A main area of this discipline is developing parallel processing algorithms and software: programs that can be divided into little pieces so that each piece can be executed simultaneously by separate processors.

4 Copyright 2007, University of Alberta High Performance Computing HPC is about “big problems”, i.e. problems that need lots of memory, many CPU cycles, and big hard drives. No matter what field you work in, your research might benefit from making problems “larger”: going from 2D to 3D, using a finer mesh, or increasing the number of elements in the simulation.

5 Copyright 2007, University of Alberta Grand Challenges weather forecasting, economic modeling, computer-aided design, drug design, exploring the origins of the universe, searching for extra-terrestrial life, computer vision, nuclear power and weapons simulations

6 Copyright 2007, University of Alberta Grand Challenges – Protein To simulate the folding of a 300 amino acid protein in water: # of atoms: ~32,000; folding time: 1 millisecond; # of FLOPs: 3 × 10^22; machine speed: 1 PetaFLOP/s; simulation time: 1 year (source: IBM Blue Gene Project). IBM’s answer: the Blue Gene Project, US$ 100 M of funding to build a 1 PetaFLOP/s computer. Ken Dill and Kit Lau’s protein folding model. Charles L. Brooks III, Scripps Research Institute
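The one-year estimate follows directly from dividing the slide's operation count by the machine speed:

    T_{\mathrm{sim}} = \frac{3 \times 10^{22}\ \mathrm{FLOPs}}{10^{15}\ \mathrm{FLOP/s}}
                     = 3 \times 10^{7}\ \mathrm{s} \approx 347\ \mathrm{days} \approx 1\ \mathrm{year}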

7 Copyright 2007, University of Alberta Grand Challenges - Nuclear The National Nuclear Security Administration (http://www.nnsa.doe.gov/) uses supercomputers to run three-dimensional codes to simulate instead of test: address critical problems of materials aging; simulate the environment of the weapon and try to gauge whether the device continues to be usable; stockpile science, molecular dynamics and turbulence calculations. http://archive.greenpeace.org/comms/nukes/fig05.gif

8 Copyright 2007, University of Alberta Grand Challenges - Nuclear March 7, 2002: the first full-system three-dimensional simulations of a nuclear weapon explosion. The simulation used more than 480 million cells (a 780x780x780 grid, if the grid is a cube) and 1,920 processors on IBM ASCI White at the Lawrence Livermore National Laboratory: 2,931 wall-clock hours (122.5 days), 6.6 million CPU hours. ASCI White. Test shot “Badger”, Nevada Test Site – Apr. 1953, yield: 23 kilotons. http://nuclearweaponarchive.org/Usa/Tests/Upshotk.html

9 Copyright 2007, University of Alberta Grand Challenges - Nuclear Advanced Simulation and Computing Program (ASC) http://www.llnl.gov/asc/asc_history/asci_mission.html

10 Copyright 2007, University of Alberta Agenda What is High Performance Computing? What is a “supercomputer”? is it a mainframe? Supercomputer architectures Who has the fastest computers? Speedup Programming for parallel computing The GRID??

11 Copyright 2007, University of Alberta What is a “Mainframe”? large and reasonably fast machines, though speed isn't the most important characteristic: high-quality internal engineering and resulting proven reliability; expensive but high-quality technical support; top-notch security; strict backward compatibility for older software

12 Copyright 2007, University of Alberta What is a “Mainframe”? these machines can, and do, run successfully for years without interruption (long uptimes), and repairs can take place while the mainframe continues to run. The machines are robust and dependable. IBM coined a term to advertise the robustness of its mainframe computers: Reliability, Availability and Serviceability (RAS)

13 Copyright 2007, University of Alberta What is a “Mainframe”? Introducing IBM System z9 109 Designed for the On Demand Business IBM is delivering a holistic approach to systems design Designed and optimized with a total systems approach Helps keep your applications running with enhanced protection against planned and unplanned outages Extended security capabilities for even greater protection capabilities Increased capacity with more available engines per server

14 Copyright 2007, University of Alberta What is a Supercomputer?? at any point in time the term “Supercomputer” refers to the fastest machines currently available; a supercomputer this year might be a mainframe in a couple of years. A supercomputer is typically used for scientific and engineering applications that must do a great amount of computation

15 Copyright 2007, University of Alberta What is a Supercomputer?? the most significant difference between a supercomputer and a mainframe: a supercomputer channels all its power into executing a few programs as fast as possible (if the system crashes, restart the job(s) – no great harm done), while a mainframe uses its power to execute many programs simultaneously (e.g. a banking system must run reliably for extended periods)

16 Copyright 2007, University of Alberta What is a Supercomputer?? to see the world's “fastest” computers look at http://www.top500.org/ which measures performance with the Linpack benchmark (http://www.top500.org/lists/linpack.php): solve a dense system of linear equations. The performance numbers give a good indication of peak performance

17 Copyright 2007, University of Alberta What is a Supercomputer?? count the number of “floating point operations” (+, -, ×, /) required to solve the problem. Results of the benchmark are reported as so many Floating point Operations Per Second (FLOPS); a supercomputer is a machine that can provide a very large number of FLOPS

18 Copyright 2007, University of Alberta Floating Point Operations multiply 2 1000x1000 matrices: each resulting array element needs 1000 multiplies and 999 adds; do this 1,000,000 times, so ~2 × 10^9 operations are needed. Increasing the array size has the number of operations increasing as O(N^3)
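A minimal sketch of the triple-loop multiply this count describes (the matrix size N matches the slide; the explicit flop counter is added only for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000   /* matrix dimension from the slide */

    int main(void)
    {
        double *a = malloc(sizeof(double) * N * N);
        double *b = malloc(sizeof(double) * N * N);
        double *c = malloc(sizeof(double) * N * N);
        long long flops = 0;

        for (long i = 0; i < (long)N * N; i++) { a[i] = 1.0; b[i] = 2.0; }

        /* each element of c needs N multiplies and N-1 adds */
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                double sum = a[i * N] * b[j];             /* first product: 1 multiply */
                flops += 1;
                for (int k = 1; k < N; k++) {
                    sum += a[i * N + k] * b[k * N + j];   /* 1 multiply + 1 add */
                    flops += 2;
                }
                c[i * N + j] = sum;
            }
        }

        /* prints 1999000000, i.e. roughly 2e9 operations */
        printf("floating point operations: %lld\n", flops);
        free(a); free(b); free(c);
        return 0;
    }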

19 Copyright 2007, University of Alberta Agenda What is High Performance Computing? What is a “supercomputer”? is it a mainframe? Supercomputer architectures Who has the fastest computers? Speedup Programming for parallel computing The GRID??

20 Copyright 2007, University of Alberta High Performance Computing supercomputers use many CPUs to do the work. Note that all supercomputing architectures have processors, some combination of cache, and some form of memory and I/O, and the processors are separated from the other processors by some distance. There are major differences in the way that the parts are connected, and some problems fit into different architectures better than others

21 Copyright 2007, University of Alberta High Performance Computing increasing the computing power available to researchers allows: increasing problem dimensions, adding more particles to a system, increasing the accuracy of the result, and improving experiment turnaround time

22 Copyright 2007, University of Alberta Flynn’s Taxonomy Michael J. Flynn (1972) classified computer architectures based on the number of concurrent instructions and data streams available: single instruction, single data (SISD) – basic old PC; multiple instruction, single data (MISD) – redundant systems; single instruction, multiple data (SIMD) – vector (or array) processor; multiple instruction, multiple data (MIMD) – shared or distributed memory systems: symmetric multiprocessors and clusters. A common extension: single program (or process), multiple data (SPMD)

23 Copyright 2007, University of Alberta Architectures we can also classify supercomputers according to how the processors and memory are connected: couple processors to a single large memory address space, or couple computers, each with its own memory address space

24 Copyright 2007, University of Alberta Architectures Symmetric Multiprocessing (SMP), Uniform Memory Access (UMA): multiple CPUs, residing in one cabinet, share the same memory. Processors and memory are tightly coupled; the processors share memory and the I/O bus or data path

25 Copyright 2007, University of Alberta Architectures SMP a single copy of the operating system is in charge of all the processors. SMP systems range from two to as many as 32 or more processors

26 Copyright 2007, University of Alberta Architectures SMP "capability computing": one CPU can use all the memory, all the CPUs can work on a little memory, or whatever mix you need

27 Copyright 2007, University of Alberta Architectures UMA-SMP negatives: as the number of CPUs gets large the buses become saturated, and long wires cause latency problems

28 Copyright 2007, University of Alberta Architectures Non-Uniform Memory Access (NUMA) NUMA is similar to SMP: multiple CPUs share a single memory space, with hardware support for shared memory, but memory is separated into close and distant banks; it is basically a cluster of SMPs. Memory on the same processor board as the CPU (local memory) is accessed faster than memory on other processor boards (shared memory), hence "non-uniform". NUMA architecture scales much better to higher numbers of CPUs than SMP

29 Copyright 2007, University of Alberta Architectures

30 Copyright 2007, University of Alberta Architectures University of Alberta SGI Origin; SGI NUMA cables

31 Copyright 2007, University of Alberta Architectures Cache Coherent NUMA (ccNUMA): each CPU has an associated cache, and ccNUMA machines use special-purpose hardware to maintain cache coherence. This is typically done by using inter-processor communication between cache controllers to keep a consistent memory image when the same memory location is stored in more than one cache. ccNUMA performs poorly when multiple processors attempt to access the same memory area in rapid succession

32 Copyright 2007, University of Alberta Architectures Distributed Memory Multiprocessor (DMMP): each computer has its own memory address space. It looks like NUMA but there is no hardware support for remote memory access; the special-purpose switched network is replaced by a general-purpose network such as Ethernet or more specialized interconnects: Infiniband, Myrinet. Lattice: Calgary’s HP ES40 and ES45 cluster – each node has 4 processors

33 Copyright 2007, University of Alberta Architectures Massively Parallel Processing (MPP): a cluster of commodity PCs; processors and memory are loosely coupled; "capacity computing". Each CPU contains its own memory and copy of the operating system and application, and each subsystem communicates with the others via a high-speed interconnect. In order to use MPP effectively, a problem must be breakable into pieces that can all be solved simultaneously

34 Copyright 2007, University of Alberta Architectures

35 Copyright 2007, University of Alberta Architectures lots of “how to build a cluster” tutorials on the web – just Google: http://www.beowulf.org/ http://www.cacr.caltech.edu/beowulf/tutorial/building.html

36 Copyright 2007, University of Alberta Architectures Vector Processor or Array Processor: a CPU design that is able to run mathematical operations on multiple data elements simultaneously (a scalar processor operates on data elements one at a time). Vector processors formed the basis of most supercomputers through the 1980s and into the 1990s; they “pipeline” the data

37 Copyright 2007, University of Alberta Architectures Vector Processor or Array Processor operate on many pieces of data simultaneously. Consider the following add instruction: C = A + B. On both scalar and vector machines this means “add the contents of A to the contents of B and put the sum in C”. On a scalar machine the operands are numbers; on a vector machine the operands are vectors and the instruction directs the machine to compute the pair-wise sum of each pair of vector elements
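A sketch of the contrast in C terms (the array names and length are illustrative; on a real vector machine the compiler or hardware issues the vector instruction, not the programmer):

    #include <stddef.h>

    /* Scalar machine: C = A + B becomes a loop of one-at-a-time adds. */
    void scalar_add(const double *a, const double *b, double *c, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            c[i] = a[i] + b[i];    /* one scalar add per iteration */
    }

    /* Vector machine: conceptually the single instruction C = A + B operates
       on whole vector registers, producing all the pair-wise sums in a
       pipelined fashion rather than one element at a time. */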

38 Copyright 2007, University of Alberta Architectures the University of Victoria has 4 NEC SX-6/8A vector machines in the School of Earth and Ocean Sciences; each has 32 GB of RAM and 8 vector processors in the box, with a peak performance of 72 GFLOPS

39 Copyright 2007, University of Alberta Agenda What is High Performance Computing? What is a “supercomputer”? is it a mainframe? Supercomputer architectures Who has the fastest computers? Speedup Programming for parallel computing The GRID??

40 Copyright 2007, University of Alberta BlueGene/L The fastest machine on the Top 500 list of Nov. 2006 (http://www.top500.org/), installed at the Lawrence Livermore National Laboratory (LLNL) (US Department of Energy), Livermore, California

41 Copyright 2007, University of Alberta http://www.llnl.gov/asc/platforms/bluegenel/photogallery.html

42 Copyright 2007, University of Alberta BlueGene/L processors: 131,072; memory: 32 TB; 64 racks, each with 2048 processors and 512 GB of RAM (256 MB/processor); a Linpack performance of 280.6 TFlop/s. In Nov. 2005 it was the only system ever to exceed the 100 TFlop/s mark; there are now 2 machines over 100 TFlop/s

43 Copyright 2007, University of Alberta The Fastest Eight (Rmax and Rpeak in GFlop/s)

Site                                                | Computer                                                                 | Processors | Year | Rmax   | Rpeak
DOE/NNSA/LLNL, United States                        | BlueGene/L - eServer Blue Gene Solution, IBM                             | 131072     | 2005 | 280600 | 367000
NNSA/Sandia National Laboratories, United States    | Red Storm - Sandia/Cray Red Storm, Opteron 2.4 GHz dual core, Cray Inc.  | 26544      | 2006 | 101400 | 127411
IBM Thomas J. Watson Research Center, United States | BGW - eServer Blue Gene Solution, IBM                                    | 40960      | 2005 | 91290  | 114688
DOE/NNSA/LLNL, United States                        | ASC Purple - eServer pSeries p5 575 1.9 GHz, IBM                         | 12208      | 2006 | 75760  | 92781
Barcelona Supercomputing Center, Spain              | MareNostrum - BladeCenter JS21 Cluster, PPC 970, 2.3 GHz, Myrinet, IBM   | 10240      | 2006 | 62630  | 94208
NNSA/Sandia National Laboratories, United States    | Thunderbird - PowerEdge 1850, 3.6 GHz, Infiniband, Dell                  | 9024       | 2006 | 53000  | 64972.8
Commissariat a l'Energie Atomique (CEA), France     | Tera-10 - NovaScale 5160, Itanium2 1.6 GHz, Quadrics, Bull SA            | 9968       | 2006 | 52840  | 63795.2
NASA/Ames Research Center/NAS, United States        | Columbia - SGI Altix 1.5 GHz, Voltaire Infiniband, SGI                   | 10160      | 2004 | 51870  | 60960

44 Copyright 2007, University of Alberta Future BlueGene

45 Copyright 2007, University of Alberta Agenda What is High Performance Computing? What is a “supercomputer”? is it a mainframe? Supercomputer architectures Who has the fastest computers? Speedup Programming for parallel computing The GRID??

46 Copyright 2007, University of Alberta Speedup how can we measure how much faster our program runs when using more than one processor? Define the speedup S as the ratio of 2 program execution times at constant problem size, where T1 is the execution time for the problem on a single processor (use the “best” serial time) and TP is the execution time for the problem on P processors
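Written as a formula, the definition above is:

    S = \frac{T_1}{T_P}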

47 Copyright 2007, University of Alberta Speedup Linear speedup: the time to execute the problem decreases by the number of processors. If a job requires 1 week with 1 processor it will take less than 10 minutes with 1024 processors (168 hours / 1024 ≈ 9.8 minutes)

48 Copyright 2007, University of Alberta Speedup Sublinear speedup: the usual case; there are generally some limitations to the amount of speedup that you get, e.g. communication

49 Copyright 2007, University of Alberta Speedup Superlinear speedup: very rare; memory access patterns may allow this for some algorithms

50 Copyright 2007, University of Alberta Speedup why do a speedup test? It's hard to tell how a program will behave, e.g. the “Strange” curve shown on the slide is actually fairly common behaviour for untuned code; in this case there is linear speedup to ~10 CPUs, and after 24 CPUs the speedup is starting to decrease

51 Copyright 2007, University of Alberta Speedup to use more processors efficiently, change this behaviour: change the loop structure, adjust algorithms, ...? Or run jobs with 10-20 processors so the machines are used efficiently

52 Copyright 2007, University of Alberta Speedup one class of jobs that have linear speedup are called “embarrassingly parallel” (a better name might be “perfectly” parallel): it doesn't take much effort to turn the problem into a bunch of parts that can be run in parallel, e.g. parameter searches, rendering the frames in a computer animation, brute force searches in cryptography

53 Copyright 2007, University of Alberta Amdahl’s Law Gene Amdahl, 1967: parallelize some of the program – some must remain serial. f is the fraction of the calculation that is serial and 1-f is the fraction that is parallel (the bar diagram on the slide splits the run time into these serial and parallel parts); the maximum speedup that can be obtained by using P processors is S(P) = 1 / (f + (1-f)/P)

54 Copyright 2007, University of Alberta Amdahl’s Law if 25% of the calculation must remain serial, the best speedup you can obtain is 4; you need to parallelize as much of the program as possible to get the best advantage from multiple processors
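The factor of 4 is the limit of Amdahl's law as the processor count grows, with f = 0.25:

    \lim_{P \to \infty} S(P) = \lim_{P \to \infty} \frac{1}{f + \frac{1-f}{P}} = \frac{1}{f} = \frac{1}{0.25} = 4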

55 Copyright 2007, University of Alberta Agenda What is High Performance Computing? What is a “supercomputer”? is it a mainframe? Supercomputer architectures Who has the fastest computers? Speedup Programming for parallel computing The GRID??

56 Copyright 2007, University of Alberta Parallel Programming you need to do something to your program to use multiple processors: incorporate commands into your program which allow multiple threads to run, one thread per processor, where each thread gets a piece of the work. There are several ways (APIs) to do this …

57 Copyright 2007, University of Alberta Parallel Programming OpenMP: introduce statements into your code (in C: #pragma; in FORTRAN: C$OMP or !$OMP); you can compile serial and parallel executables from the same source code. Restricted to shared memory machines, not clusters! www.openmp.org

58 Copyright 2007, University of Alberta Parallel Programming OpenMP demo: MatCrunch performs mathematical operations on the elements of an array. Introduce 2 OMP directives before a loop: # pragma omp parallel // define a parallel section, and # pragma omp for // loop is to be parallel. Serial section: 4.03 sec; parallel section – 1 cpu: 40.27 secs; parallel section – 2 cpu: 20.25 secs; speedup = 1.99 // not bad for adding 2 lines
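The MatCrunch source isn't reproduced in the slides; the following is a minimal sketch of the pattern those two directives create (the array size and the arithmetic inside the loop are invented for illustration). Built with OpenMP support (e.g. gcc -fopenmp) the loop iterations are shared among threads; built without it, the pragmas are ignored and a serial executable results, which is the “same source code” point made above.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 10000000L   /* illustrative array size, not the real MatCrunch value */

    int main(void)
    {
        double *a = malloc(sizeof(double) * N);
        for (long i = 0; i < N; i++)
            a[i] = (double)i;

        /* the two added directives: open a parallel section, then split the loop */
        #pragma omp parallel        /* define a parallel section */
        {
            #pragma omp for         /* iterations are divided among the threads */
            for (long i = 0; i < N; i++)
                a[i] = sin(a[i]) * cos(a[i]);   /* stand-in for the real math */
        }

        printf("a[42] = %f\n", a[42]);
        free(a);
        return 0;
    }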

59 Copyright 2007, University of Alberta Parallel Programming for a larger number of processors the speedup for MatCrunch is not linear; you need to do the speedup test to see how your program will behave

60 Copyright 2007, University of Alberta Parallel Programming MPI (Message Passing Interface): a standard set of communication subroutine libraries that works for SMPs and clusters; programs written with MPI are highly portable. Information and downloads: http://www.mpi-forum.org/ MPICH: http://www-unix.mcs.anl.gov/mpi/mpich/ LAM/MPI: http://www.lam-mpi.org/ Open MPI: http://www.open-mpi.org/

61 Copyright 2007, University of Alberta Parallel Programming MPI (Message Passing Interface) supports the SPMD, single program multiple data, model: all processors use the same program, but each processor has its own data. Think of a cluster – each node is getting a copy of the program but running a specific portion of it with its own data
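A minimal sketch of the SPMD pattern in C with MPI; every process runs this same program, and the way the work is split by rank is invented for illustration. It would typically be built with mpicc and launched with mpirun -np <number of processes>.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* every process runs this same program */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* ...but learns its own identity... */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* ...and how many processes there are */

        /* each rank works on its own slice of the data */
        long n = 1000000;                        /* illustrative problem size */
        long chunk = n / size;
        long start = rank * chunk;
        long end   = (rank == size - 1) ? n : start + chunk;

        double local = 0.0;
        for (long i = start; i < end; i++)
            local += (double)i;                  /* stand-in for real work */

        /* combine the per-process results on rank 0 */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %.0f (computed by %d processes)\n", total, size);

        MPI_Finalize();
        return 0;
    }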

62 Copyright 2007, University of Alberta Parallel Programming it's possible to combine OpenMP and MPI for running on clusters of SMP machines. The trick in parallel programming is to keep all the processors working (“load balancing”), each working on data that no other processor needs to touch (so there aren't any cache conflicts)

63 Copyright 2007, University of Alberta Agenda What is High Performance Computing? What is a “supercomputer”? is it a mainframe? Supercomputer architectures Who has the fastest computers? Speedup Programming for parallel computing The GRID??

64 Copyright 2007, University of Alberta Grid Computing A computational grid: is a large-scale distributed computing infrastructure composed of geographically distributed, autonomous resource providers (lots of computers joined together); requires excellent networking that supports resource sharing and distribution; offers access to all the resources that are part of the grid (compute cycles, storage capacity, visualization/collaboration); and is intended for integrated and collaborative use by multiple organizations

65 Copyright 2007, University of Alberta Grids Ian Foster (the “Father of the Grid”) says that to be a Grid three points must be met: computing resources are not administered centrally (many sites connected); open standards are used (not a proprietary system); and non-trivial quality of service is achieved (it is available most of the time). CERN says a Grid is “a service for sharing computer power and data storage capacity over the Internet”

66 Copyright 2007, University of Alberta Canadian Academic Computing Sites in 2000

67 Copyright 2007, University of Alberta Canadian Grids Some sites in Canada have tied their resources together to form 7 Canadian Grid Consortia: ACENET (Atlantic Computational Excellence Network), CLUMEQ (Consortium Laval UQAM McGill and Eastern Quebec for High Performance Computing), SCINET (University of Toronto), HPCVL (High Performance Computing Virtual Laboratory), RQCHP (Reseau Quebecois de calcul de haute performance), SHARCNET (Shared Hierarchical Academic Research Computing Network), and WESTGRID (Alberta, British Columbia)

68 Copyright 2007, University of Alberta WestGrid Edmonton Calgary UBC Campus SFU Campus

69 Copyright 2007, University of Alberta Grids the ultimate goal of the Grid idea is to have a system that you can submit a job to, so that: your job uses resources that fit requirements that you specify (e.g. 128 nodes on an SMP with 200 GB of RAM, or 256 nodes on a PC cluster with 1 GB/processor); when done, the results come back to you; and you don't care where the job runs (Vancouver or St. John's or in between)

70 Copyright 2007, University of Alberta Sharing Resources HPC resources are not available quite as readily as your desktop computer; the resources must be shared fairly. The idea is that each person gets as much of the resource as necessary to run their job for a “reasonable” time. If the job can't finish in the allotted time the job needs to “checkpoint”: save enough information to begin running again from where it left off

71 Copyright 2007, University of Alberta Sharing Resources Portable Batch System (Torque): submit a job to PBS; the job is placed in a queue with other users' jobs; jobs in the queue are prioritized by a scheduler; your job executes at some time in the future. (Diagram: an HPC site)

72 Copyright 2007, University of Alberta Sharing Resources When connecting to a Grid we need a layer of “middleware” tools to securely access the resources; Globus is one example: http://www.globus.org/ (Diagram: a Grid of HPC sites)

73 Copyright 2007, University of Alberta Questions? Many details in other sessions of this workshop!

