1
01/28/2009 CS267 Lecture 3. CS 267: Introduction to Parallel Machines and Programming Models. James Demmel, www.cs.berkeley.edu/~demmel/cs267_Spr09
2
01/28/2009 CS267 Lecture 3 Outline
- Overview of parallel machines (~hardware) and programming models (~software): shared memory, shared address space, message passing, data parallel, clusters of SMPs, grid
- A parallel machine may or may not be tightly coupled to a programming model: historically there was tight coupling; today, portability is important
- Trends in real machines
3
01/28/2009 CS267 Lecture 3 A generic parallel architecture (diagram: processors and memories connected by an interconnection network). Questions: Where is the memory physically located? Is it connected directly to processors? What is the connectivity of the network?
4
01/28/2009 CS267 Lecture 3 Parallel Programming Models
A programming model is made up of the languages and libraries that create an abstract view of the machine.
- Control: How is parallelism created? What orderings exist between operations?
- Data: What data is private vs. shared? How is logically shared data accessed or communicated?
- Synchronization: What operations can be used to coordinate parallelism? What are the atomic (indivisible) operations?
- Cost: How do we account for the cost of each of the above?
5
01/28/2009 CS267 Lecture 3 Simple Example. Consider applying a function f to the elements of an array A and then computing its sum: A = array of all data; fA = f(A); s = sum(fA). Questions: Where does A live? All in a single memory? Partitioned? What work will be done by each processor? The processors need to coordinate to get a single result; how?
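For reference, a minimal serial C sketch of this computation, which the shared-memory and message-passing versions on the following slides restructure. The array contents, N, and f(x) = x*x are placeholders, not part of the lecture.

```c
#include <stdio.h>

#define N 8

/* Placeholder f(x) = x^2, matching the example used on later slides. */
static double f(double x) { return x * x; }

int main(void) {
    double A[N] = {3, 5, 1, 4, 2, 7, 6, 8};   /* hypothetical data */
    double s = 0.0;
    for (int i = 0; i < N; i++)                /* fA = f(A); s = sum(fA), fused into one loop */
        s += f(A[i]);
    printf("s = %g\n", s);
    return 0;
}
```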
6
01/28/2009 CS267 Lecture 3 Programming Model 1: Shared Memory. Program is a collection of threads of control, which can be created dynamically, mid-execution, in some languages. Each thread has a set of private variables, e.g., local stack variables, and also a set of shared variables, e.g., static variables, shared common blocks, or the global heap. Threads communicate implicitly by writing and reading shared variables, and coordinate by synchronizing on shared variables. (Diagram: threads P0..Pn, each with a private memory holding its own i, all attached to a shared memory holding s.)
7
01/28/2009 CS267 Lecture 37 Simple Example Shared memory strategy: small number p << n=size(A) processors attached to single memory Parallel Decomposition: Each evaluation and each partial sum is a task. Assign n/p numbers to each of p procs Each computes independent “private” results and partial sum. Collect the p partial sums and compute a global sum. Two Classes of Data: Logically Shared The original n numbers, the global sum. Logically Private The individual function evaluations. What about the individual partial sums?
8
01/28/2009 CS267 Lecture 3 Shared Memory "Code" for Computing a Sum
static int s = 0;
Thread 1: for i = 0, n/2-1: s = s + f(A[i])
Thread 2: for i = n/2, n-1: s = s + f(A[i])
The problem is a race condition on the variable s in the program. A race condition or data race occurs when two processors (or two threads) access the same variable, at least one does a write, and the accesses are concurrent (not synchronized), so they could happen simultaneously.
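The following is a minimal pthreads sketch of the racy version above; the array contents, f(x) = x*x, and the two-thread split are placeholders. Because both threads update s without synchronization, repeated runs can print different (wrong) sums.

```c
#include <pthread.h>
#include <stdio.h>

#define N 8
static double A[N] = {3, 5, 1, 4, 2, 7, 6, 8};  /* hypothetical data */
static double s = 0.0;                          /* shared, updated without synchronization */

static double f(double x) { return x * x; }

static void *worker(void *arg) {
    long t = (long)arg;                         /* thread id: 0 or 1 */
    for (long i = t * N / 2; i < (t + 1) * N / 2; i++)
        s = s + f(A[i]);                        /* unsynchronized read-modify-write: the data race */
    return NULL;
}

int main(void) {
    pthread_t th[2];
    for (long t = 0; t < 2; t++) pthread_create(&th[t], NULL, worker, (void *)t);
    for (int t = 0; t < 2; t++) pthread_join(th[t], NULL);
    printf("s = %g (may be wrong because of the race)\n", s);
    return 0;
}
```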
9
01/28/2009 CS267 Lecture 3 Shared Memory "Code" for Computing a Sum
static int s = 0;
Thread 1: compute f(A[i]) and put it in reg0; reg1 = s; reg1 = reg1 + reg0; s = reg1
Thread 2: compute f(A[i]) and put it in reg0; reg1 = s; reg1 = reg1 + reg0; s = reg1
Assume A = [3,5], f(x) = x^2, and s = 0 initially. For this program to work, s should be 3^2 + 5^2 = 34 at the end, but it may be 34, 9, or 25 depending on how the two threads interleave. The atomic operations are reads and writes: you never see half of one number, but note that += is not atomic. All computations happen in (private) registers.
10
01/28/2009 CS267 Lecture 3 Improved Code for Computing a Sum
static int s = 0;
Thread 1: local_s1 = 0; for i = 0, n/2-1: local_s1 = local_s1 + f(A[i]); s = s + local_s1
Thread 2: local_s2 = 0; for i = n/2, n-1: local_s2 = local_s2 + f(A[i]); s = s + local_s2
Since addition is associative, it's OK to rearrange the order. Most computation is on private variables, and the sharing frequency is reduced, which might improve speed. But there is still a race condition on the update of the shared s. The race condition can be fixed by adding locks (only one thread can hold a lock at a time; others wait for it): declare static lock lk; and surround each update of s with lock(lk); ... unlock(lk);
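A corresponding pthreads sketch of the improved version, again with placeholder data and f: each thread accumulates a private partial sum, and a mutex plays the role of the lock lk around the single shared update.

```c
#include <pthread.h>
#include <stdio.h>

#define N 8
static double A[N] = {3, 5, 1, 4, 2, 7, 6, 8};  /* hypothetical data */
static double s = 0.0;
static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;  /* the "lock lk" from the slide */

static double f(double x) { return x * x; }

static void *worker(void *arg) {
    long t = (long)arg;
    double local_s = 0.0;                       /* private partial sum: no sharing in the loop */
    for (long i = t * N / 2; i < (t + 1) * N / 2; i++)
        local_s += f(A[i]);
    pthread_mutex_lock(&lk);                    /* only the final update is serialized */
    s += local_s;
    pthread_mutex_unlock(&lk);
    return NULL;
}

int main(void) {
    pthread_t th[2];
    for (long t = 0; t < 2; t++) pthread_create(&th[t], NULL, worker, (void *)t);
    for (int t = 0; t < 2; t++) pthread_join(th[t], NULL);
    printf("s = %g\n", s);
    return 0;
}
```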
11
01/28/2009 CS267 Lecture 3 Machine Model 1a: Shared Memory. Processors P1..Pn, each with its own cache ($), all connected by a bus to a large shared memory. Typically called Symmetric Multiprocessors (SMPs): SGI, Sun, HP, Intel, IBM SMPs (nodes of Millennium, SP). Multicore chips are similar, except that all caches are shared. Difficulty scaling to large numbers of processors; <= 32 processors is typical. Advantage: uniform memory access (UMA). Cost: much cheaper to access data in cache than in main memory.
12
01/28/2009 CS267 Lecture 3 Problems Scaling Shared Memory Hardware. Why not put more processors on (with larger memory)? The memory bus becomes a bottleneck, and caches need to be kept coherent. An example from a Parallel Spectral Transform Shallow Water Model (PSTSWM) demonstrates the problem (experimental results and slide from Pat Worley at ORNL). This is an important kernel in atmospheric models: 99% of the floating point operations are multiplies or adds, which generally run well on all processors, but it sweeps through memory with little reuse of operands, so it uses the bus and shared memory frequently. These experiments show serial performance, with one "copy" of the code running independently on varying numbers of procs - the best case for shared memory, since there is no sharing. But the data doesn't all fit in the registers/cache.
13
01/28/2009 CS267 Lecture 3 Example: Problem in Scaling Shared Memory (from Pat Worley, ORNL). Performance degradation is a "smooth" function of the number of processes. There is no shared data between them, so there should be perfect parallelism. (Code was run for 18 vertical levels with a range of horizontal sizes.)
14
01/28/2009 CS267 Lecture 3 Machine Model 1b: Multithreaded Processor. Multiple thread "contexts" without full processors; memory and some other state is shared. Sun Niagara processor (for servers): up to 64 threads all running simultaneously (8 threads x 8 cores); in addition to sharing memory, they share floating point units. Why? To switch between threads during long-latency memory operations. Cray MTA and Eldorado processors (for HPC): shared memory, shared $, shared floating point units, etc.
15
01/28/2009 CS267 Lecture 315 Eldorado Processor (logical view) Source: John Feo, Cray
16
01/28/2009 CS267 Lecture 3 Machine Model 1c: Distributed Shared Memory. Memory is logically shared, but physically distributed: any processor can access any address in memory, and cache lines (or pages) are passed around the machine. SGI Origin is the canonical example (+ research machines). Scales to 512 processors (SGI Altix (Columbia) at NASA/Ames). The limitation is the cache coherency protocol - how to keep cached copies of the same address consistent. Cache lines (pages) must be large to amortize overhead, so locality is still critical to performance. (Diagram: processors with caches and memories connected by a network.)
17
01/28/2009 CS267 Lecture 3 Programming Model 2: Message Passing. Program consists of a collection of named processes, usually fixed at program startup time. Each is a thread of control plus a local address space - NO shared data. Logically shared data is partitioned over the local processes. Processes communicate by explicit send/receive pairs, and coordination is implicit in every communication event. MPI (Message Passing Interface) is the most commonly used SW. (Diagram: processes P0..Pn, each with private memory holding its own s and i, communicating over a network, e.g., send P1,s and receive Pn,s.)
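As an illustration of this model, a hedged MPI sketch of the running sum example (the local array contents and f are placeholders): each process owns only its own slice of A, and the partial sums are combined with MPI_Reduce rather than hand-written send/receive pairs.

```c
#include <mpi.h>
#include <stdio.h>

#define N_LOCAL 4   /* each process owns its own slice of A; there is no shared data */

static double f(double x) { return x * x; }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double A_local[N_LOCAL];
    for (int i = 0; i < N_LOCAL; i++)          /* stand-in for reading this process's data */
        A_local[i] = rank * N_LOCAL + i;

    double local_s = 0.0, s = 0.0;
    for (int i = 0; i < N_LOCAL; i++)
        local_s += f(A_local[i]);

    /* Combine the partial sums; MPI_Reduce hides the explicit send/receive pairs. */
    MPI_Reduce(&local_s, &s, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("s = %g (over %d processes)\n", s, size);

    MPI_Finalize();
    return 0;
}
```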
18
01/28/2009 CS267 Lecture 318 Computing s = A[1]+A[2] on each processor ° First possible solution – what could go wrong? Processor 1 xlocal = A[1] send xlocal, proc2 receive xremote, proc2 s = xlocal + xremote Processor 2 xlocal = A[2] receive xremote, proc1 send xlocal, proc1 s = xlocal + xremote ° Second possible solution Processor 1 xlocal = A[1] send xlocal, proc2 receive xremote, proc2 s = xlocal + xremote Processor 2 xlocal = A[2] send xlocal, proc1 receive xremote, proc1 s = xlocal + xremote ° If send/receive acts like the telephone system? The post office? ° What if there are more than 2 processors?
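One way to sidestep these pitfalls, sketched below under the assumption of exactly two MPI processes and placeholder data, is MPI_Sendrecv: it posts the send and the receive together, so the exchange completes whether the underlying sends behave like the telephone (synchronous) or the post office (buffered), and the same call generalizes to shift patterns among more than two processes.

```c
#include <mpi.h>
#include <stdio.h>

/* Two-process exchange: each rank sends its local value and receives the other's,
 * then both form s = xlocal + xremote. Run with exactly 2 processes. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double A[2] = {3.0, 5.0};          /* stand-ins for A[1] and A[2] */
    double xlocal = A[rank], xremote;
    int other = 1 - rank;              /* assumes exactly 2 processes */

    MPI_Sendrecv(&xlocal, 1, MPI_DOUBLE, other, 0,
                 &xremote, 1, MPI_DOUBLE, other, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    double s = xlocal + xremote;
    printf("rank %d: s = %g\n", rank, s);
    MPI_Finalize();
    return 0;
}
```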
19
01/28/2009 CS267 Lecture 3 MPI - the de facto standard. MPI has become the de facto standard for parallel computing using message passing. Pros and cons of standards: MPI finally created a standard for application development in the HPC community, giving portability; but the MPI standard is a least common denominator building on mid-80s technology, so it may discourage innovation. The programming model reflects the hardware! "I am not sure how I will program a Petaflops computer, but I am sure that I will need MPI somewhere" - HDS 2001.
20
01/28/2009 CS267 Lecture 3 Machine Model 2a: Distributed Memory. Cray T3E, IBM SP2, PC clusters (Berkeley NOW, Beowulf). IBM SP-3, Millennium, and CITRIS are distributed memory machines, but the nodes are SMPs. Each processor has its own memory and cache but cannot directly access another processor's memory. Each "node" has a Network Interface (NI) for all communication and synchronization. (Diagram: nodes P0..Pn, each with memory and an NI, connected by an interconnect.)
21
01/28/2009 CS267 Lecture 321 PC Clusters: Contributions of Beowulf An experiment in parallel computing systems Established vision of low cost, high end computing Demonstrated effectiveness of PC clusters for some (not all) classes of applications Provided networking software Conveyed findings to broad community (great PR) Tutorials and book Design standard to rally community! Standards beget: books, trained people, software … virtuous cycle Adapted from Gordon Bell, presentation at Salishan 2000
22
01/28/2009 CS267 Lecture 3 Tflop/s and Pflop/s Clusters. The following are examples of clusters configured out of separate networks and processor components: 82% of the Top 500 (Nov 2008, up from 72% in 2005), and 3 of the top 10. The IBM Cell cluster at Los Alamos (Roadrunner) is #1: 12,960 Cell chips + 6,948 dual-core AMD Opterons (129,600 cores altogether); 1.45 PFlops peak, 1.1 PFlops Linpack, 2.5 MWatts; Infiniband connection network. In 2005 Walt Disney Feature Animation (The Hive) was #96: 1110 Intel Xeons @ 3 GHz, Gigabit Ethernet. For more details use the "database/sublist generator" at www.top500.org.
23
01/28/2009 CS267 Lecture 3 Machine Model 2b: Internet/Grid Computing. SETI@Home: running on 500,000 PCs, ~1000 CPU years per day, 485,821 CPU years so far. Sophisticated data & signal processing analysis. Distributes datasets from the Arecibo Radio Telescope. Next step: the Allen Telescope Array.
24
01/28/2009 CS267 Lecture 3 Programming Model 2a: Global Address Space. Program consists of a collection of named threads, usually fixed at program startup time. Local and shared data, as in the shared memory model, but shared data is partitioned over the local processes, and the cost model says remote data is expensive. Examples: UPC, Titanium, Co-Array Fortran. Global Address Space programming is an intermediate point between message passing and shared memory. (Diagram: threads P0..Pn with private memories, plus a partitioned shared array s[0..n] accessed as s[myThread] = ... or y = ..s[i]...)
25
01/28/2009 CS267 Lecture 3 Machine Model 2c: Global Address Space. Cray T3D, T3E, X1, and HP Alphaserver cluster; clusters built with Quadrics, Myrinet, or Infiniband. The network interface supports RDMA (Remote Direct Memory Access): the NI can directly access memory without interrupting the CPU, so one processor can read/write remote memory with one-sided operations (put/get) - not just a load/store as on a shared memory machine - and can continue computing while waiting for the memory op to finish. Remote data is typically not cached locally. Global address space may be supported in varying degrees. (Diagram: nodes P0..Pn, each with memory and an NI, connected by an interconnect.)
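A small sketch of the one-sided style, using MPI-2 RMA as a software stand-in for the hardware put/get described above (it assumes at least two processes; the values are placeholders): rank 0 reads rank 1's exposed variable without rank 1 ever posting a receive.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 10.0 * (rank + 1);   /* the datum this process exposes */
    double remote = 0.0;
    MPI_Win win;
    /* Expose one double per process in a window (the "globally addressable" data). */
    MPI_Win_create(&local, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);              /* open an access epoch */
    if (rank == 0)
        MPI_Get(&remote, 1, MPI_DOUBLE, /* target rank */ 1,
                /* displacement */ 0, 1, MPI_DOUBLE, win);  /* one-sided "get" */
    MPI_Win_fence(0, win);              /* complete the one-sided operation */

    if (rank == 0) printf("rank 0 read %g from rank 1\n", remote);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```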
26
01/28/2009 CS267 Lecture 3 Programming Model 3: Data Parallel. A single thread of control consisting of parallel operations. Parallel operations are applied to all (or a defined subset) of a data structure, usually an array. Communication is implicit in the parallel operators. Elegant and easy to understand and reason about; coordination is implicit since statements execute synchronously. Similar to the Matlab language for array operations. Drawbacks: not all problems fit this model, and it is difficult to map onto coarse-grained machines. Example: A = array of all data; fA = f(A); s = sum(fA).
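C has no true data-parallel array statements, but as a rough stand-in the fA = f(A); s = sum(fA) example can be written as a single parallel loop with a reduction, e.g. with OpenMP (data and f are placeholders); the compiler/runtime, rather than the programmer, decides how the independent iterations map onto processors.

```c
#include <stdio.h>

#define N 1000000

static double f(double x) { return x * x; }

int main(void) {
    static double A[N], fA[N];               /* static: keep the big arrays off the stack */
    for (int i = 0; i < N; i++) A[i] = i % 10;   /* hypothetical data */

    double s = 0.0;
    /* "fA = f(A); s = sum(fA)" as one loop: every iteration is independent,
     * and the reduction clause combines the per-thread partial sums. */
    #pragma omp parallel for reduction(+:s)
    for (int i = 0; i < N; i++) {
        fA[i] = f(A[i]);
        s += fA[i];
    }
    printf("s = %g\n", s);
    return 0;
}
```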
27
01/28/2009 CS267 Lecture 3 Machine Model 3a: SIMD System. A large number of (usually) small processors. A single "control processor" issues each instruction; each processor executes the same instruction, and some processors may be turned off on some instructions. Originally these machines were specialized for scientific computing, and few were made (CM2, Maspar). The programming model can be implemented in the compiler by mapping n-fold parallelism to p processors, n >> p, but it's hard (e.g., HPF). (Diagram: a control processor driving many processor/memory/NI nodes over an interconnect.)
28
01/28/2009 CS267 Lecture 3 Machine Model 3b: Vector Machines. Vector architectures are based on a single processor with multiple functional units, all performing the same operation. Instructions may specify large amounts of parallelism (e.g., 64-way), but the hardware executes only a subset in parallel. Historically important, overtaken by MPPs in the 90s, but re-emerging in recent years: at a large scale in the Earth Simulator (NEC SX6) and Cray X1; at a small scale in SIMD media extensions to microprocessors - SSE, SSE2 (Intel: Pentium/IA64), Altivec (IBM/Motorola/Apple: PowerPC), VIS (Sun: Sparc); and at a larger scale in GPUs. Key idea: the compiler does some of the difficult work of finding parallelism, so the hardware doesn't have to.
29
01/28/2009 CS267 Lecture 3 Vector Processors. Vector instructions operate on a vector of elements, specified as operations on vector registers; a supercomputer vector register holds ~32-64 elements. The number of elements is larger than the amount of parallel hardware, called vector pipes or lanes, say 2-4, so the hardware performs a full vector operation in (#elements per vector register) / (#pipes) steps. (Diagram: logically, vr3 = vr1 + vr2 performs #elements adds in parallel; in reality the hardware performs #pipes adds in parallel.)
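At the small scale mentioned on the previous slide (SIMD media extensions), the same idea shows up in SSE intrinsics. A minimal sketch, assuming an x86 compiler with SSE support: each intrinsic adds 4 floats at once (the "lanes"), and the surrounding loop supplies the rest of the vector length.

```c
#include <xmmintrin.h>   /* SSE: 128-bit registers, 4 floats per "vector register" */
#include <stdio.h>

#define N 8

int main(void) {
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[N];

    /* Each _mm_add_ps performs 4 adds in parallel; the loop covers the full length. */
    for (int i = 0; i < N; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));
    }
    for (int i = 0; i < N; i++) printf("%g ", c[i]);
    printf("\n");
    return 0;
}
```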
30
01/28/2009 CS267 Lecture 3 Cray X1 Node. (Figure, source J. Levesque, Cray: a node built from custom blocks; labels include 12.8 Gflops (64 bit), 25.6 Gflops (32 bit), four S VV units each with a 0.5 MB $, a 2 MB Ecache, a frequency of 400/800 MHz, and bandwidths of 51 GB/s, 25-41 GB/s, 25.6 GB/s, and 12.8-20.5 GB/s to local memory and network.) Cray X1 builds a larger "virtual vector", called an MSP: 4 SSPs (each a 2-pipe vector processor) make up an MSP, and the compiler will (try to) vectorize/parallelize across the MSP.
31
01/28/2009 CS267 Lecture 331 Cray X1: Parallel Vector Architecture Cray combines several technologies in the X1 12.8 Gflop/s Vector processors (MSP) Shared caches (unusual on earlier vector machines) 4 processor nodes sharing up to 64 GB of memory Single System Image to 4096 Processors Remote put/get between nodes (faster than MPI)
32
01/28/2009 CS267 Lecture 332 Earth Simulator Architecture Parallel Vector Architecture High speed (vector) processors High memory bandwidth (vector architecture) Fast network (new crossbar switch) Rearranging commodity parts can’t match this performance
33
01/28/2009 CS267 Lecture 3 Programming Model 4: Hybrids. These programming models can be mixed. Message passing (MPI) at the top level with shared memory within a node is common. New DARPA HPCS languages mix data parallel and threads in a global address space. Global address space models can (often) call message passing libraries or vice versa. Global address space models can be used in a hybrid mode: shared memory when it exists in hardware, communication (done by the runtime system) otherwise. For better or worse....
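A hedged sketch of the common MPI-plus-OpenMP hybrid applied to the running sum (placeholder data and f; compile with an MPI compiler and OpenMP enabled): threads reduce within a node's shared memory, and MPI combines the per-process results across nodes.

```c
#include <mpi.h>
#include <stdio.h>

#define N_LOCAL 1000000

static double f(double x) { return x * x; }

int main(int argc, char **argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    static double A[N_LOCAL];                       /* this rank's slice; stand-in data */
    for (int i = 0; i < N_LOCAL; i++) A[i] = (rank + i) % 10;

    /* Shared memory inside the node: OpenMP threads reduce over the local slice. */
    double local_s = 0.0;
    #pragma omp parallel for reduction(+:local_s)
    for (int i = 0; i < N_LOCAL; i++)
        local_s += f(A[i]);

    /* Message passing between nodes: combine the per-rank partial sums. */
    double s = 0.0;
    MPI_Reduce(&local_s, &s, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("s = %g\n", s);

    MPI_Finalize();
    return 0;
}
```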
34
01/28/2009 CS267 Lecture 334 Machine Model 4: Clusters of SMPs SMPs are the fastest commodity machine, so use them as a building block for a larger machine with a network Common names: CLUMP = Cluster of SMPs Hierarchical machines, constellations Many modern machines look like this: Millennium, IBM SPs, ASCI machines What is an appropriate programming model #4 ??? Treat machine as “flat”, always use message passing, even within SMP (simple, but ignores an important part of memory hierarchy). Shared memory within one SMP, but message passing outside of an SMP.
35
01/28/2009 CS267 Lecture 335 Outline Overview of parallel machines and programming models Shared memory Shared address space Message passing Data parallel Clusters of SMPs Trends in real machines (www.top500.org)
36
01/28/2009 CS267 Lecture 3 TOP500. Listing of the 500 most powerful computers in the world. Yardstick: Rmax from Linpack (Ax=b, dense problem), i.e., TPP performance measured as rate versus problem size. Updated twice a year: ISC'xy in Germany in June, SC'xy in the USA in November. All data available from www.top500.org.
37
01/28/2009 EXTRA SLIDES (TOP 500 FROM NOV 2007) CS267 Lecture 337
38
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 38
30th List: The TOP10 (Rank. Manufacturer, Computer - Rmax [TF/s] - Installation Site, Country, Year, #Cores)
1. IBM, BlueGene/L eServer Blue Gene - 478.2 - DOE/NNSA/LLNL, USA, 2007, 212,992
2. IBM, JUGENE BlueGene/P Solution - 167.3 - Forschungszentrum Juelich, Germany, 2007, 65,536
3. SGI, SGI Altix ICE 8200 - 126.9 - New Mexico Computing Applications Center, USA, 2007, 14,336
4. HP, Cluster Platform 3000 BL460c - 117.9 - Computational Research Laboratories, TATA SONS, India, 2007, 14,240
5. HP, Cluster Platform 3000 BL460c - 102.8 - Swedish Government Agency, Sweden, 2007, 13,728
6. Sandia/Cray, Red Storm Cray XT3 - 102.2 - DOE/NNSA/Sandia, USA, 2006, 26,569
7. Cray, Jaguar Cray XT3/XT4 - 101.7 - DOE/ORNL, USA, 2007, 23,016
8. IBM, BGW eServer Blue Gene - 91.29 - IBM Thomas Watson, USA, 2005, 40,960
9. Cray, Franklin Cray XT4 - 85.37 - NERSC/LBNL, USA, 2007, 19,320
10. IBM, New York Blue eServer Blue Gene - 82.16 - Stony Brook/BNL, USA, 2007, 36,864
39
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 39. Top500 Status: 1st Top500 list in June 1993 at ISC'93 in Mannheim; 29th list on June 27, 2007 at ISC'07 in Dresden; 30th list on November 13, 2007 at SC07 in Reno; 31st list on June 18, 2008 at ISC'08 in Dresden; 32nd list on November 18, 2008 at SC08 in Austin; 33rd list on June 24, 2009 at ISC'09 in Hamburg. Acknowledged by HPC users, manufacturers and media. Competition between manufacturers, countries and sites.
40
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 40
Competition between Manufacturers, Countries and Sites - Countries (Count, Share %)
1st list as of 06/1993: USA 225, 45.00% | Japan 111, 22.20% | Germany 59, 11.80% | France 26, 5.20% | United Kingdom 25, 5.00% | Australia 9, 1.80% | Italy 6, 1.20% | Netherlands 6, 1.20% | Switzerland 4, 0.80% | Canada 3, 0.60% | Denmark 3, 0.60% | Korea 3, 0.60% | Others 20, 4.00% | Totals 500, 100%
30th list as of 11/2007: USA 283, 56.60% | Japan 20, 4.00% | Germany 31, 6.20% | France 17, 3.40% | United Kingdom 48, 9.60% | Australia 1, 0.20% | Italy 6, 1.20% | Netherlands 6, 1.20% | Switzerland 7, 1.40% | Canada 5, 1.00% | Denmark 1, 0.20% | Korea 1, 0.20% | China 10, 2.00% | India 9, 1.80% | Others 55, 11.00% | Totals 500, 100%
41
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 41 Competition between Manufacturers, Countries and Sites Countries/Systems
42
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 42
Competition between Manufacturers, Countries and Sites - Manufacturers (Count, Share %)
1st list as of 06/1993: Cray Research 205, 41.00% | Fujitsu 69, 13.80% | Thinking Machines 54, 10.80% | Intel 44, 8.80% | Convex 36, 7.20% | NEC 32, 6.40% | Kendall Square Research 21, 4.20% | MasPar 18, 3.60% | Meiko 9, 1.80% | Hitachi 6, 1.20% | Parsytec 3, 0.60% | nCube 3, 0.60% | Totals 500, 100%
30th list as of 11/2007: Cray Inc. 14, 2.80% | Fujitsu 3, 0.60% | Thinking Machines - | Intel 1, 0.20% | Hewlett-Packard 166, 33.20% | NEC 2, 0.40% | Kendall Square Research - | MasPar - | Meiko - | Hitachi/Fujitsu 1, 0.20% | Parsytec - | nCube - | IBM 232, 46.40% | SGI 22, 4.40% | Dell 24, 4.80% | Others 35, 7.00% | Totals 500, 100%
43
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 43 Competition between Manufacturers, Countries and Sites Manufacturer/Systems
44
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 44
Competition between Manufacturers, Countries and Sites - Top Sites through 30 Lists (Rank, Site, Country, % over time)
1. Lawrence Livermore National Laboratory, United States, 5.39
2. Sandia National Laboratories, United States, 3.70
3. Los Alamos National Laboratory, United States, 3.41
4. Government, United States, 3.34
5. The Earth Simulator Center, Japan, 1.99
6. National Aerospace Laboratory of Japan, Japan, 1.70
7. Oak Ridge National Laboratory, United States, 1.39
8. NCSA, United States, 1.31
9. NASA/Ames Research Center/NAS, United States, 1.25
10. University of Tokyo, Japan, 1.21
11. NERSC/LBNL, United States, 1.19
12. Pittsburgh Supercomputing Center, United States, 1.15
13. Semiconductor Company (C), United States, 1.11
14. Naval Oceanographic Office (NAVOCEANO), United States, 1.08
15. ECMWF, United Kingdom, 1.02
16. ERDC MSRC, United States, 0.91
17. IBM Thomas J. Watson Research Center, United States, 0.86
18. Forschungszentrum Juelich (FZJ), Germany, 0.84
19. Japan Atomic Energy Research Institute, Japan, 0.83
20. Minnesota Supercomputer Center, United States, 0.74
45
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 45. My Supercomputer Favorite in the Top500 Lists: Number of Teraflop/s Systems in the Top500 (chart, Jun-97 through Jun-05; scale 1 to 500).
46
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 46 The 30 th List as of November 2007 Processor Architecture/Systems
47
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 47 The 30 th List as of November 2007 Operating Systems/Systems
48
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 48 The 30 th List as of November 2007 Processor Generation/Systems (November 2007)
49
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 49 The 30 th List as of November 2007 Interconnect Family/Systems
50
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 50 The 30 th List as of November 2007 Architectures / Systems
51
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 51 Performance Development and Projection Performance Development Jack‘s Notebook
52
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 52. Performance Development and Projection: Performance Projection (chart; labels: 1 Pflop/s, 6-8 years, 8-10 years, Jack's Notebook).
53
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 53 Analysis of TOP500 Data Annual performance growth about a factor of 1.82 Two factors contribute almost equally to the annual total performance growth Processor number grows per year on the average by a factor of 1.30 and the Processor performance grows by 1.40 compared to 1.58 of Moore's Law Strohmaier, Dongarra, Meuer, and Simon, Parallel Computing 25, 1999, pp 1517-1544.
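As a quick check, the two per-year factors multiply to roughly the quoted total growth:

```latex
\underbrace{1.30}_{\text{processors per system}} \times \underbrace{1.40}_{\text{performance per processor}} \approx 1.82 \quad \text{(total performance growth per year)}
```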
54
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 54. Bell's Law. Bell's Law of Computer Class formation was discovered about 1972. It states that technology advances in semiconductors, storage, user interface and networking, arriving about every decade, enable a new, usually lower-priced computing platform to form. Once formed, each class is maintained as a quite independent industry structure. This explains mainframes, minicomputers, workstations and personal computers, the web, emerging web services, palm and mobile devices, and ubiquitous interconnected networks. We can expect home and body area networks to follow this path. From Gordon Bell (2007), http://research.microsoft.com/~GBell/Pubs.htm
55
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 55 Bell‘s Law Bell’s Law states, that: Important classes of computer architectures come in cycles of about 10 years. It takes about a decade for each phase : Early research Early adoption and maturation Prime usage Phase out past its prime Can we use Bell’s Law to classify computer architectures in the TOP500?
56
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 56 Bell‘s Law Gordon Bell (1972): 10 year cycles for computer classes Computer classes in HPC based on the TOP500: Data Parallel Systems: Vector (Cray Y-MP and X1, NEC SX, …) SIMD (CM-2, …) Custom Scalar Systems: MPP (Cray T3E and XT3, IBM SP, …) Scalar SMPs and Constellations (Cluster of big SMPs) Commodity Cluster: NOW, PC cluster, Blades, … Power-Efficient Systems (BG/L or BG/P as first example of low- power / embedded systems = potential new class ?) Tsubame with Clearspeed, Roadrunner with Cell ?
57
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 57 Bell‘s Law Computer Classes / Systems
58
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 58 Bell‘s Law Computer Classes - refined / Systems
59
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 59
Bell's Law - HPC Computer Classes (Class: Early Adoption starts / Prime Use starts / Past Prime Usage starts)
Data Parallel Systems: mid 70's / mid 80's / mid 90's
Custom Scalar Systems: mid 80's / mid 90's / mid 2000's
Commodity Cluster: mid 90's / mid 2000's / mid 2010's ???
BG/L or BG/P: mid 2000's / mid 2010's ??? / mid 2020's ???
60
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 60 Supercomputing, quo vadis? Countries/Systems (November 2007)
61
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 61 Supercomputing, quo vadis? Countries/Performance (November 2007)
62
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 62 Supercomputing, quo vadis? TOP50 Countries/Systems (November 2007)
63
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 63 Supercomputing, quo vadis? Continents/Systems
64
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 64 Supercomputing, quo vadis? Asian Countries / Systems
65
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 65 Supercomputing, quo vadis? Producing Regions / Systems
66
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 66 www.top500.org Top500, quo vadis?
67
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 67. Top500, quo vadis? Summary after fifteen years of experience: TOP500 proved itself by correcting the Mannheim supercomputer statistics. It is simplistic, but (or because of it) it gets trends right. It does not easily allow tracking of market size. It is inventory based, which smoothes seasonal fluctuation, yet turn-over is very high, so it still reflects recent developments. (Chart: Replacement Rate.)
68
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 68. Top500, quo vadis? Motivation for Additional Benchmarks. Linpack Benchmark pros: one number; simple to define and rank; allows the problem size to change with machine and over time; allows competitions. Cons: emphasizes only "peak" CPU speed and number of CPUs; does not stress local bandwidth; does not stress the network; no single number can reflect overall performance. Clearly something more than Linpack is needed: the HPC Challenge Benchmark and others.
69
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 69 Top500, quo vadis? Presented at ISC’06 June 27-30 2006
70
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007 page 70 Top500, quo vadis?
71
Slides from TOP500 (Meuer, Dongarra, Strohmaier, Simon) 2007, page 71
Top500, quo vadis? Green500 (http://www.green500.org) - Green500 Rank, MFLOPS/W, Site, Computer, Total Power (kW), TOP500 Rank
1. 357.23 - Science and Technology Facilities Council - Daresbury Laboratory - Blue Gene/P Solution - 31.10 kW - TOP500 #121
2. 352.25 - Max-Planck-Gesellschaft MPI/IPP - Blue Gene/P Solution - 62.20 kW - TOP500 #40
3. 346.95 - IBM - Rochester - Blue Gene/P Solution - 124.40 kW - TOP500 #24
4. 336.21 - Forschungszentrum Juelich (FZJ) - Blue Gene/P Solution - 497.60 kW - TOP500 #2
5. 310.93 - Oak Ridge National Laboratory - Blue Gene/P Solution - 70.47 kW - TOP500 #41
6. 210.56 - Harvard University - eServer Blue Gene Solution - 44.80 kW - TOP500 #170
7. 210.56 - High Energy Accelerator Research Organization/KEK - eServer Blue Gene Solution - 44.80 kW - TOP500 #171
8. 210.56 - IBM - Almaden Research Center - eServer Blue Gene Solution - 44.80 kW - TOP500 #172
9. 210.56 - IBM Research - eServer Blue Gene Solution - 44.80 kW - TOP500 #173
10. 210.56 - IBM Thomas J. Watson Research Center - eServer Blue Gene Solution - 44.80 kW - TOP500 #174
72
01/28/2009 CS267 Lecture 472 Summary Historically, each parallel machine was unique, along with its programming model and programming language. It was necessary to throw away software and start over with each new kind of machine. Now we distinguish the programming model from the underlying machine, so we can write portably correct codes that run on many machines. MPI now the most portable option, but can be tedious. Writing portably fast code requires tuning for the architecture. Algorithm design challenge is to make this process easy. Example: picking a blocksize, not rewriting whole algorithm.
73
01/28/2009 CS267 Lecture 4 Reading Assignment. Extra reading for today: Cray X1, http://www.sc-conference.org/sc2003/paperpdfs/pap183.pdf; Clusters, http://www.mirror.ac.uk/sites/www.beowulf.org/papers/ICPP95/; "Parallel Computer Architecture: A Hardware/Software Approach" by Culler, Singh, and Gupta, Chapter 1. Next week: current high performance architectures. Shared memory (for Monday): "Memory Consistency and Event Ordering in Scalable Shared-Memory Multiprocessors", Gharachorloo et al, Proceedings of the International Symposium on Computer Architecture, 1990; or read about the Altix system on the web (www.sgi.com). Blue Gene/L (for Wednesday): http://sc-2002.org/paperpdfs/pap.pap207.pdf
74
01/28/2009 CS267 Lecture 474 Open Source Software Model for HPC Linus's law, named after Linus Torvalds, the creator of Linux, states that "given enough eyeballs, all bugs are shallow". All source code is “open” Everyone is a tester Everything proceeds a lot faster when everyone works on one code (HPC: nothing gets done if resources are scattered) Software is or should be free (Stallman) Anyone can support and market the code for any price Zero cost software attracts users! Prevents community from losing HPC software (CM5, T3E)
75
01/28/2009 CS267 Lecture 475 Cluster of SMP Approach A supercomputer is a stretched high-end server Parallel system is built by assembling nodes that are modest size, commercial, SMP servers – just put more of them together Image from LLNL