
1 © Culler 1997 CS267 L28 Sort.1
CS 267 Applications of Parallel Computers
Lecture 28: LogP and the Implementation and Modeling of Parallel Sorts
James Demmel (taken from David Culler, Lecture 18, CS267, 1997)
http://www.cs.berkeley.edu/~demmel/cs267_Spr99

2 © Culler 1997 CS267 L28 Sort.2
Practical Performance Target (circa 1992)
Sort one billion large keys in one minute on one thousand processors.
A good sort on a workstation can do 1 million keys in about 10 seconds:
– just fits in memory
– 16-bit radix sort
Performance unit: µs per key per processor
– roughly 10 µs/key for a single Sparc 2

3 © Culler 1997 CS267 L28 Sort.3
Studies on Parallel Sorting
– Sorting networks and PRAM sorts (abstract models)
– Sorting on network Y, sorting on machine X (machine-specific)
– LogP sorts
[Diagram: a shared memory (MEM) with P processors vs. processor-memory (P-M) modules on a network]

4 © Culler 1997 CS267 L28 Sort.4
The Study
– Take interesting parallel sorting algorithms (Bitonic, Column, Histo-radix, Sample)
– Analyze them under LogP, with parameters measured for the CM-5, to estimate execution time
– Implement them in Split-C and execute on the CM-5
– Compare predicted and measured times

5 © Culler 1997 CS267 L28 Sort.5
LogP

6 © Culler 1997 CS267 L28 Sort.6
Deriving the LogP Model
° Processing
– powerful microprocessor, large DRAM, cache => P
° Communication
+ significant latency (100's of cycles) => L
+ limited bandwidth (1 – 5% of memory BW) => g
+ significant overhead (10's – 100's of cycles, on both ends) => o
+ limited capacity
– no consensus on topology => should not exploit structure
– no consensus on programming model => should not enforce one

7 © Culler 1997 CS267 L28 Sort.7
LogP
[Diagram: P processor-memory (P-M) modules attached to an interconnection network; the volume in flight is limited to L/g messages to or from any one processor]
– L: latency of sending a (small) message between modules
– o: overhead felt by the processor on sending or receiving a message
– g: gap between successive sends or receives at one processor (1/bandwidth)
– P: number of processors

8 © Culler 1997 CS267 L28 Sort.8
Using the Model
° Sending n messages from one processor to another takes time 2o + L + g(n-1)
– each processor spends o·n cycles on overhead
– and has (g-o)(n-1) + L compute cycles available in between
° Sending n messages from one processor to many takes the same time
° Sending n messages from many processors to one takes the same time
– but all except L/g senders block, so fewer compute cycles are available
[Timeline diagram: overhead o at each end, gap g between messages, latency L across the network]
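
To make the arithmetic concrete, here is a minimal C sketch of the slide's formula for a stream of n messages; the function name and the assumption that all parameters share one time unit (µs) are mine, not the deck's.

    /* Time for one processor to stream n small messages to another under
       LogP: send overhead, a gap before each of the remaining n-1 messages,
       the wire latency, and the receive overhead on the far end. */
    double msg_stream_time(int n, double L, double o, double g) {
        return 2.0 * o + L + g * (n - 1);
    }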

9 © Culler 1997 CS267 L28 Sort.9
Use of the Model (cont)
° Two processors sending n words to each other (i.e., an exchange) take time
  2o + L + (n-1) max(g, 2o) ≈ n max(g, 2o) + L
° P processors each sending n words to all processors (n/P to each) in a static, balanced pattern without conflicts (e.g., transpose, FFT, cyclic-to-block, block-to-cyclic) take the same time
° Same exercise: what's wrong with the formula above?
  It assumes an optimal pattern of sends and receives, so it can underestimate the time.
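
A matching sketch for the exchange formula above (again, the naming is mine; fmax comes from math.h):

    #include <math.h>

    /* Two processors exchanging n words each way: each one must both send
       and receive, so words are spaced by max(g, 2o) rather than by g. */
    double exchange_time(int n, double L, double o, double g) {
        return 2.0 * o + L + (n - 1) * fmax(g, 2.0 * o);
    }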

10 © Culler 1997 CS267 L28 Sort.10
LogP "philosophy"
Think about:
– the mapping of N words onto P processors
– computation within a processor, its cost, and its balance
– communication between processors, its cost, and its balance
given a characterization of processor and network performance.
Do not think about what happens within the network.
This should be good enough!

11 © Culler 1997 CS267 L28 Sort.11
Typical Sort
Exploits the n = N/P grouping:
° significant local computation
° very general global communication / transformation
° computation of the transformation itself

12 © Culler 1997 CS267 L28 Sort.12
Split-C
Explicitly parallel C with a 2D global address space (processors P0 ... Pprocs-1, each with a local region)
– linear ordering on the local spaces
Local and global pointers
– spread arrays too
Read/write and get/put (overlap compute and communication)
– x := G; ...; sync();
Signaling store (one-way)
– G :- x; ...; store_sync(); or all_store_sync();
Bulk transfer
Global communication
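
As an illustration only, here is a small fragment in the style of the constructs just listed (split-phase get, signaling store); it has not been checked against a Split-C compiler, and the variable names are hypothetical.

    int *global G;            /* global pointer into another proc's space  */
    int x, y;

    void example(void) {
        x := *G;              /* split-phase get: issue the remote read... */
        /* ...overlap independent local computation here...                */
        sync();               /* ...then wait for the value to arrive      */

        *G :- y;              /* signaling store: one-way put              */
        all_store_sync();     /* every processor waits until stores land   */
    }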

13 © Culler 1997 CS267 L28 Sort.13
Basic Costs of Operations in Split-C

Operation    Split-C form                           Cost
Read, Write  x = *G; *G = x                         2(L + 2o)
Store        *G :- x                                L + 2o
Get          x := *G                                o to issue; sync() costs o; data arrives within 2L + 2o, repeated gets at interval g
Bulk store   (n words, several words per message)   2o + (n-1)g + L
Exchange     (n words each way)                     2o + 2L + (2n - L/g) max(g, 2o)

[Diagram: one-to-many and many-to-one patterns]

14 © Culler 1997 CS267 L28 Sort.14
LogP model parameters
CM-5:
– L = 6 µs
– o = 2.2 µs
– g = 4 µs
– P varies from 32 to 1024
NOW:
– L = 8.9 µs
– o = 3.8 µs
– g = 12.8 µs
– P varies up to 100
What is the processor performance?
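
Plugging the CM-5 numbers into the two hypothetical helpers sketched earlier gives a feel for the magnitudes (all in µs):

    #include <stdio.h>

    int main(void) {
        double L = 6.0, o = 2.2, g = 4.0;                  /* CM-5        */
        printf("%.1f\n", msg_stream_time(1000, L, o, g));  /* 4006.4 µs   */
        printf("%.1f\n", exchange_time(1000, L, o, g));    /* 4406.0 µs   */
        return 0;
    }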

15 © Culler 1997 CS267 L28 Sort.15
Sorting

16 © Culler 1997 CS267 L28 Sort.16
Local Sort Performance (11-bit radix sort of 32-bit numbers)
[Plot: µs per key vs. log N/P, one curve per entropy in the key values (31, 25.1, 16.9, 10.4, and 6.2 bits)]
Entropy = -Σ_i p_i log p_i, where p_i = probability of key value i
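
For concreteness, a small C helper (mine, not the deck's) computing the slide's entropy in bits from a histogram of key values:

    #include <math.h>

    /* H = -sum_i p_i log2(p_i), with p_i the fraction of keys of value i. */
    double key_entropy(const unsigned long *count, long nvalues,
                       unsigned long total) {
        double h = 0.0;
        for (long i = 0; i < nvalues; i++) {
            if (count[i] == 0) continue;
            double p = (double)count[i] / (double)total;
            h -= p * log2(p);
        }
        return h;
    }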

17 © Culler 1997 CS267 L28 Sort.17
Local Computation Parameters (Empirical)

Parameter    Operation                            µs per key                      Sort
swap         simulate cycle of butterfly per key  0.025 lg N                      Bitonic
mergesort    sort bitonic sequence                1.0                             Bitonic
scatter      move key for cyclic-to-block         0.46                            Bitonic & Column
gather       move key for block-to-cyclic         0.52 if n <= 64K or P <= 64,    Bitonic & Column
                                                  1.1 otherwise
local sort   local radix sort (11 bit)            4.5 if n < 64K,                 Bitonic & Column
                                                  9.0 - (281000/n) otherwise
merge        merge sorted lists                   1.5                             Column
copy         shift key                            0.5                             Column
zero         clear histogram bin                  0.2                             Radix
hist         produce histogram                    1.2                             Radix
add          produce scan value                   1.0                             Radix
bsum         adjust scan of bins                  2.5                             Radix
address      determine destination                4.7                             Radix
compare      compare key to splitter              0.9                             Sample
localsort8   local radix sort of samples          5.0                             Sample
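
The local-sort row of the table is a small piecewise model; codified in C (function name mine), it reads:

    /* Empirical cost of the 11-bit local radix sort, in µs per key. */
    double local_radix_us_per_key(double n) {
        return (n < 65536.0) ? 4.5 : 9.0 - 281000.0 / n;
    }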

18 © Culler 1997 CS267 L28 Sort.18
Bottom Line (Preview)
[Plot: µs per key vs. N/P (16K to 1M) for Bitonic, Column, Radix, and Sample sorts at P = 32 and P = 1024]
– Good fit between predicted and measured (within 10%)
– Different sorts for different sorts: scaling by processor count, input size, sensitivity
– All are global/local hybrids; the local part is hard to implement and model

19 © Culler 1997 CS267 L28 Sort.19
Odd-Even Merge: a classic parallel sort
N values to be sorted; treat them as two lists of M = N/2:
A_0 A_1 A_2 ... A_{M-1} and B_0 B_1 B_2 ... B_{M-1}
Sort each separately.
Redistribute into even and odd sublists:
A_0 A_2 ... A_{M-2}, B_0 B_2 ... B_{M-2} and A_1 A_3 ... A_{M-1}, B_1 B_3 ... B_{M-1}
Merge into two sorted lists:
E_0 E_1 E_2 ... E_{M-1} and O_0 O_1 O_2 ... O_{M-1}
A single round of pairwise compare-exchanges between the E_i and O_i then puts the whole sequence in order.
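
A sequential C sketch of this merge step, under assumptions of mine (integer keys, M even); note that in Batcher's construction the final compare-exchanges pair O_i with E_{i+1}:

    #include <stdlib.h>

    /* Merge two sorted length-n lists x and y into out. */
    static void merge2(const int *x, int nx, const int *y, int ny, int *out) {
        int i = 0, j = 0, k = 0;
        while (i < nx && j < ny) out[k++] = (x[i] <= y[j]) ? x[i++] : y[j++];
        while (i < nx) out[k++] = x[i++];
        while (j < ny) out[k++] = y[j++];
    }

    /* Odd-even merge of sorted a[0..m) and b[0..m) into out[0..2m). */
    void odd_even_merge(const int *a, const int *b, int m, int *out) {
        int h = m / 2;                        /* m assumed even           */
        int *buf = malloc((size_t)(4 * m) * sizeof *buf);
        int *ae = buf,       *ao = buf + h;
        int *be = buf + 2*h, *bo = buf + 3*h;
        int *E  = buf + 2*m, *O  = buf + 3*m;
        for (int i = 0; i < h; i++) {         /* even and odd sublists    */
            ae[i] = a[2*i]; ao[i] = a[2*i + 1];
            be[i] = b[2*i]; bo[i] = b[2*i + 1];
        }
        merge2(ae, h, be, h, E);              /* E_0 ... E_{m-1}          */
        merge2(ao, h, bo, h, O);              /* O_0 ... O_{m-1}          */
        out[0] = E[0];                        /* interleave with swaps    */
        for (int i = 0; i + 1 < m; i++) {
            int swap = O[i] > E[i + 1];
            out[2*i + 1] = swap ? E[i + 1] : O[i];
            out[2*i + 2] = swap ? O[i]     : E[i + 1];
        }
        out[2*m - 1] = O[m - 1];
        free(buf);
    }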

20 © Culler 1997 CS267 L28 Sort.20
Where's the Parallelism?
[Diagram: recursive merge tree over E_0 E_1 ... E_{M-1} and O_0 O_1 ... O_{M-1}: one merge of size N at the top, then two of size N/2, then four of size N/4, and so on]

21 © Culler 1997 CS267 L28 Sort.21
Mapping to a Butterfly (or Hypercube)
[Diagram: two sorted sublists (A0 A1 A2 A3 and B0 B1 B2 B3); reverse the order of one list via the cross edges, then do pairwise swaps on the way back. Example in the diagram: the sorted sublists 2 3 4 8 and 1 5 6 7 become the bitonic sequence 2 3 4 8 7 6 5 1, and the swap stages yield 1 2 3 4 5 6 7 8]

22 © Culler 1997 CS267 L28 Sort.22
Bitonic Sort with N/P per node

all_bitonic(int A[PROCS]::[n])
    sort(tolocal(&A[ME][0]), n, 0)          /* local sort of each block  */
    for (d = 1; d <= logProcs; d++)         /* each butterfly dimension  */
        for (i = d-1; i >= 0; i--) {
            swap(A, T, n, pair(i));         /* exchange with partner i   */
            merge(A, T, n, mode(d, i));     /* local merge of the pieces */
        }
    sort(tolocal(&A[ME][0]), n, mask(i));   /* final local sort          */

A bitonic sequence decreases and then increases (or vice versa).
Bitonic sequences can be merged like monotonic sequences.

23 © Culler 1997 CS267 L28 Sort.23
Bitonic Sort, block layout
– The first lg N/P stages are a local sort
– Each remaining stage involves a block-to-cyclic remap, local merges (the i - lg N/P columns), a cyclic-to-block remap, and local merges (the lg N/P columns within the stage)

24 © Culler 1997 CS267 L28 Sort.24
Analysis of Bitonic
How do you do the transpose? Reading exercise.

25 © Culler 1997 CS267 L28 Sort.25
Bitonic Sort: time per key
[Two plots, predicted and measured: µs per key vs. N/P (16K to 1M) for P = 32 to 512]

26 © Culler 1997 CS267 L28 Sort.26
Bitonic: Breakdown
[Plot: time breakdown, P = 512, random keys]

27 © Culler 1997 CS267 L28 Sort.27
Bitonic: Effect of Key Distributions
[Plot: P = 64, N/P = 1M]

28 © Culler 1997 CS267 L28 Sort.28
Column Sort
Treat the data as an n x P array with n >= P^2, i.e., N >= P^3:
(1) Sort
(2) Transpose: block to cyclic
(3) Sort
(4) Transpose: cyclic to block, without scatter
(5) Sort
(6) Shift
(7) Merge
(8) Unshift
– work efficient

29 © Culler 1997 CS267 L28 Sort.29
Column Sort: Times
[Two plots, predicted and measured: µs per key vs. N/P (16K to 1M) for P = 32 to 512]
Only works for N >= P^3

30 © Culler 1997 CS267 L28 Sort.30
Column: Breakdown
[Plot: time breakdown, P = 64, random keys]

31 © Culler 1997 CS267 L28 Sort.31
Column: Key Distributions
[Plot: µs per key vs. entropy (0 to 31 bits) at P = 64, N/P = 1M, broken down into merge, sorts, remaps, and shifts]

32 © Culler 1997 CS267 L28 Sort.32
Histo-radix sort
P processors, n = N/P keys each, 2^r buckets. Per pass:
1. compute the local histogram
2. compute the position of the first member of each bucket in the global array (2^r scans with end-around carry)
3. distribute all the keys
Only r = 8, 11, 16 make sense for sorting 32-bit numbers.

33 © Culler 1997 CS267 L28 Sort.33
Histo-Radix Sort (again)
Each pass:
– form local histograms
– form the global histogram
– globally distribute the data
[Diagram: local data and local histograms across the P processors]
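
Shown sequentially for clarity, one pass of the histogram radix sort does exactly the three steps above; the parallel version replaces the scan with the cross-processor scans the slides mention. The names and the r = 11 choice are illustrative:

    #include <string.h>

    #define RBITS   11
    #define BUCKETS (1 << RBITS)

    /* One radix pass on the digit selected by `shift`. */
    void radix_pass(const unsigned *src, unsigned *dst, long n, int shift) {
        static long count[BUCKETS], pos[BUCKETS];
        memset(count, 0, sizeof count);
        for (long i = 0; i < n; i++)              /* 1. local histogram   */
            count[(src[i] >> shift) & (BUCKETS - 1)]++;
        long sum = 0;
        for (long b = 0; b < BUCKETS; b++) {      /* 2. exclusive scan:   */
            pos[b] = sum;                         /*    bucket start      */
            sum += count[b];
        }
        for (long i = 0; i < n; i++) {            /* 3. distribute keys   */
            unsigned d = (src[i] >> shift) & (BUCKETS - 1);
            dst[pos[d]++] = src[i];
        }
    }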

34 © Culler 1997 CS267 L28 Sort.34
Radix Sort: Times
[Two plots, predicted and measured: µs per key vs. N/P (16K to 1M) for P = 32 to 512]

35 © Culler 1997 CS267 L28 Sort.35
Radix: Breakdown

36 © Culler 1997 CS267 L28 Sort.36
Radix: Key Distribution
Slowdown due to contention in the redistribution.

37 © Culler 1997 CS267 L28 Sort.37
Radix: Stream Broadcast Problem
Broadcasting a stream of n messages to P-1 processors: is the time (P-1)(2o + L + (n-1)g)?
Need to slow the first processor for the stream to pipeline well.

38 © Culler 1997 CS267 L28 Sort.38
What's the right communication mechanism?
– Permutation via writes: consistency model? false sharing?
– Reads?
– Bulk transfers? what do you need to change in the algorithm?
– Network scheduling?

39 © Culler 1997 CS267 L28 Sort.39
Sample Sort
1. Compute P-1 splitter keys that divide the input into roughly equal pieces:
– take S ~ 64 samples per processor
– sort the P·S sampled keys
– take keys S, 2S, ..., (P-1)S as splitters
– broadcast the splitters
2. Distribute the keys based on the splitters
3. Local sort
[4.] possibly reshift
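
A sequential C sketch of steps 1 and 2 (splitter selection and destination lookup); everything here, including the names, is illustrative rather than the paper's code:

    #include <stdlib.h>

    static int cmp_uint(const void *a, const void *b) {
        unsigned x = *(const unsigned *)a, y = *(const unsigned *)b;
        return (x > y) - (x < y);
    }

    /* Sort the P*S gathered samples and keep every S-th as a splitter. */
    void choose_splitters(unsigned *samples, int P, int S,
                          unsigned *splitters /* P-1 entries */) {
        qsort(samples, (size_t)P * S, sizeof *samples, cmp_uint);
        for (int i = 1; i < P; i++)
            splitters[i - 1] = samples[i * S];
    }

    /* Destination processor for a key: binary search over the splitters. */
    int dest_proc(unsigned key, const unsigned *splitters, int P) {
        int lo = 0, hi = P - 1;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (key < splitters[mid]) hi = mid;
            else lo = mid + 1;
        }
        return lo;
    }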

40 © Culler 1997 CS267 L28 Sort.40
Sample Sort: Times
[Two plots, predicted and measured: µs per key vs. N/P (16K to 1M) for P = 32 to 512]

41 © Culler 1997 CS267 L28 Sort.41
Sample Sort: Breakdown
[Plot: µs per key vs. N/P (16K to 1M), split into split, sort, and distribute phases, measured and modeled]

42 © Culler 1997 CS267 L28 Sort.42
Comparison
[Plot: µs per key vs. N/P (16K to 1M) for Bitonic, Column, Radix, and Sample sorts at P = 32 and P = 1024]
– Good fit between predicted and measured (within 10%)
– Different sorts for different sorts: scaling by processor count, input size, sensitivity
– All are global/local hybrids; the local part is hard to implement and model

43 © Culler 1997 CS267 L28 Sort.43
Conclusions
– The distributed-memory model leads to hybrid global/local algorithms
– The LogP model is good enough for the global part:
» bandwidth (g) or overhead (o) matter most, including end-point contention
» latency (L) only matters when bandwidth doesn't
» g is going to be what really matters in the days ahead (NOW)
– Local computational performance is hard!
» dominated by effects of the storage hierarchy (TLBs)
» getting trickier with multiple levels: the physical address determines L2 cache behavior
» and with real computers at the nodes (VM)
» and with variations in the model: cycle time, caches, ...
See http://www.cs.berkeley.edu/~culler/papers/sort.ps
See http://now.cs.berkeley.edu/Papers2/Postscript/spdt98.ps (disk-to-disk parallel sorting)

