
1 Performance Measurements of CCR and MPI on Multicore Systems
Expanded from a poster at Grid 2007, Austin, Texas, September 21 2007
Xiaohong Qiu, Research Computing UITS, Indiana University, Bloomington IN
Geoffrey Fox, H. Yuan, Seung-Hee Bae, Community Grids Laboratory, Indiana University, Bloomington IN 47404
George Chrysanthakopoulos, Henrik Frystyk Nielsen, Microsoft Research, Redmond WA
Presented by Geoffrey Fox, gcf@indiana.edu, http://www.infomall.org

2 Motivation
Exploring possible applications for tomorrow's multicore chips (especially clients) with 64 or more cores (roughly 5 years out)
One plausible set of applications is data mining of Internet and local sensor data
Developing a library of efficient data-mining algorithms
– Clustering (GIS, cheminformatics) and Hidden Markov Methods (speech recognition)
Choose algorithms that parallelize well

3 Approach
We need three forms of parallelism:
– MPI style
– Dynamic threads, as in pruned search
– Coarse-grain functional parallelism
Do not use an integrated language approach as in DARPA HPCS; rather, use "mash-ups" or "workflow" to link together modules in optimized parallel libraries
Use Microsoft CCR/DSS, where DSS is the mash-up/workflow model built on CCR, and CCR supports MPI-style or dynamic threads

4 Microsoft CCR
Supports exchange of messages between threads using named ports
FromHandler: spawn threads without reading ports
Receive: each handler reads one item from a single port
MultipleItemReceive: each handler reads a prescribed number of items of a given type from a given port. Note that items in a port can be general structures, but all must have the same type.
MultiplePortReceive: each handler reads one item of a given type from multiple ports
JoinedReceive: each handler reads one item from each of two ports. The items can be of different types.
Choice: execute a choice of two or more port-handler pairings
Interleave: consists of a set of arbiters (port-handler pairs) of 3 types: Concurrent, Exclusive, or Teardown (called at the end for clean-up). Concurrent arbiters are run concurrently, but Exclusive handlers run one at a time.
http://msdn.microsoft.com/robotics/
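To make the port/handler model concrete, here is a minimal C# sketch of the basic Receive pattern. It assumes the Microsoft.Ccr.Core API (Dispatcher, DispatcherQueue, Port<T>, Arbiter.Receive/Activate) as we recall it from the CCR runtime; the class and names are illustrative, not the benchmark code measured in these slides.

    using System;
    using Microsoft.Ccr.Core;

    class CcrReceiveSketch
    {
        static void Main()
        {
            // A Dispatcher owns the worker threads; 0 asks for one thread per core.
            using (var dispatcher = new Dispatcher(0, "demo"))
            {
                var queue = new DispatcherQueue("demoQueue", dispatcher);
                var port = new Port<int>();

                // Persistent Receive: the handler fires once for every item posted to the port.
                Arbiter.Activate(queue,
                    Arbiter.Receive(true, port, item =>
                        Console.WriteLine("processed item " + item)));

                // Posting to the named port is how threads exchange messages.
                for (int i = 0; i < 4; i++)
                    port.Post(i);

                Console.ReadLine();   // keep the process alive while handlers run
            }
        }
    }

The same Arbiter.Activate call accepts the other arbiters listed above (MultipleItemReceive, JoinedReceive, Choice, Interleave) in place of Receive.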

5 Preliminary Results
Parallel deterministic annealing clustering in C# with a speed-up of 7 on Intel systems with two quad-core chips
Analysis of the performance of Java, C, and C# in MPI and dynamic threading with XP, Vista, Windows Server, Fedora, and Red Hat on Intel/AMD systems
Study of cache effects that arise with MPI/thread-based parallelism
Study of execution-time fluctuations in Windows (limiting speed-up to 7, not 8!)

6 Machines Used
AMD4: HP xw9300 workstation, 2 AMD Opteron 275 CPUs at 2.19 GHz, 4 cores, L2 cache 4x1 MB (summing both chips), memory 4 GB, XP Pro 64-bit / Windows Server / Red Hat. C# benchmark computational unit: 1.388 µs
Intel4: Dell Precision PWS670, 2 Intel Xeon Paxville CPUs at 2.80 GHz, 4 cores, L2 cache 4x2 MB, memory 4 GB, XP Pro 64-bit. C# benchmark computational unit: 1.475 µs
Intel8a: Dell Precision PWS690, 2 Intel Xeon E5320 CPUs at 1.86 GHz, 8 cores, L2 cache 4x4 MB, memory 8 GB, XP Pro 64-bit. C# benchmark computational unit: 1.696 µs
Intel8b: Dell Precision PWS690, 2 Intel Xeon E5355 CPUs at 2.66 GHz, 8 cores, L2 cache 4x4 MB, memory 4 GB, Vista Ultimate 64-bit / Fedora 7. C# benchmark computational unit: 1.188 µs
Intel8c: Dell Precision PWS690, 2 Intel Xeon E5345 CPUs at 2.33 GHz, 8 cores, L2 cache 4x4 MB, memory 8 GB, Red Hat 5.0 / Fedora 7

7 Basic Performance of CCR

8 AMD4 (4 cores): CCR overhead in µs for a computation of 27.76 µs between messaging
                                    Number of parallel computations
                                    1      2      3      4      7      8
Spawned     Pipeline                1.76   4.52   4.4    4.84   1.42   8.54
            Shift                          4.48   4.62   4.8    0.84   8.94
            Two Shifts                     7.44   8.9    10.18  12.74  23.92
Rendezvous  Pipeline                3.7    5.88   6.52   6.74   8.54   14.98
(MPI)       Shift                          6.8    8.42   9.36   2.74   11.16
            Exchange as Two Shifts         14.1   15.9   19.14  11.78  22.6
            Exchange                       10.32  15.5   16.3   11.3   21.38

9 Intel4 (4 cores): CCR overhead in µs for a computation of 29.5 µs between messaging
                                    Number of parallel computations
                                    1      2      3      4      7      8
Spawned     Pipeline                3.32   8.3    9.38   10.18  3.02   12.12
            Shift                          8.3    9.34   10.08  4.38   13.52
            Two Shifts                     17.64  19.32  21     28.74  44.02
Rendezvous  Pipeline                9.36   12.08  13.02  13.58  16.68  25.68
(MPI)       Shift                          12.56  13.7   14.4   4.72   15.94
            Exchange as Two Shifts         23.76  27.48  30.64  22.14  36.16
            Exchange                       18.48  24.02  25.76  20     34.56

10 Intel8b (8 cores): CCR overhead in µs for a computation of 23.76 µs between messaging
                                    Number of parallel computations
                                    1      2      3      4      7      8
Spawned     Pipeline                1.58   2.44   3      2.94   4.5    5.06
            Shift                          2.42   3.2    3.38   5.26   5.14
            Two Shifts                     4.94   5.9    6.84   14.32  19.44
Rendezvous  Pipeline                2.48   3.96   4.52   5.78   6.82   7.18
(MPI)       Shift                          4.46   6.42   5.86   10.86  11.74
            Exchange as Two Shifts         7.4    11.64  14.16  31.86  35.62
            Exchange                       6.94   11.22  13.3   18.78  20.16

11 Overhead (latency) of the AMD4 PC with 4 execution threads on MPI-style rendezvous messaging for Shift and Exchange, implemented either as two shifts or as the custom CCR Exchange pattern [plot: time in microseconds vs. stages (millions)]

12 Overhead (latency) of the Intel8b PC with 8 execution threads on MPI-style rendezvous messaging for Shift and Exchange, implemented either as two shifts or as the custom CCR Exchange pattern [plot: time in microseconds vs. stages (millions)]

13 Basic Performance of MPI for C and Java

14 MPI Exchange Latency in µs with 500,000 stages (20-30 µs computation between messaging)
Machine        OS        Runtime        Grains    Parallelism   MPI Exchange Latency (µs)
Intel8c:gf12   Red Hat   MPJE           Process   8             181
Intel8c:gf12   Red Hat   MPICH2         Process   8             40.0
Intel8c:gf12   Red Hat   MPICH2: Fast   Process   8             39.3
Intel8c:gf12   Red Hat   Nemesis        Process   8             4.21
Intel8c:gf20   Fedora    MPJE           Process   8             157
Intel8c:gf20   Fedora    mpiJava        Process   8             111
Intel8c:gf20   Fedora    MPICH2         Process   8             64.2
Intel8b        Vista     MPJE           Process   8             170
Intel8b        Fedora    MPJE           Process   8             142
Intel8b        Fedora    mpiJava        Process   8             100
Intel8b        Vista     CCR            Thread    8             20.2
AMD4           XP        MPJE           Process   4             185
AMD4           Red Hat   MPJE           Process   4             152
AMD4           Red Hat   mpiJava        Process   4             99.4
AMD4           Red Hat   MPICH2         Process   4             39.3
AMD4           XP        CCR            Thread    4             16.3
Intel4         XP        CCR            Thread    4             25.8

15 MPI Shift Latency on AMD4 [plot: latency vs. stages (millions) for MPICH, mpiJava, MPJE]

16 MPI Exchange Latency on AMD4 [plot: latency vs. stages (millions) for MPICH, mpiJava, MPJE]

17 MPI Exchange Latency on Intel8c Red Hat [plot: latency vs. stages (millions) for MPICH, Nemesis, MPJE]

18 Cache Line Interference

19 Cache Line Interference
Early implementations of our clustering algorithm showed large fluctuations due to the cache line interference effect, discussed here and on the next slide in a simple case
We have one thread on each core, each calculating a sum of the same complexity and storing the result in a common array A, with different cores using different array locations
Thread i storing its sum in A(i) is separation 1 – no variable-access interference, but cache line interference
Thread i storing its sum in A(X*i) is separation X (see the sketch below)
Serious degradation if X < 8 (64 bytes) with Windows
– Note A is an array of doubles (8 bytes each)
– Less interference effect with Linux – especially Red Hat
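A minimal sketch of the kind of micro-benchmark described above (the class, method names, and iteration counts are illustrative assumptions, not the actual test code): each thread accumulates into its own slot of a shared double array, with the slot spacing set by the separation X, so that X < 8 places several threads' accumulators in the same 64-byte cache line.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CacheLineSketch
    {
        // Time the same summation for different separations X between the
        // array slots written by different threads.
        static void Run(int threads, int separation, int iterations)
        {
            double[] A = new double[threads * separation];    // shared result array
            var workers = new Thread[threads];
            var sw = Stopwatch.StartNew();
            for (int t = 0; t < threads; t++)
            {
                int slot = t * separation;                     // thread t writes A[X*t]
                workers[t] = new Thread(() =>
                {
                    for (int i = 0; i < iterations; i++)
                        A[slot] += 1.0 / (i + 1.0);            // identical work on every thread
                });
                workers[t].Start();
            }
            foreach (var w in workers) w.Join();
            Console.WriteLine("separation {0}: {1} ms", separation, sw.ElapsedMilliseconds);
        }

        static void Main()
        {
            // X < 8 puts several 8-byte doubles in one 64-byte cache line;
            // note the array start is not necessarily cache-line aligned (see next slide).
            foreach (int x in new[] { 1, 2, 4, 8 })
                Run(Environment.ProcessorCount, x, 10000000);
        }
    }

Per the measurements described above, on Windows the X = 1 case would be expected to run markedly slower than X = 8.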

20 Cache Line Interference
Note measurements at a separation of 8 (and values between 8 and 1024, not shown) are essentially identical
Measurements at 7 (not shown) are higher than those at 8 (except for Red Hat, which shows essentially no degradation for X < 8)
If the effect is due to co-location of thread variables in a 64-byte cache line, the array must be aligned with cache boundaries to remove it completely
– In early implementations we found poor X=8 performance, as expected when words of A are split across cache lines

21 Clustering Problem

22 Deterministic Annealing
See K. Rose, "Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related Optimization Problems," Proceedings of the IEEE, vol. 86, pp. 2210-2239, November 1998
Parallelization is similar to ordinary K-means, as we are calculating global sums that are decomposed into local averages and then summed over the components calculated in each processor (see the sketch below)
Many similar data-mining algorithms (such as annealing versions of E-M expectation maximization) have high parallel efficiency and avoid local minima
For more details see
– http://grids.ucs.indiana.edu/ptliupages/presentations/Grid2007PosterSept19-07.ppt
– http://grids.ucs.indiana.edu/ptliupages/presentations/PC2007/PC07BYOPA.ppt
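As a sketch of that global-sum structure (illustrative only: hard K-means-style assignments, 1-D points, and the later .NET Task Parallel Library are used here for brevity; the work reported in these slides uses CCR threads and deterministic-annealing weights), each task forms local sums over its own block of points, and the partial sums are then combined into the global sums used to update the cluster centers:

    using System;
    using System.Threading.Tasks;

    class GlobalSumSketch
    {
        // points[i] is a 1-D data point, assign[i] its current cluster.
        // Each task forms local sums over its own block of points; the partial
        // sums are then combined into global sums used to update the centers.
        static double[] UpdateCenters(double[] points, int[] assign, int numClusters, int numTasks)
        {
            int n = points.Length;
            var localSum = new double[numTasks][];
            var localCount = new int[numTasks][];

            Parallel.For(0, numTasks, t =>
            {
                var sum = new double[numClusters];
                var count = new int[numClusters];
                int lo = t * n / numTasks, hi = (t + 1) * n / numTasks;   // this task's block
                for (int i = lo; i < hi; i++)
                {
                    sum[assign[i]] += points[i];
                    count[assign[i]]++;
                }
                localSum[t] = sum;
                localCount[t] = count;
            });

            // Combine the per-task partial sums: numTasks x numClusters terms,
            // cheap compared with the per-point work above.
            var centers = new double[numClusters];
            var totals = new int[numClusters];
            for (int t = 0; t < numTasks; t++)
                for (int c = 0; c < numClusters; c++)
                {
                    centers[c] += localSum[t][c];
                    totals[c] += localCount[t][c];
                }
            for (int c = 0; c < numClusters; c++)
                if (totals[c] > 0) centers[c] /= totals[c];
            return centers;
        }

        static void Main()
        {
            var pts = new[] { 0.1, 0.2, 0.8, 0.9 };
            var asn = new[] { 0, 0, 1, 1 };
            foreach (var c in UpdateCenters(pts, asn, 2, 2))
                Console.WriteLine(c);
        }
    }

Because the reduction step is tiny compared with the per-point work, this decomposition keeps parallel efficiency high, which is the point made above.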

23 Parallel Multicore Deterministic Annealing Clustering
Parallel overhead on 8 threads, Intel 8b, plotted against 10000/(grain size n = points per core), for 10 and 20 clusters
Speedup = 8/(1 + Overhead)
Overhead = Constant1 + Constant2/n
Constant1 = 0.05 to 0.1 (client Windows), due to thread runtime fluctuations
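To make the speed-up formula concrete (the arithmetic below simply substitutes the two quoted endpoints of Constant1 and neglects the Constant2/n term, which is small at large grain size):

    Overhead = 0.05  ->  Speedup = 8 / 1.05 ≈ 7.6
    Overhead = 0.10  ->  Speedup = 8 / 1.10 ≈ 7.3

This is the origin of the "speed-up of 7, not 8" noted on slide 5.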

24 Parallel Multicore Deterministic Annealing Clustering
"Constant1": increasing the number of clusters decreases communication/memory-bandwidth overheads
Parallel overhead for large (2M points) Indiana census clustering on 8 threads, Intel 8b
This fluctuating overhead is due to 5-10% runtime fluctuations between threads

25 Scaled Speed-up Tests
The full clustering algorithm involves different values of the number of clusters NC as the computation progresses
The amount of computation per data point is proportional to NC, so overhead due to memory bandwidth (cache misses) declines as NC increases
We did a set of tests on the clustering kernel with fixed NC
Further, we adopted the scaled speed-up approach, looking at performance as a function of the number of parallel threads with a constant number of data points assigned to each thread
– This contrasts with the fixed-problem-size scenario, where the number of data points per thread is inversely proportional to the number of threads
We plot the run time for the same workload per thread divided by the number of data points multiplied by the number of clusters multiplied by the time at the smallest data set (10,000 data points per thread); one reading of this normalization is written out below
We expect this normalized run time to be independent of the number of threads were it not for parallel and memory-bandwidth overheads
– It will decrease as NC increases, since the number of computations per point fetched from memory increases proportionally to NC
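One way to read that normalization (this is our reading of the slide's wording, not a formula given explicitly in the source): with p data points per thread, NC clusters, and t0 the per-point, per-cluster time measured at the smallest data set (10,000 points per thread),

    scaled run time = T(p, NC) / (p × NC × t0)

so the plotted value is about 1 for the base case, would stay flat as threads are added in the absence of parallel and memory-bandwidth overheads, and falls as NC grows because more arithmetic is done per point fetched from memory.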

26 Intel 8b C with 1 Cluster: Vista Scaled Run Time for Clustering Kernel [plot: scaled run time vs. number of threads]
Note the smallest dataset has the highest overheads as we increase the number of threads
– Not clear why this is

27 Intel 8b C with 80 Clusters: Vista Scaled Run Time for Clustering Kernel [plot: scaled run time vs. number of threads]
As we increase the number of clusters, the effects at 10,000 data points decrease

28 Intel 8b C# with 1 Cluster: Vista Scaled Run Time for Clustering Kernel [plot: scaled run time vs. number of threads]
C# is similar to C with larger effects

29 Intel 8b C# with 1 Cluster: Vista Run Time Fluctuations for Clustering Kernel [plot: standard deviation/run time vs. number of threads]
This is the average of the standard deviation of the run time of the 8 threads between messaging synchronization points

30 Intel 8b C# with 80 Clusters: Vista Scaled Run Time for Clustering Kernel [plot: scaled run time vs. number of threads]
C# is similar to C with larger effects

31 AMD4 C with 1 Cluster: XP Scaled Run Time for Clustering Kernel [plot: scaled run time vs. number of threads]
This is significantly more stable than the Intel runs and shows little or no memory bandwidth effect

32 AMD4 C# with 1 Cluster: XP Scaled Run Time for Clustering Kernel [plot: scaled run time vs. number of threads]
This is significantly more stable than the Intel C# 1 Cluster runs

33 AMD4 C# with 80 Clusters: XP Scaled Run Time for Clustering Kernel [plot: scaled run time vs. number of threads]
This is broadly similar to the 80 Cluster Intel C# runs, unlike the one cluster case, which was very different

34 AMD4 C# with 1 Cluster: Windows Server Scaled Run Time for Clustering Kernel [plot: scaled run time vs. number of threads]
This is significantly more stable than the Intel C# runs

35 AMD4 C# with 80 Clusters: Windows Server Scaled Run Time for Clustering Kernel [plot: scaled run time vs. number of threads]
Curiously, run time decreases a bit as the number of threads increases in some AMD4 scenarios

36 Intel 8c C with 1 Cluster: Red Hat Scaled Run Time for Clustering Kernel [plot: scaled run time vs. number of threads]
Deviations from "perfect" scaled speed-up are much less for Red Hat than for Windows

37 Intel 8c C with 80 Clusters: Red Hat Scaled Run Time for Clustering Kernel [plot: scaled run time vs. number of threads]
Deviations from "perfect" scaled speed-up are much less for Red Hat

38 Intel 8b C# with 80 Clusters: Vista Run Time Fluctuations for Clustering Kernel [plot: standard deviation/run time vs. number of threads]
This is the average of the standard deviation of the run time of the 8 threads between messaging synchronization points

39 AMD4 with 1 Cluster: Windows Server Run Time Fluctuations for Clustering Kernel [plot: standard deviation/run time vs. number of threads]
This is the average of the standard deviation of the run time of the threads between messaging synchronization points
XP (not shown) is similar

40 Intel 8c with 80 Clusters: Red Hat Run Time Fluctuations for Clustering Kernel [plot: standard deviation/run time vs. number of threads]
This is the average of the standard deviation of the run time of the 8 threads between messaging synchronization points

41 DSS Section
We view the system as a collection of services – in this case:
– one to supply data
– one to run the parallel clustering
– one to visualize results – in this case by spawning a Google Maps browser
Note we are clustering Indiana census data
DSS is convenient as it is built on CCR (a conceptual sketch of the three-service pipeline follows)
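The DSS API itself is not shown in these slides, so the sketch below is only a conceptual stand-in for the three-service pipeline, written directly against CCR ports (the service names, message types, and the clustering/visualization steps are placeholders; a real DSS implementation would use DSS service contracts rather than raw ports):

    using System;
    using Microsoft.Ccr.Core;

    class WorkflowSketch
    {
        // Conceptual pipeline: data supplier -> clustering -> visualization.
        // Each stage is a port with a persistent handler, mimicking how DSS
        // services built on CCR pass messages between the three services.
        static void Main()
        {
            using (var dispatcher = new Dispatcher(0, "workflow"))
            {
                var queue = new DispatcherQueue("workflowQueue", dispatcher);
                var rawData = new Port<double[]>();
                var clustered = new Port<int[]>();

                // "Clustering service": consumes raw points, posts cluster labels.
                Arbiter.Activate(queue, Arbiter.Receive(true, rawData, points =>
                {
                    int[] labels = new int[points.Length];        // placeholder clustering
                    for (int i = 0; i < points.Length; i++)
                        labels[i] = points[i] < 0.5 ? 0 : 1;
                    clustered.Post(labels);
                }));

                // "Visualization service": in the real system this spawned a Google Maps view.
                Arbiter.Activate(queue, Arbiter.Receive(true, clustered, labels =>
                    Console.WriteLine("received " + labels.Length + " labelled points")));

                // "Data service": supplies the points (census data in the real system).
                rawData.Post(new[] { 0.1, 0.7, 0.3, 0.9 });
                Console.ReadLine();
            }
        }
    }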

42 DSS Service Measurements
Timing of the HP Opteron multicore machine as a function of the number of simultaneous two-way service messages processed (November 2006 DSS release)
Measurements of Axis 2 show about 500 microseconds – DSS is 10 times better

43 The clustering algorithm anneals by decreasing the distance scale and gradually finds more clusters as the resolution improves
Here we see the cluster count increase from 10 to 30 as the algorithm progresses

44-49 (no transcript text for these slides)

