
1 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO Performance of Applications Using Dual-Rail InfiniBand 3D Torus Network on the Gordon Supercomputer Dongju Choi, Glenn Lockwood, Robert Sinkovits, Mahidhar Tatineni San Diego Supercomputer Center University of California, San Diego

2 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO Background SDSC's data-intensive supercomputer Gordon: 1,024 dual-socket Intel Sandy Bridge nodes, each with 64 GB of DDR3-1333 memory; 16 cores per node and 16 nodes (256 cores) per switch; large I/O nodes and local/global SSD storage. The dual-rail QDR InfiniBand network supports I/O and compute communication on separate rails, and the rails can also be scheduled for use in computation. We are interested in how switch-to-switch communication oversubscription and switch/node topology affect application performance.

3 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO Gordon System Architecture [Figures: the 3-D torus of switches on Gordon; the subrack-level network architecture on Gordon]

4 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO MVAPICH2 MPI Implementation MVAPICH2 versions 1.9 and 2.0 are currently available on the Gordon system. They give full control of dual-rail usage at the task level via user-settable environment variables: MV2_NUM_HCAS=2 and MV2_IBA_HCA=mlx4_0:mlx4_1 select both HCAs (rails); MV2_RAIL_SHARING_LARGE_MSG_THRESHOLD=8000 sets the message size above which messages are striped across the rails (it can be set as low as 8 KB); MV2_SM_SCHEDULING=ROUND_ROBIN explicitly distributes message traffic over the rails in round-robin fashion.
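The following is a minimal sketch, not part of MVAPICH2 or the authors' job scripts: it simply prints, from rank 0, the dual-rail environment variables named above so a job can verify the rail configuration it actually received. The variable names come from this slide; the program itself is purely illustrative.

```c
/* Sketch: verify the MVAPICH2 dual-rail settings seen by an MPI job.
 * Only the environment-variable names are taken from the slide; the
 * program structure is an illustrative assumption. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        const char *vars[] = {
            "MV2_NUM_HCAS",                         /* e.g. 2             */
            "MV2_IBA_HCA",                          /* e.g. mlx4_0:mlx4_1 */
            "MV2_RAIL_SHARING_LARGE_MSG_THRESHOLD", /* e.g. 8000          */
            "MV2_SM_SCHEDULING"                     /* e.g. ROUND_ROBIN   */
        };
        for (size_t i = 0; i < sizeof vars / sizeof vars[0]; i++) {
            const char *val = getenv(vars[i]);
            printf("%-40s = %s\n", vars[i], val ? val : "(not set)");
        }
    }
    MPI_Finalize();
    return 0;
}
```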

5 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO OSU Micro-Benchmarks Compare the performance of single- and dual-rail QDR InfiniBand with FDR InfiniBand, and evaluate the impact of the rail-sharing, scheduling, and threshold parameters. Tests performed: bandwidth tests and latency tests.
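For reference, here is a rough sketch of the osu_bw-style bandwidth measurement pattern (a window of non-blocking sends followed by a short acknowledgement). It is a simplified illustration, not the OSU benchmark source; WINDOW, ITERS, and the message-size range are assumed values.

```c
/* Simplified osu_bw-style bandwidth test between ranks 0 and 1. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define WINDOW 64    /* outstanding sends per iteration (assumed) */
#define ITERS  100   /* iterations per message size (assumed) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (long size = 1; size <= (1 << 22); size *= 2) {
        char *buf = malloc((size_t)size);
        MPI_Request req[WINDOW];

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int it = 0; it < ITERS; it++) {
            if (rank == 0) {
                /* Post a window of sends, then wait for a 1-byte ack. */
                for (int w = 0; w < WINDOW; w++)
                    MPI_Isend(buf, (int)size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req[w]);
                MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
                MPI_Recv(buf, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                for (int w = 0; w < WINDOW; w++)
                    MPI_Irecv(buf, (int)size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req[w]);
                MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
                MPI_Send(buf, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("%8ld bytes  %10.2f MB/s\n", size,
                   (double)size * WINDOW * ITERS / (t1 - t0) / 1.0e6);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}
```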

6 OSU Bandwidth Test Results for Single-Rail QDR, FDR, and Dual-Rail QDR Network Configurations
- Single-rail FDR performance is much better than single-rail QDR for message sizes larger than 4 KB.
- Dual-rail QDR performance exceeds FDR performance for message sizes greater than 32 KB.
- FDR shows better performance between 4 KB and 32 KB because, with the default rail-sharing threshold, dual-rail striping is not yet active in that range.

7 OSU Bandwidth Test Performance with MV2_RAIL_SHARING_LARGE_MSG_THRESHOLD=8K
- Lowering the rail-sharing threshold to 8 KB closes the gap between dual-rail QDR and FDR down to 8 KB message sizes.

8 OSU Bandwidth Test Performance with MV2_SM_SCHEDULING=ROUND_ROBIN
- With the round-robin option, messages are explicitly scheduled to communicate over the different rails.

9 OSU Latency Benchmark Results for QDR, Dual-Rail QDR with MVAPICH2 Defaults, and FDR
- There is no latency penalty at small message sizes (expected, since only one rail is active below the striping threshold).
- Above the striping threshold a minor increase in latency is observed, but the performance is still better than single-rail FDR.
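For context on how these latency numbers are obtained, here is a rough sketch of the osu_latency-style ping-pong pattern: half of the average round-trip time of a blocking send/receive pair is reported. This is a simplified illustration, not the OSU benchmark source; ITERS and the message-size range are assumed values.

```c
/* Simplified osu_latency-style ping-pong test between ranks 0 and 1. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS 1000   /* iterations per message size (assumed) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int size = 1; size <= (1 << 20); size *= 2) {
        char *buf = malloc((size_t)size);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int it = 0; it < ITERS; it++) {
            if (rank == 0) {
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)   /* one-way latency = half the round-trip time */
            printf("%8d bytes  %10.2f us\n", size,
                   (t1 - t0) * 1.0e6 / (2.0 * ITERS));
        free(buf);
    }
    MPI_Finalize();
    return 0;
}
```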

10 OSU Latency Benchmark Results for QDR, Dual-Rail QDR with the Round-Robin Option, and FDR
- Distributing messages across the HCAs with the round-robin option increases the latency at small message sizes.
- Even so, the latency results are still better than the FDR case.

11 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO Application Performance Benchmarks Applications: P3DFFT benchmark, LAMMPS water box benchmark, AMBER cellulose benchmark. Test configurations: single rail vs. dual rail; multi-switch runs with maximum switch hops = 1 or with no hop limit for the 512-core runs (two switches are involved).

12 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO P3DFFT Benchmark Parallel Three-Dimensional Fast Fourier Transforms. Used for studies of turbulence, climatology, astrophysics, and materials science. Performance depends strongly on the available bandwidth, as the main communication component is driven by transposes of large arrays (alltoallv); a sketch of this communication pattern follows.
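To make the communication pattern concrete, here is a minimal MPI_Alltoallv sketch of the kind of transpose exchange that dominates P3DFFT. It is not P3DFFT source code; the uniform chunk size and buffer setup are illustrative assumptions.

```c
/* Sketch of a transpose-like all-to-all exchange (uniform counts for
 * simplicity). Each rank sends a slab of data to every other rank, as
 * in the array transposes that dominate P3DFFT communication. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nproc;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    const int chunk = 1 << 15;  /* doubles exchanged per rank pair (assumed) */
    double *sendbuf = malloc((size_t)chunk * nproc * sizeof *sendbuf);
    double *recvbuf = malloc((size_t)chunk * nproc * sizeof *recvbuf);
    int *scounts = malloc(nproc * sizeof *scounts);
    int *sdispls = malloc(nproc * sizeof *sdispls);
    int *rcounts = malloc(nproc * sizeof *rcounts);
    int *rdispls = malloc(nproc * sizeof *rdispls);

    for (int p = 0; p < nproc; p++) {
        scounts[p] = rcounts[p] = chunk;
        sdispls[p] = rdispls[p] = p * chunk;
    }
    for (long i = 0; i < (long)chunk * nproc; i++)
        sendbuf[i] = (double)rank;

    /* The rail-striping / round-robin settings apply to the underlying
     * point-to-point transfers of this collective. */
    MPI_Alltoallv(sendbuf, scounts, sdispls, MPI_DOUBLE,
                  recvbuf, rcounts, rdispls, MPI_DOUBLE, MPI_COMM_WORLD);

    free(sendbuf); free(recvbuf);
    free(scounts); free(sdispls); free(rcounts); free(rdispls);
    MPI_Finalize();
    return 0;
}
```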

13 Simulation Results for the P3DFFT Benchmark with 256 Cores: QDR vs. Dual-Rail QDR

Run #   QDR Wallclock Time (s)   Dual-Rail QDR Wallclock Time (s)
1       992                      761
2       985                      760
3       991                      766
4       993                      759

- Dual-rail runs are consistently faster than the single-rail runs, with an average performance gain of 23%.

14 Communication and Compute Time Breakdown for 256-Core, Single/Dual-Rail QDR P3DFFT Runs

Single Rail Runs
Run #   Total Time (s)   Comm. Time (s)   Compute Time (s)
1       992              539              453
2       985              535              450
3       991              539              452
4       993              543              450

Dual Rail Runs
Run #   Total Time (s)   Comm. Time (s)   Compute Time (s)
1       761              302              459
2       760              301              459
3       766              308              458
4       759              300              459

- The compute part is nearly identical in both sets of runs.
- The performance improvement is almost entirely in the communication part of the code.
- This shows that dual rail boosts the alltoallv performance and consequently speeds up the overall calculation.

15 Communication and Compute Time Breakdown for 512-Core, Single/Dual-Rail QDR P3DFFT Runs, Maximum Switch Hops = 1

Single Rail Runs
Run #   Total Time (s)   Comm. Time (s)   Compute Time (s)
1       802              592              210
2       802              592              210
3       804              594              210
4       803              592              211

Dual Rail Runs
Run #   Total Time (s)   Comm. Time (s)   Compute Time (s)
1       537              322              215
2       538              322              216
3       538              322              216
4       538              322              216

- Shows similar dual-rail benefits.
- The run spans fewer switch-to-switch links, reducing the likelihood of oversubscription from other jobs.
- However, it can also increase the likelihood of oversubscription because fewer switch connections are available to the job.

16 P3DFFT Benchmark with 512 Cores, Single-Rail QDR, No Switch Hop Restriction

Run #   Total Time (s)   Comm. Time (s)   Compute Time (s)
1       717              506              211
2       732              525              207
3       789              580              209
4       726              518              208
5       825              615              210
6       697              488              209

- In the better cases, oversubscription is mitigated by the topology of the run and performance is nearly 15% better than the single-hop case. However, as the spread in the results shows, a different topology can also lead to lower performance if the node distribution is not optimal (through oversubscription by the job itself or by other jobs).

17 P3DFFT Benchmark with 512 Cores, Single-Rail QDR, No Switch Hop Restriction (continued)

Run #   Total Time (s)   Comm. Time (s)   Compute Time (s)
1..3    -                -                -
4       726              518              208
5       825              615              210
6       697              488              209

- Spreading the computation over several switches lowers the bandwidth requirement on any given set of switch-to-switch links.
- This is bad for latency-bound codes (given the extra switch hops) but can benefit bandwidth-sensitive codes, depending on the topology of the run.
- Nukada et al. use dynamic link assignment to minimize congestion and obtain better performance in the dual-rail case.

Nukada, A., Sato, K., and Matsuoka, S. 2012. Scalable multi-GPU 3-D FFT for TSUBAME 2.0 supercomputer. In Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis (SC '12). IEEE Computer Society Press, Los Alamitos, CA, USA, Article 44, 10 pages.

18 Communication and Compute Time Breakdown for 1024-Core P3DFFT Runs

Single Rail Runs
Run #   Total Time (s)   Comm. Time (s)   Compute Time (s)
1       404              307              97
2       408              310              98

Dual Rail Runs
Run #   Total Time (s)   Comm. Time (s)   Compute Time (s)
1       332              232              100
2       325              226              99

- No switch hop restrictions are placed on these runs.
- The communication time is greatly improved in the dual-rail cases, while the compute time is nearly identical in all the runs.

19 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO LAMMPS Water Box Benchmark Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a widely used classical molecular dynamics code. The input contains 12,000 water molecules (36,000 atoms), and the simulation is run for 20 picoseconds.

20 LAMMPS Water Box Benchmark with Single/Dual-Rail QDR and 256 Cores

Run #   QDR Wallclock Time (s)   Dual-Rail QDR Wallclock Time (s)
1       57                       46
2       57                       46
3       58                       46
4       57                       46

- Dual-rail runs show better performance than the single-rail runs, mitigating the communication overhead, with an average improvement of 32% in wallclock time.

21 LAMMPS Water Box Benchmark with Single/Dual-Rail QDR and 512 Cores

Run #   Single-Rail QDR, MAX_HOP=1 (s)   Single-Rail QDR, No Hop Limit (s)   Dual-Rail QDR (s)
1       69                               71                                  47
2       69                               70                                  47
3       70                               281                                 48
4       69                               450                                 47

- The application is not scaling well because of the larger communication overhead (a consequence of the finer domain decomposition at this core count).
- The LAMMPS benchmark is very sensitive to topology and shows large variations if the maximum number of switch hops is not restricted.

22 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO AMBER Cellulose Benchmark Amber is a package of programs for molecular dynamics simulations of proteins and nucleic acids. 408,609 atoms are used for the tests.

23 Amber Cellulose Benchmark with Single/Dual-Rail QDR and 256 Cores

Run #   Single-Rail QDR Wallclock Time (s)   Dual-Rail QDR Wallclock Time (s)
1       218                                  212
2       219                                  213
3       218                                  212
4       219                                  212

- The communication overhead is low and the dual-rail benefit is minor (<3%).

24 Amber Cellulose Benchmark with Single/Dual-Rail QDR and 512 Cores

Run #   Single-Rail QDR, MAX_HOP=1 (s)   Single-Rail QDR, No Hop Limit (s)   Dual-Rail QDR (s)
1       204                              332                                 168
2       202                              331                                 168
3       202                              396                                 168
4       202                              373                                 167

- There is a modest benefit (<5%) in the single-rail QDR runs.
- Communication overhead increases with the core count, leading to the drop-off in scaling; this can be mitigated with dual-rail QDR.
- Dual-rail QDR performance is better by 17%.

25 Amber Cellulose Benchmark with Single/Dual-Rail QDR and 512 Cores (continued)

Run #   Single-Rail QDR, MAX_HOP=1 (s)   Single-Rail QDR, No Hop Limit (s)   Dual-Rail QDR (s)
1       204                              332                                 168
2       202                              331                                 168
3       202                              396                                 168
4       202                              373                                 167

- Dual rail enables the benchmark to scale to a higher core count.
- The runs without a hop limit show sensitivity to the topology, due to the larger number of switch hops and possible contention from other jobs.

26 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO Summary The aggregate bandwidth obtained with dual-rail QDR exceeds FDR performance. The applications show performance benefits from dual-rail QDR configurations. Gordon's 3-D torus of switches leads to variability in performance due to oversubscription and topology effects. The switch topology of a run can be configured to mitigate the link-oversubscription bottleneck.

27 SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO Summary The performance improvement also varies with the degree of communication overhead: benchmark cases with larger communication fractions (relative to overall run time) show more improvement with dual-rail QDR configurations. Compute time scaled with the core count in both single- and dual-rail configurations for the benchmarked applications LAMMPS and Amber.

28 Acknowledgements This work was supported by NSF grant OCI #0910847, "Gordon: A Data Intensive Supercomputer."

