
1 Multi-core and Network Aware MPI Topology Functions. Mohammad J. Rashti, Jonathan Green, Pavan Balaji, Ahmad Afsahi, and William D. Gropp. Department of Electrical and Computer Engineering, Queen’s University; Mathematics and Computer Science, Argonne National Laboratory; Department of Computer Science, University of Illinois at Urbana-Champaign.

2 Presentation Outline
- Introduction
- Background and Motivation
- MPI Graph and Cartesian Topology Functions
- Related Work
- Design and Implementation of Topology Functions
- Experimental Framework and Performance Results
  - Micro-benchmark Results
  - Application Results
- Concluding Remarks and Future Work

3 Introduction
- MPI is the main standard for communication in HPC clusters.
- Scalability is the major concern for MPI on large-scale hierarchical systems.
- System topology awareness is essential for MPI scalability:
  - being aware of the performance implications of each architectural hierarchy of the machine, and
  - efficiently mapping processes to processor cores based on the application's communication pattern.
- Such functionality should be embedded in the MPI topology interface.

4 Background and Motivation
- MPI topology functions:
  - define the communication topology of the application (the logical process arrangement, or virtual topology), and
  - may reorder the processes to map them efficiently onto the system architecture (physical topology) for better performance.
- Virtual topology models:
  - Cartesian topology: multi-dimensional Cartesian arrangement
  - Graph topology: arbitrary graph arrangement
- Graph topology representation:
  - Non-distributed: easier to manage, less scalable
  - Distributed: new to the standard, more scalable

5 Background and Motivation (II)
- In practice, topology functions are mostly used only to construct the process arrangement (i.e., the virtual topology).
  - Most MPI applications do not use them for performance improvement.
- In addition, MPI implementations offer only trivial functionality for these functions:
  - they mainly construct the virtual topology, and
  - they do not reorder the ranks, so there is no performance improvement.
- This work designs topology functions with reordering capability:
  - designing the non-distributed API functions, and
  - supporting multi-hierarchy nodes and networks.

6 MPI Graph and Cartesian Topology Functions
- MPI defines a set of virtual topology definition functions for graph and Cartesian structures.
- The non-distributed functions MPI_Graph_create and MPI_Cart_create:
  - are collective calls that accept a virtual topology, and
  - return a new MPI communicator enclosing the desired topology.
- The input topology is in a non-distributed form:
  - all processes have a full view of the entire structure and pass the whole topology to the function.
- If the user opts for reordering, the function may reorder the ranks for an efficient process-to-core mapping.

7 MPI Graph and Cartesian Topology Functions (II)
MPI_Cart_create(comm_old, ndims, dims, periods, reorder, comm_cart)
- comm_old [in]: input communicator without topology (handle)
- ndims [in]: number of dimensions of the Cartesian grid (integer)
- dims [in]: integer array of size ndims specifying the number of processes in each dimension
- periods [in]: logical array of size ndims specifying whether the grid is periodic (true) or not (false) in each dimension
- reorder [in]: ranking may be reordered (true) or not (false) (logical)
- comm_cart [out]: communicator with Cartesian topology (handle)

Example: a 4x2 2D torus of ranks 0-7
  Dimension | #Processes
  1         | 4
  2         | 2
  ndims = 2, dims = {4, 2}, periods = {1, 0}
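
A minimal C sketch of this call for the 4x2 example above (not from the slides; run with 8 MPI processes, and note that reorder = 1 allows the library to remap ranks onto cores and nodes):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int dims[2]    = {4, 2};   /* 4 processes along dim 0, 2 along dim 1 */
    int periods[2] = {1, 0};   /* periodic in dim 0 only */
    MPI_Comm comm_cart;

    /* reorder = 1: the library may permute ranks for a better mapping */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &comm_cart);

    if (comm_cart != MPI_COMM_NULL) {
        int rank, coords[2];
        MPI_Comm_rank(comm_cart, &rank);
        MPI_Cart_coords(comm_cart, rank, 2, coords);
        printf("rank %d -> coords (%d,%d)\n", rank, coords[0], coords[1]);
        MPI_Comm_free(&comm_cart);
    }
    MPI_Finalize();
    return 0;
}
```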

8 MPI Graph and Cartesian Topology Functions (III)
MPI_Graph_create(comm_old, nnodes, index, edges, reorder, comm_graph)
- comm_old [in]: input communicator without topology (handle)
- nnodes [in]: number of nodes in the graph (integer)
- index [in]: array of integers describing the node degrees (cumulative)
- edges [in]: array of integers describing the graph edges
- reorder [in]: ranking may be reordered (true) or not (false) (logical)
- comm_graph [out]: communicator with graph topology added (handle)

Example: a 4-process graph
  Process | Neighbors
  0       | 1, 3
  1       | 0
  2       | 3
  3       | 0, 2
  nnodes = 4, index = {2, 3, 4, 6}, edges = {1, 3, 0, 3, 0, 2}
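
The same example expressed as a hedged C sketch (not from the slides; run with at least 4 processes, and ranks beyond the 4 graph nodes receive MPI_COMM_NULL):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* index[i] = cumulative neighbor count up to and including node i */
    int index[4] = {2, 3, 4, 6};
    /* neighbor lists, concatenated: 0 -> {1,3}, 1 -> {0}, 2 -> {3}, 3 -> {0,2} */
    int edges[6] = {1, 3, 0, 3, 0, 2};
    MPI_Comm comm_graph;

    MPI_Graph_create(MPI_COMM_WORLD, 4, index, edges, 1 /* reorder */, &comm_graph);

    if (comm_graph != MPI_COMM_NULL) {
        int rank, nneighbors;
        MPI_Comm_rank(comm_graph, &rank);
        MPI_Graph_neighbors_count(comm_graph, rank, &nneighbors);
        printf("rank %d has %d neighbors\n", rank, nneighbors);
        MPI_Comm_free(&comm_graph);
    }
    MPI_Finalize();
    return 0;
}
```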

9 Presentation Outline (repeated from slide 2)

10 Related Work (I)
- Hatazaki and Träff worked on topology mapping using graph-embedding algorithms (Euro PVM/MPI 1998, SC 2002).
- Träff et al. proposed extending the MPI-1 topology interface (HIPS 2003, Euro PVM/MPI 2006):
  - to support weighted-edge topologies and dynamic process reordering, and
  - to provide architectural clues to the applications for a better mapping.
- The MPI Forum introduced distributed topology functionality in MPI-2.2 (2009).
- Hoefler et al. proposed guidelines for an efficient implementation of the distributed topology functionality (CCPE 2010).

11 Related Work (II)
- Mercier et al. studied efficient process-to-core mapping (Euro PVM/MPI 2009, EuroPar 2010):
  - using external libraries for node-architecture discovery and graph mapping, and
  - using weighted graphs and/or trees, outside the MPI topology interface.
- How is our work different from the related work? It:
  - supports a physical topology spanning both the nodes and the network,
  - uses edge replication to support weighted edges in virtual topology graphs, and
  - integrates the above functionality into the MPI non-distributed topology interface.

12 Presentation Outline (repeated from slide 2)

13 Design of MPI Topology Functions (I)
- Both the Cartesian and the graph interface are treated as graphs at the underlying layer:
  - a Cartesian topology is internally copied to a graph topology.
- Virtual topology graph:
  - Vertices: MPI processes
  - Edges: existence, or significance, of communication between any two processes
  - Significance of communication: normalized total communication volume between a pair of processes, used as the edge weight
  - Edge replication is used to represent graph edge weights (recall that the MPI non-distributed interface does not support weighted edges); see the sketch below.
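
A hedged sketch of the edge-replication idea (not the authors' code; the helper name and dynamic-array handling are illustrative): an edge whose normalized weight is w simply appears w times in the non-distributed edges array.

```c
#include <stdlib.h>

/* Build MPI_Graph_create-style index/edges arrays in which an edge of
 * weight w from process u to process v appears w times in u's list.
 * weights[u][v] holds the normalized communication volume (0..10). */
void build_replicated_graph(int nnodes, int **weights,
                            int **index_out, int **edges_out, int *nedges_out)
{
    int *index = malloc(nnodes * sizeof(int));
    int cap = 16, count = 0;
    int *edges = malloc(cap * sizeof(int));

    for (int u = 0; u < nnodes; u++) {
        for (int v = 0; v < nnodes; v++) {
            for (int w = 0; w < weights[u][v]; w++) {
                if (count == cap) {                 /* grow the edge list */
                    cap *= 2;
                    edges = realloc(edges, cap * sizeof(int));
                }
                edges[count++] = v;                 /* replicate edge u-v */
            }
        }
        index[u] = count;  /* cumulative degree, as MPI_Graph_create expects */
    }
    *index_out  = index;
    *edges_out  = edges;
    *nedges_out = count;
}
```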

14 Design of MPI Topology Functions (II)
- Physical topology graph: an integrated node and network architecture.
- Vertices: architectural components such as
  - network nodes,
  - cores, and
  - caches.
- Edges: communication links between the components.
- Edge weights: communication performance between components (a concrete weight scale is sketched below).
  - Processor cores: closer cores have a higher edge weight.
  - Network nodes: closer nodes have a higher edge weight.
  - The farthest on-node cores still get a higher weight than the closest network nodes.
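
One way to make this ordering concrete (all constants below are invented for illustration; the slides do not give the actual values used):

```c
/* Illustrative edge-weight scale obeying the rules above: closer
 * components get larger weights, and the smallest on-node weight is
 * still larger than the largest inter-node weight. */
enum phys_edge_weight {
    W_SHARED_CACHE = 40,  /* cores sharing a cache                        */
    W_SAME_SOCKET  = 30,  /* cores on the same processor, no shared cache */
    W_SAME_NODE    = 20,  /* cores on different sockets of one node       */
    W_SAME_SWITCH  = 10,  /* nodes under the same leaf switch             */
    W_CROSS_SWITCH =  1   /* nodes separated by multiple switches         */
};
```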

15 Physical Topology Distance Example
- d1 has the highest load value in the graph.
- The path between N2 and N3 (d4) has the lowest load value, indicating the lowest-performance path.
- d1 > d2 > d3 > d4 = 1
(The accompanying distance figure is not reproduced in this transcript.)

16 Tools for Implementation of Topology Functions
- The hwloc library, for extracting the node architecture:
  - a tree architecture, with nodes at the top level and cores at the leaves;
  - cores with lower-level parents (such as caches) are considered to have higher communication performance.
- The IB subnet manager (ibtracert), for extracting network distances:
  - the discovery is done offline, before the application runs, producing a pre-discovered network-distance file.
- The Scotch library, for mapping virtual to physical topologies:
  - source and target graphs are weighted and undirected;
  - it uses recursive bi-partitioning for graph mapping.
A minimal hwloc sketch is shown below.
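
A minimal sketch of what node-architecture discovery with hwloc looks like (a generic hwloc usage example, not the authors' implementation; compile with -lhwloc):

```c
#include <hwloc.h>
#include <stdio.h>

int main(void) {
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);        /* discover the local node architecture */

    int ncores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
    printf("cores on this node: %d\n", ncores);

    /* Walk from core 0 up to the root of the tree; the deeper the shared
     * ancestor of two cores (e.g., a common cache), the better their
     * expected communication performance. */
    hwloc_obj_t obj = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE, 0);
    while (obj) {
        char type[64];
        hwloc_obj_type_snprintf(type, sizeof type, obj, 0);
        printf("  ancestor level: %s\n", type);
        obj = obj->parent;
    }

    hwloc_topology_destroy(topo);
    return 0;
}
```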

17 Implementation of Topology Functions
- Communication-pattern profiling:
  - probes are placed inside the MPI library to profile the application's communication pattern;
  - the pairwise communication volume is normalized to the range 0..10, with 0 meaning no edge between the two vertices.
- All processes perform node-architecture discovery; one process performs the network discovery for all.
- The physical-architecture view is then made uniform across the processes (using an allgather), as sketched below.
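
A hedged sketch of that exchange step (the struct layout and helper name are invented for illustration; only the use of MPI_Allgather reflects the slide):

```c
#include <mpi.h>

/* Per-rank summary of where the process sits in the machine.
 * The fields are illustrative placeholders. */
typedef struct {
    int node_id;    /* network node hosting this rank */
    int socket_id;  /* socket/NUMA domain of its core */
    int core_id;    /* core the rank is bound to      */
} placement_t;

/* Every rank contributes its own placement and receives everyone else's,
 * so all processes end up with the same view of the physical topology. */
static void share_placement(MPI_Comm comm, const placement_t *mine,
                            placement_t *all /* length = communicator size */)
{
    /* Sending raw bytes is acceptable here: all fields are ints and the
     * cluster nodes are assumed homogeneous. */
    MPI_Allgather((void *)mine, sizeof *mine, MPI_BYTE,
                  all, sizeof *mine, MPI_BYTE, comm);
}
```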

18 Flow of Functionalities (diagram)
- Input: the virtual topology graph, obtained from application profiling. Cartesian topology initialization first creates an equivalent graph topology; graph topology initialization uses the input graph directly.
- No reorder: the trivial Cartesian/graph topology creation path (existing MPICH functions) is taken.
- Reorder:
  1. Creating the physical topology, by extracting and merging the node and network architectures (using HWLOC and the IB subnet manager):
     1. Initialize the Scotch architecture.
     2. Extract the network topology (if required).
     3. Extract the node topology.
     4. Merge the node and network topologies.
     5. Distribute the merged topology among the processes (using allgather).
     6. Build the Scotch physical topology.
  2. Graph mapping (SCOTCH), by constructing a Scotch weighted virtual topology from the input graph and mapping it to the extracted physical topology:
     1. Initialize and build the Scotch virtual topology graph.
     2. Initialize the mapping algorithm's strategy in Scotch.
     3. Map the virtual topology graph to the extracted physical topology.
  3. Constructing a new, reordered communicator using the Scotch mapping of the previous step, and creating the new MPI communicator.
Legend (from the diagram): existing MPICH function; new function added to MPICH; external library utilized; calling a function; following a function in the code.

19 Presentation Outline (repeated from slide 2)

20 Experimental Framework
- Cluster A (4 servers, 32 cores total)
  - Hosts: 2-way quad-core AMD Opteron 2350 servers, with a 2MB shared L3 cache per processor and 8GB RAM
  - Network: QDR InfiniBand, 3 switches at 2 levels
  - Software: Fedora 12, kernel 2.6.27, MVAPICH2 1.5, OFED 1.5.2
- Cluster B (16 servers, 192 cores total)
  - Hosts: 2-way hexa-core Intel Xeon X5670 servers, with a 12MB multi-level cache per processor and 24GB RAM
  - Network: QDR InfiniBand, 4 switches at 2 levels
  - Software: RHEL 5, kernel 2.6.18.94, MVAPICH2 1.5, OFED 1.5.2

21 MPI Applications – Some Statistics

  MPI Application | Communication Primitives
  NPB CG          | MPI Send/Irecv: ~100% of the calls; MPI Barrier: ~0% of the calls
  NPB MG          | MPI Send/Irecv: 98.5% of the calls, ~100% of the volume; MPI Allreduce, Reduce, Barrier, Bcast: 1.5% of the calls, ~0.002% of the volume
  LAMMPS          | MPI Send/Recv/Irecv/Sendrecv: 95% of the calls, 99% of the volume; MPI Allreduce, Reduce, Barrier, Bcast, Scatter, Allgather, Allgatherv: 5% of the calls, 1% of the volume

22 Exchange Micro-benchmark: Topology-aware Mapping Improvement over Block Mapping (%) (results chart)

23 Exchange Micro-benchmark: Topology-aware Mapping Improvement over Block Mapping (%) (results chart)

24 Collective Micro-benchmark: Topology-aware Mapping Improvement over Block Mapping (%) (results chart)

25 Applications: Topology-aware Mapping Improvement over Cyclic Mapping (%), 32-core cluster A (results chart)

26 Applications: Topology-aware Mapping Improvement over Block Mapping (%), 32-core cluster A (results chart)

27 Applications: Topology-aware Mapping Improvement over Cyclic Mapping (%), 128-core cluster B (results chart)

28 Applications: Topology-aware Mapping Improvement over Block Mapping (%), 128-core cluster B (results chart)

29 Communicator Creation Time in MPI_Graph_create for LAMMPS

  System    | # Processes | Trivial (ms) | Non-weighted Graph (ms) | Weighted Graph (ms) | Network-aware Graph (ms)
  Cluster A | 8           | 0.3          | 7.3                     | 7.3                 | 7.9
  Cluster A | 16          | 0.3          | 7.6                     | 7.7                 | 8.1
  Cluster A | 32          | 0.5          | 8.6                     | 8.7                 | 9
  Cluster B | 16          | 0.9          | 5.7                     | 5.9                 | 6.6
  Cluster B | 32          | 1.2          | 6.4                     | 6.4                 | 7.2
  Cluster B | 64          | 2.5          | 9                       | 9.4                 | 10.1
  Cluster B | 128         | 4.7          | 18.8                    | 18.9                | 19.4

30 Presentation Outline (repeated from slide 2)

31 Concluding Remarks
- We presented the design and implementation of the MPI non-distributed graph and Cartesian topology functions in MVAPICH2, for multi-core nodes connected through multi-level InfiniBand networks.
- The micro-benchmarks showed that the effect of reordering process ranks can be significant, and that when the communication is heavier on one dimension, the benefit of using weighted and network-aware graphs (instead of non-weighted graphs) is considerable.
- We also modified MPI applications to use MPI_Graph_create. The evaluation showed that MPI applications can benefit from a topology-aware MPI_Graph_create.

32 Future Work
- We intend to evaluate the effect of topology awareness on other MPI applications.
- We would also like to run our applications on a larger testbed.
- We would like to design a more general communication cost/weight model for graph mapping, and to use other mapping libraries.
- We also intend to design and implement the MPI distributed topology functions in a more distributed, scalable fashion.

33 Acknowledgment

34 Thank you! Contacts:
- Mohammad Javad Rashti: mohammad.rashti@queensu.ca
- Jonathan Green: jonathan.green@queensu.ca
- Pavan Balaji: balaji@mcs.anl.gov
- Ahmad Afsahi: ahmad.afsahi@queensu.ca
- William D. Gropp: wgropp@illinois.edu

35 Backup Slides

36 Flow of function calls in the MVAPICH code (backup diagram)
- Functions shown: MPI_Cart_create, MPI_Graph_create, MPIR_Topo_create, MPIR_Graph_create, MPIR_Cart_create_reorder, MPIR_Graph_create_reorder, MPIU_Get_scotch_arch, SCOTCH_Graph_build/map (Scotch mapping), MPIR_Comm_copy (No Reorder path), MPIR_Comm_copy_reorder (Reorder path).
- External libraries/tools: SCOTCH, HWLOC, IB Subnet manager.
- Legend: existing MPICH function; new function added to MPICH; external library utilized; calling a function; following a function in the code.

