

2 xE.ppt (253 slides, 13 MB); ndix_f.pdf (p. 118, 2 MB); TA:

3 References
–T. M. Pinkston and J. Duato: Interconnection Networks, Appendix E in Computer Architecture: A Quantitative Approach, 4th Edition, Morgan Kaufmann publishers (2006); 5th Edition, Morgan Kaufmann publishers (2011).
–J. Duato, S. Yalamanchili, and L. Ni: Interconnection Networks: An Engineering Approach, 2nd Edition, Morgan Kaufmann publishers (2003); 1st Edition (1996).
–W. J. Dally and B. Towles: Principles and Practices of Interconnection Networks, Morgan Kaufmann publishers (2003).

4 What is an Interconnection Network? It is a programmable system that transports data between terminals, such as processors and memories. It is programmable in the sense that it makes different connections at different points in time. It is a system because it is composed of many components: buffers, channels, switches, and controls that work together to deliver data.

5 Interconnection Network (1/2)
[figure: processor–memory (P–M) pairs connected by an interconnection network — a multicomputer]

6 Interconnection Network (2/2)
[figure: processors (P) on one side of the interconnection network and memories (M) on the other — a UMA-type shared-memory multiprocessor, also called a dance-hall architecture]

7 Trend
Network performance is increasing along with processor performance, at a rate of about 50% per year. Communication is a limiting factor in the performance of many modern systems. Buses have been unable to keep up with the bandwidth demand, and point-to-point interconnection networks are rapidly taking over.

8 Computer Classifications: share of the TOP500 (%), June 2011 – June 2013
          2013/06  2012/06  2011/06
MPP         16.6     18.6     17.4
Cluster     83.4     81.4     82.2
Others       0.0      0.0      0.4

9 Examples of clusters
–Tianhe-2 (China, 2013): Processors: Intel Xeon E5-2692 12C 2.2 GHz ×2 ×16K; Accelerators: Xeon Phi 31S1P (57 cores) ×3 ×16K; Interconnect: TH Express-2 (proprietary), fat tree
–Tsubame 2.5 (Tokyo Tech., 2013): Processors: Xeon X5670 2.93 GHz ×2 ×1,408; Accelerators: NVIDIA Kepler K20x ×3 ×1,048; Interconnect: InfiniBand QDR (40 Gbps) ×2, fat tree

10 Examples of MPPs
–K computer @RIKEN (Fujitsu, 2011): Node: SPARC64 VIIIfx 2 GHz (16 GFlops × 8 cores); Topology: 6D mesh/3D torus Tofu interconnect; Scale: 80K nodes × 8 cores = 640K cores; Rmax: 10.51 PFlops, 7,890 kW
–Titan @ORNL (Cray XK7, 2012): Node: AMD Opteron 16C 2.2 GHz + NVIDIA K20x; Topology: 3D torus Gemini interconnect; Scale: 18,688 nodes (200 cabinets); Rmax: 27.11 PFlops, 8,209 kW

11 Other Networks of Supercomputers
–Sequoia / IBM Blue Gene/Q (2011): 5D torus, proprietary interconnect
–Pleiades / NASA (2011): partial 11D hypercube topology with IB QDR/DDR
–Red Sky / Sandia National Lab. (2010): 3D torus (12 bristled nodes) with IB QDR switches
–IBM Roadrunner (2009): fat tree with IB DDR
–Earth Simulator 2 / NEC SX-9E (2009): fat tree (64 GB/s/CPU, 8 CPUs/node, 160 nodes)
–IBM Blue Gene/L (2004): 3D torus, proprietary (64 × 32 × 32 = 64K nodes)

12 Architecture vs. software
–UMA (SMP): memory: shared; programming: OpenMP
–NUMA (MPP): memory: distributed (not shared); programming: MPI (Message Passing Interface)
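The two rows above can be contrasted in miniature. The sketch below uses Python threads, a lock, and queues as stand-ins for OpenMP and MPI — an illustration of the two programming models, not of the actual APIs.

```python
import threading
import queue

# --- Shared-memory style (UMA / OpenMP-like): threads update one counter ---
counter = 0
lock = threading.Lock()

def shared_worker():
    global counter
    with lock:              # critical section, like an OpenMP "critical" region
        counter += 1

threads = [threading.Thread(target=shared_worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)              # 4: every worker updated the same memory location

# --- Message-passing style (NUMA/MPP, MPI-like): no shared state ---
def mp_worker(inbox, outbox):
    x = inbox.get()         # receive, analogous to MPI_Recv
    outbox.put(x + 1)       # send,    analogous to MPI_Send

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=mp_worker, args=(inbox, outbox))
t.start()
inbox.put(41)
result = outbox.get()
t.join()
print(result)               # 42: data moved only via explicit messages
```

Note the structural difference: the shared-memory version needs a lock to protect the common counter, while the message-passing version has no shared data at all.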

13 Network Design (1/3)
–Performance: latency and throughput (bandwidth)
–Scalability: #processors vs. network, memory, and I/O bandwidth
–Incremental expandability: from small to maximum size
–Partitionability: the network may be partitioned among several users

14 Network Design (2/3)
–Simplicity: a simple design allows a higher clock frequency and is easier to use
–Distance span: a smaller system is preferred for noise, cable delay, etc.
–Physical constraints: packaging (pin count), wiring (wire length), and maintenance (power consumption) must meet physical limitations

15 Network Design (3/3)
–Reliability: fault tolerance, reliable communication, hot swap
–Expected workload: robust performance over a wide range of traffic conditions
–Cost: trade-offs between cost and performance

16 Classification of Interconnection Networks
Shared-Medium Networks
–Local area networks (Ethernet, token ring)
–Backplane bus (e.g. SUN Gigaplane)
Direct Networks (router-based)
–mesh, torus, hypercube, tree, etc.
Indirect Networks (switch-based)
Hybrid Networks

17 Shared-Medium Networks (LAN)
Arbitration is needed to determine mastership of the shared medium and resolve network access. The best-known protocol is carrier-sense multiple access with collision detection (CSMA/CD). Token bus and token ring instead pass a token; only the current token holder may access the bus/ring, which avoids the nondeterministic waiting time of CSMA/CD.
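The CSMA/CD behavior described above can be illustrated with a toy slot-based simulation — a sketch of the idea (collision, then random binary-exponential backoff), not the Ethernet standard's actual timing:

```python
import random

def csma_cd(num_stations=3, max_slots=1000, seed=1):
    """Toy CSMA/CD: stations transmitting in the same slot collide and retry
    after a random binary-exponential backoff. Returns slots until all sent."""
    rng = random.Random(seed)
    next_attempt = [0] * num_stations   # earliest slot of each station's next try
    collisions = [0] * num_stations     # per-station collision count
    done = [False] * num_stations
    for slot in range(max_slots):
        senders = [i for i in range(num_stations)
                   if not done[i] and next_attempt[i] <= slot]
        if len(senders) == 1:
            done[senders[0]] = True     # medium free: transmission succeeds
        elif len(senders) > 1:          # collision detected: everyone backs off
            for i in senders:
                collisions[i] += 1
                k = min(collisions[i], 10)   # truncated exponential backoff
                next_attempt[i] = slot + 1 + rng.randrange(2 ** k)
        if all(done):
            return slot + 1
    return None                         # did not converge within max_slots

print(csma_cd())  # slots needed for all 3 stations to transmit successfully
```

The nondeterminism the slide mentions is visible here: the completion time depends on the random backoff draws, which is exactly what token-based schemes avoid.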

18 Shared-Medium Networks (Backplane bus)
A backplane bus is commonly used to interconnect processor(s) and memory modules to provide an SMP (Symmetric Multiprocessor) architecture. It is realized by printed lines on a circuit board or by discrete wiring. Gigaplane in the SUN Enterprise x000 server (1996): 2.6 GB/s, 256 data bits, 42 address bits, 83.8 MHz clock.
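The Gigaplane bandwidth figure follows directly from the data-path width and clock rate; a quick sanity check:

```python
# Gigaplane parameters quoted above: 256-bit data path at 83.8 MHz,
# i.e. 32 bytes transferred per bus cycle.
data_bits = 256
clock_hz = 83.8e6
bytes_per_s = (data_bits // 8) * clock_hz
print(bytes_per_s / 1e9)  # ~2.68 GB/s, matching the ~2.6 GB/s quoted
```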

19 Direct (static) Networks
A direct network consists of a set of nodes, each directly connected to a subset of the other nodes in the network. Examples:
–2D mesh (Intel Paragon), 3D mesh (MIT J-Machine)
–2D torus (Fujitsu AP3000), 3D torus (Cray T3D, T3E)
–Hypercube (CM-1, CM-2, nCUBE)

20 Mesh topology
[figures: 2D mesh and 3D mesh; circles denote nodes]
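A mesh has no wrap-around links, so edge and corner nodes have fewer neighbors than interior ones. A minimal sketch of 2D-mesh adjacency (the function name is ours, for illustration):

```python
def mesh2d_neighbors(x, y, width, height):
    """Neighbors of node (x, y) in a width x height 2D mesh: the four
    axis-aligned candidates, minus any that fall off the boundary."""
    cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(i, j) for i, j in cand if 0 <= i < width and 0 <= j < height]

print(mesh2d_neighbors(0, 0, 4, 4))  # corner node: only 2 neighbors
print(mesh2d_neighbors(1, 2, 4, 4))  # interior node: full degree 4
```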

21 Torus topology
[figures: 2D torus (4-ary 2-cube) and 3D torus (3-ary 3-cube)]
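In a k-ary n-cube each node is an n-digit radix-k coordinate, connected to its ±1 neighbor (mod k) in every dimension; the wrap-around links make every node identical. A sketch:

```python
def torus_neighbors(coord, k):
    """Neighbors of a node in a k-ary n-cube (n = len(coord))."""
    out = []
    for d in range(len(coord)):
        for step in (-1, 1):
            nb = list(coord)
            nb[d] = (nb[d] + step) % k   # wrap-around link
            out.append(tuple(nb))
    return out

# 4-ary 2-cube (the 2D torus pictured above): every node has degree 4.
print(torus_neighbors((0, 0), 4))  # [(3, 0), (1, 0), (0, 3), (0, 1)]
```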

22 Hypercube (binary n-cube)
[figure: 4D hypercube (2-ary 4-cube)]
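Hypercube adjacency is just bit-flipping: flipping each of the n address bits yields one neighbor, so both the degree and the diameter equal n = log2 N. A one-line sketch:

```python
def hypercube_neighbors(node, n):
    """Neighbors of a node (0..2^n - 1) in a binary n-cube: one per flipped bit."""
    return [node ^ (1 << bit) for bit in range(n)]

print(hypercube_neighbors(0b0000, 4))  # [1, 2, 4, 8]
```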

23 Tree
[figures: binary tree, fat tree, X-tree]

24 Hierarchical topology (1/2)
[figures: pyramid (hierarchical 2D mesh), hierarchical ring]

25 Hierarchical topology (2/2)
[figures: cube-connected cycles, RDT (Recursive Diagonal Torus)]

26 Hypermesh (spanning-bus hypercube)
[figure: each dimension is spanned by a single bus or multiple buses]

27 Base-m n-cube (hyper-crossbar)
[figure: base-8 3-cube (Toshiba Prodigy); nodes labeled 000–777, with an 8×8 crossbar along each dimension]

28 Diameter and degrees (1/2)
–2D mesh: diameter 2(√N − 1); degree 4
–2D torus: diameter √N; degree 4
–3D torus: diameter (3/2)·N^(1/3); degree 6
–Binary n-cube: #nodes N = 2^n; diameter log2 N; degree log2 N
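The diameter entries above can be cross-checked by brute force. The sketch below (small sizes only) builds each topology as an adjacency list and measures the diameter directly with BFS from every node:

```python
from collections import deque

def diameter(adj):
    """Longest shortest path over all node pairs (BFS from every node)."""
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

def mesh2d(k):
    """k x k 2D mesh: no wrap-around links."""
    return {(x, y): [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < k and 0 <= y + dy < k]
            for x in range(k) for y in range(k)}

def torus2d(k):
    """k-ary 2-cube: mesh plus wrap-around links."""
    return {(x, y): [((x + dx) % k, (y + dy) % k)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            for x in range(k) for y in range(k)}

def hypercube(n):
    """2-ary n-cube: neighbors differ in exactly one address bit."""
    return {v: [v ^ (1 << b) for b in range(n)] for v in range(2 ** n)}

print(diameter(mesh2d(4)))     # 6 = 2(sqrt(N) - 1) for N = 16
print(diameter(torus2d(4)))    # 4 = sqrt(N)        for N = 16
print(diameter(hypercube(4)))  # 4 = log2 N         for N = 16
```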

29 Diameter and degrees (2/2)
–Base-m n-cube: #nodes N = m^n; diameter log_m N; degree log_m N
–CCC: #nodes N = n·2^n; diameter 3n/2; degree 3
–Binary tree: diameter 2 log2 N; degree 3
–Ring: diameter N/2; degree 2
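The base-m n-cube row can also be verified by direct search: along each dimension the m nodes sit on a crossbar (all-to-all), so one hop can fix one coordinate digit and the diameter is n = log_m N. A self-contained sketch (small sizes only):

```python
from collections import deque
from itertools import product

def base_m_ncube(m, n):
    """Hyper-crossbar adjacency: in each dimension, all m nodes sharing the
    other coordinates are mutually connected (an m x m crossbar)."""
    adj = {}
    for node in product(range(m), repeat=n):
        nbrs = []
        for d in range(n):
            for v in range(m):
                if v != node[d]:
                    nb = list(node)
                    nb[d] = v
                    nbrs.append(tuple(nb))
        adj[node] = nbrs
    return adj

def bfs_diameter(adj):
    """Longest shortest path, measured by BFS from every node."""
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

print(bfs_diameter(base_m_ncube(4, 2)))  # 2 = n = log_4 16
print(bfs_diameter(base_m_ncube(3, 3)))  # 3 = n = log_3 27
```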
