1 Lecture 23: Interconnection Networks
Topics: router microarchitecture, topologies
Final exam next Tuesday: same rules as the first midterm
Next semester: CS/EE 7810: Advanced Computer Arch, same time, similar topics but more in-depth treatment, project-intensive

2 Virtual Channel Flow Control
Incoming flits are placed in buffers
For a flit to jump to the next router, it must acquire three resources:
 A free virtual channel on its intended hop
   We know that a virtual channel is free when its tail flit goes through
 Free buffer entries for that virtual channel
   This is determined with credit-based or on/off management
 A free cycle on the physical channel
   Competition among the packets that share the physical channel
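
As a toy illustration (not part of the lecture), the three checks can be sketched as follows; all class, field, and function names here are hypothetical.

```python
# Hypothetical sketch of the three resource checks in virtual-channel flow control.
from dataclasses import dataclass

@dataclass
class VirtualChannel:
    allocated: bool = False        # held by a packet until its tail flit departs
    downstream_credits: int = 0    # free buffer slots for this VC at the next router

@dataclass
class OutputPort:
    vcs: list                      # virtual channels multiplexed on this physical link
    link_granted_this_cycle: bool = False

def head_flit_can_advance(port: OutputPort):
    """Return the index of a VC the head flit could acquire this cycle, else None."""
    for i, vc in enumerate(port.vcs):
        if (not vc.allocated                         # 1. a free virtual channel
                and vc.downstream_credits > 0        # 2. a free buffer entry for that VC
                and not port.link_granted_this_cycle):  # 3. a free cycle on the link
            return i
    return None

# Example: VC 0 is taken, VC 1 is free and has credits, so the head flit picks VC 1.
port = OutputPort(vcs=[VirtualChannel(allocated=True), VirtualChannel(downstream_credits=2)])
print(head_flit_can_advance(port))   # -> 1
```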

3 Buffer Management
Credit-based: keep track of the number of free buffers in the downstream node; the downstream node sends back signals to increment the count when a buffer is freed; need enough buffers to hide the round-trip latency
On/Off: the downstream node sends back a signal when its buffers are close to being full – reduces upstream signaling and counters, but can waste buffer space
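
A minimal sketch of the credit counter protocol described above, assuming one counter per downstream virtual channel; names are illustrative.

```python
# Toy model of credit-based buffer management (illustrative only).
class CreditCounter:
    def __init__(self, downstream_buffer_slots: int):
        # Upstream node's view of how many buffers are free downstream.
        self.credits = downstream_buffer_slots

    def try_send_flit(self) -> bool:
        """Upstream: send a flit only if a downstream buffer is known to be free."""
        if self.credits == 0:
            return False        # stall until a credit comes back
        self.credits -= 1       # a downstream buffer is now presumed occupied
        return True

    def on_credit_return(self) -> None:
        """Downstream freed a buffer and returned a credit upstream."""
        self.credits += 1

# Rule of thumb from the slide: to keep the link busy at one flit per cycle,
# provision at least round-trip-latency buffers (credits) per virtual channel.
```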

4 Router Functions
Crossbar, buffers, arbiters, VC state and allocation, buffer management, ALUs, control logic, routing
Typical on-chip network power breakdown:
 30% links
 30% buffers
 30% crossbar

5 Router Pipeline
Four typical stages:
 RC (routing computation): the head flit indicates the VC that it belongs to, the VC state is updated, the headers are examined, and the next output channel is computed (note: this is done for all head flits arriving on the various input channels)
 VA (virtual-channel allocation): the head flits compete for the available virtual channels on their computed output channels
 SA (switch allocation): a flit competes for access to its output physical channel
 ST (switch traversal): the flit is transmitted on the output channel
A head flit goes through all four stages; the other flits do nothing in the first two stages (this is an in-order pipeline and flits cannot jump ahead); a tail flit also de-allocates the VC

6 Router Pipeline
Four typical stages:
 RC (routing computation): compute the output channel
 VA (virtual-channel allocation): allocate a VC for the head flit
 SA (switch allocation): compete for the output physical channel
 ST (switch traversal): transfer data on the output physical channel

Cycle        1    2    3    4    5    6    7
Head flit    RC   VA   SA   ST
Body flit 1            --   SA   ST
Body flit 2                 --   SA   ST
Tail flit                        --   SA   ST

[Second diagram: the same pipeline with a stall, where a flit that fails switch allocation repeats SA in a later cycle and the flits behind it are delayed accordingly.]
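
The cycle-by-cycle table above can be reproduced with a short script; this is only a timing sketch under the simplifying assumptions that the head flit never stalls and each following flit does SA and ST one cycle behind the flit in front of it.

```python
# Print the per-cycle pipeline occupancy for a 4-flit packet in the
# baseline RC/VA/SA/ST pipeline (illustrative timing only; no contention).
def baseline_schedule():
    flits = ["Head flit", "Body flit 1", "Body flit 2", "Tail flit"]
    sched = {"Head flit": {1: "RC", 2: "VA", 3: "SA", 4: "ST"}}
    for i, name in enumerate(flits[1:], start=1):
        sa_cycle = 3 + i                      # one cycle behind the previous flit
        sched[name] = {sa_cycle - 1: "--", sa_cycle: "SA", sa_cycle + 1: "ST"}
    return flits, sched

def print_schedule(flits, sched, last_cycle=7):
    print("Cycle        " + "".join(f"{c:<5}" for c in range(1, last_cycle + 1)))
    for name in flits:
        row = "".join(f"{sched[name].get(c, ''):<5}" for c in range(1, last_cycle + 1))
        print(f"{name:<13}{row}")

if __name__ == "__main__":
    print_schedule(*baseline_schedule())
```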

7 Speculative Pipelines
Perform VA and SA in parallel
Note that SA only requires knowledge of the output physical channel, not the VC
If VA fails, the successfully allocated channel goes un-utilized

Cycle        1    2      3    4    5    6
Head flit    RC   VA+SA  ST
Body flit 1       --     SA   ST
Body flit 2              --   SA   ST
Tail flit                     --   SA   ST

Perform VA, SA, and ST in parallel (can cause collisions and re-tries)
Typically, VA is the critical path – can possibly perform SA and ST sequentially
Router pipeline latency is a greater bottleneck when there is little contention
When there is little contention, speculation will likely work well!
Single-stage pipeline? (RC, VA, SA, ST all in one cycle)
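
Under the same toy assumptions, the speculative design points differ only in which stages are merged into one cycle; a small sketch:

```python
# Illustrative stage sequences for the head flit under each design point.
# Stages grouped in the same inner list happen in the same cycle.
PIPELINES = {
    "baseline":             [["RC"], ["VA"], ["SA"], ["ST"]],  # 4 cycles per hop
    "speculative VA+SA":    [["RC"], ["VA", "SA"], ["ST"]],    # 3 cycles; retry if VA fails
    "speculative VA+SA+ST": [["RC"], ["VA", "SA", "ST"]],      # 2 cycles; collisions force retries
    "single stage":         [["RC", "VA", "SA", "ST"]],        # 1 cycle; long critical path
}

for name, stages in PIPELINES.items():
    print(f"{name:22s}: {len(stages)} cycle(s) per hop -> " +
          " | ".join("+".join(s) for s in stages))
```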

8 Example Intel Router
Source: Partha Kundu, “On-Die Interconnects for Next-Generation CMPs”, talk at On-Chip Interconnection Networks Workshop, Dec 2006

9 Example Intel Router
Source: Partha Kundu, “On-Die Interconnects for Next-Generation CMPs”, talk at On-Chip Interconnection Networks Workshop, Dec 2006
Used for a 6x6 mesh
16 B, > 3 GHz
Wormhole with VC flow control

10 Current Trends
Growing interest in eliminating the area/power overheads of router buffers; traffic levels are also relatively low, so virtual-channel buffered routed networks may be overkill
Option 1: use a bus for short distances (16 cores) and use a hierarchy of buses to travel long distances
Option 2: hot-potato or bufferless routing
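
A rough sketch of the hot-potato (deflection) idea in Option 2: every arriving flit must leave on some output the very next cycle, productively if possible, otherwise deflected. The port naming, priority rule, and omission of the local/ejection port below are all assumptions for illustration.

```python
# Toy deflection (hot-potato) output assignment for one router cycle.
# Assumes a mesh router with ports N, E, S, W, at most one flit per input,
# and a simple oldest-first priority given by list order.
def assign_outputs(flits, productive_ports):
    """flits: flit ids in priority order; productive_ports: flit id -> preferred ports.
    Every flit gets *some* free output (there is no buffering), deflecting if needed."""
    free = {"N", "E", "S", "W"}
    assignment = {}
    for f in flits:
        choice = next((p for p in productive_ports[f] if p in free), None)
        if choice is None:              # no productive port left: deflect
            choice = next(iter(free))   # take any remaining free port
        assignment[f] = choice
        free.remove(choice)
    return assignment

# Two flits both want to go East; the lower-priority one is deflected elsewhere.
print(assign_outputs(["a", "b"], {"a": ["E"], "b": ["E"]}))
```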

11 Centralized Crossbar Switch
[Figure: a centralized crossbar connecting nodes P0 through P7]

12 Crossbar Properties
Assuming each node has one input and one output, a crossbar can provide maximum bandwidth: N messages can be sent as long as there are N unique sources and N unique destinations
Maximum overhead: WN² internal switches, where W is the data width and N is the number of nodes
To reduce overhead, use smaller switches as building blocks – trade off overhead for lower effective bandwidth

13 Switch with Omega Network
[Figure: an omega network of 2x2 switches connecting nodes P0 through P7]

14 Omega Network Properties
The switch complexity is now O(N log N)
Contention increases: P0 → P5 and P1 → P7 cannot happen concurrently (this was possible in a crossbar)
To deal with contention, can increase the number of levels (redundant paths) – by mirroring the network, we can route from P0 to P5 via N intermediate nodes, while increasing complexity by a factor of 2
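
Destination-tag routing through an omega network can be sketched as below. This assumes the canonical construction (a perfect shuffle before each of the log2(N) stages of 2x2 switches); the figure used in the course may be drawn or wired differently.

```python
# Destination-tag routing through an omega network of 2x2 switches (sketch).
def omega_route(src: int, dst: int, n: int):
    """Return the node-position label reached after each shuffle + switch stage."""
    stages = n.bit_length() - 1              # log2(n) stages; n must be a power of two
    node = src
    path = []
    for i in range(stages):
        node = ((node << 1) | (node >> (stages - 1))) & (n - 1)   # perfect shuffle
        want = (dst >> (stages - 1 - i)) & 1     # i-th destination bit, MSB first
        node = (node & ~1) | want                # 2x2 switch picks upper/lower output
        path.append(node)
    return path

print(omega_route(0, 5, 8))   # positions after each of the 3 stages, ending at node 5
print(omega_route(1, 7, 8))   # ending at node 7
```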

15 Tree Network
Complexity is O(N)
Can yield low latencies when communicating with neighbors
Can build a fat tree by having multiple incoming and outgoing links
[Figure: a binary tree network connecting leaf nodes P0 through P7]

16 Bisection Bandwidth
Split the N nodes into two groups of N/2 nodes such that the bandwidth between these two groups is minimized: that is the bisection bandwidth
Why is it relevant: if traffic is completely random, the probability of a message going across the two halves is ½ – if all nodes send a message, the bisection bandwidth will have to be N/2
The concept of bisection bandwidth confirms that the tree network is not suited for random traffic patterns, but for localized traffic patterns
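
A quick sanity check of the random-traffic argument (a sketch, not from the slides): with uniformly random destinations, each of the N messages crosses an arbitrary bisection with probability ½, so about N/2 messages per round must fit through it.

```python
# Estimate the expected cross-bisection traffic under uniform random destinations.
import random

def expected_crossings(n_nodes: int, trials: int = 10_000) -> float:
    left = set(range(n_nodes // 2))       # one half of an arbitrary bisection
    total = 0
    for _ in range(trials):
        for src in range(n_nodes):
            dst = random.randrange(n_nodes)
            total += (src in left) != (dst in left)
    return total / trials

print(expected_crossings(64))   # about 32, i.e., N/2 messages cross per round
```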

17 Distributed Switches: Ring
Each node is connected to a 3x3 switch that routes messages between the node and its two neighbors
Effectively a repeated bus: multiple messages can be in transit
Disadvantage: a bisection bandwidth of only 2, and up to N/2 hops between nodes

18 Distributed Switch Options
Performance can be increased by throwing more hardware at the problem: fully-connected switches, where every switch is connected to every other switch, give N² wiring complexity and N²/4 bisection bandwidth
Most commercial designs adopt a point between the two extremes (ring and fully-connected):
 Grid: each node connects with its N, E, W, S neighbors
 Torus: connections wrap around
 Hypercube: links between nodes whose binary names differ in a single bit
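
The three options can be summarized by their neighbor functions; the sketch below is illustrative and not from the lecture.

```python
# Neighbor functions for a k x k grid, a k x k torus, and a hypercube (sketch).
def grid_neighbors(x: int, y: int, k: int):
    """N, E, W, S neighbors inside a k x k mesh (no wraparound)."""
    cand = [(x, y - 1), (x + 1, y), (x - 1, y), (x, y + 1)]
    return [(a, b) for a, b in cand if 0 <= a < k and 0 <= b < k]

def torus_neighbors(x: int, y: int, k: int):
    """Same links as the grid, but coordinates wrap around (mod k)."""
    return [((x + dx) % k, (y + dy) % k)
            for dx, dy in [(0, -1), (1, 0), (-1, 0), (0, 1)]]

def hypercube_neighbors(node: int, dims: int):
    """Nodes whose binary labels differ from `node` in exactly one bit."""
    return [node ^ (1 << b) for b in range(dims)]

print(grid_neighbors(0, 0, 4))       # corner node: only 2 neighbors
print(torus_neighbors(0, 0, 4))      # wraparound: always 4 neighbors
print(hypercube_neighbors(0b000, 3)) # [1, 2, 4]
```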

19 Topology Examples
[Figures: grid, hypercube, and torus topologies]

Criteria (64 nodes)      Bus    Ring    2D torus    6-cube    Fully connected
Performance
  Bisection bandwidth
Cost
  Ports/switch
  Total links
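
The values in this table were on the slide image and did not survive the transcript. As a sketch, they can be re-derived from standard formulas; the conventions below (one link per node-to-switch connection, bidirectional links counted once, switch ports including the node port) are assumptions, so the exact figures may differ from the slide's.

```python
# Recompute the 64-node comparison from standard formulas (illustrative sketch).
import math

def topology_table(n: int = 64):
    k = math.isqrt(n)              # 2D torus side (8 for 64 nodes)
    d = int(math.log2(n))          # hypercube dimensions (6 for 64 nodes)
    return {
        # name: (bisection bandwidth, ports per switch, total links)
        "Bus":             (1,          None,   1),
        "Ring":            (2,          3,      n + n),           # node links + ring links
        "2D torus":        (2 * k,      5,      n + 2 * n),       # node links + 2n torus links
        f"{d}-cube":       (n // 2,     d + 1,  n + n * d // 2),  # node links + (n*d)/2 cube links
        "Fully connected": (n * n // 4, n,      n + n * (n - 1) // 2),
    }

for name, (bisection, ports, links) in topology_table().items():
    print(f"{name:16s} bisection={bisection:<6} ports/switch={ports!s:<6} total links={links}")
```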

21 k-ary d-cube
Consider a k-ary d-cube: a d-dimensional array with k elements in each dimension; there are links between elements that differ in one dimension by 1 (mod k)
Number of nodes:        N = k^d
Number of switches:
Switch degree:
Number of links:
Pins per node:
Avg. routing distance:
Diameter:
Bisection bandwidth:
Switch complexity:
Should we minimize or maximize dimension?

22 k-ary d-Cube
Consider a k-ary d-cube: a d-dimensional array with k elements in each dimension; there are links between elements that differ in one dimension by 1 (mod k)
Number of nodes:        N = k^d
Number of switches:     N
Switch degree:          2d + 1
Number of links:        Nd
Pins per node:          2wd
Avg. routing distance:  d(k-1)/2   (with no wraparound)
Diameter:               d(k-1)     (with no wraparound)
Bisection bandwidth:    2wk^(d-1)
Switch complexity:      (2d + 1)²
Should we minimize or maximize dimension?
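
The slide's formulas can be wrapped in a small calculator to compare a low-dimension and a high-dimension network with the same node count; a sketch, where w is the link width in wires:

```python
# Properties of a k-ary d-cube, straight from the slide's formulas (sketch).
# Note: for k = 2 the wraparound link coincides with the regular link, so the
# bisection formula counts that link twice.
def kary_dcube(k: int, d: int, w: int = 1):
    n = k ** d
    return {
        "nodes / switches":     n,
        "switch degree":        2 * d + 1,
        "number of links":      n * d,
        "pins per node":        2 * w * d,
        "avg routing distance": d * (k - 1) / 2,
        "diameter":             d * (k - 1),
        "bisection bandwidth":  2 * w * k ** (d - 1),
        "switch complexity":    (2 * d + 1) ** 2,
    }

# Same node count, two extremes: an 8-ary 2-cube (8x8 torus) vs. a 2-ary 6-cube
# (hypercube), both with 64 nodes.
for k, d in [(8, 2), (2, 6)]:
    print(f"k={k}, d={d}:", kary_dcube(k, d))
```

Roughly speaking, for a fixed node count a low dimension (large k) keeps switch degree, pin count, and wiring low but increases hop count, while a high dimension does the opposite; this is the trade-off behind the question above.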
