1 Lecture 17: On-Chip Networks Today: background wrap-up and innovations

2 Router Pipeline
Four typical stages:
• RC (routing computation): the head flit indicates the VC it belongs to, the VC state is updated, the headers are examined, and the next output channel is computed (note: this is done for all head flits arriving on the various input channels)
• VA (virtual-channel allocation): head flits compete for the available virtual channels on their computed output channels
• SA (switch allocation): a flit competes for access to its output physical channel
• ST (switch traversal): the flit is transmitted on the output channel
A head flit goes through all four stages; the other flits do nothing in the first two stages (this is an in-order pipeline and flits cannot jump ahead); a tail flit also de-allocates the VC.
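To make the stage usage concrete, here is a minimal Python sketch (my own illustration, not code from the lecture): it simply maps each flit type to the stages it exercises in this baseline four-stage pipeline.

```python
# Minimal sketch (not from the lecture): which stages each flit type
# occupies in the baseline four-stage router pipeline described above.
PIPELINE = ["RC", "VA", "SA", "ST"]

def stages_for(flit_type: str) -> list[str]:
    """Return the pipeline stages a flit of the given type actually exercises."""
    if flit_type == "head":
        return PIPELINE                # head flit goes through all four stages
    if flit_type in ("body", "tail"):
        return ["SA", "ST"]            # body/tail flits skip RC and VA
    raise ValueError(f"unknown flit type: {flit_type}")

# Example: a four-flit packet (head, two body flits, tail).
for ftype in ["head", "body", "body", "tail"]:
    print(ftype, stages_for(ftype))
# The tail flit additionally de-allocates the VC when it leaves the router.
```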

3 Speculative Pipelines
• Perform VA and SA in parallel
  - Note that SA only requires knowledge of the output physical channel, not the VC
  - If VA fails, the successfully allocated channel goes un-utilized

  Baseline four-stage pipeline (each flit one cycle behind the previous flit in SA/ST):

  Cycle         1    2    3    4    5    6    7
  Head flit     RC   VA   SA   ST
  Body flit 1             --   SA   ST
  Body flit 2                  --   SA   ST
  Tail flit                         --   SA   ST

• Perform VA, SA, and ST in parallel (can cause collisions and re-tries)
• Typically, VA is the critical path – can possibly perform SA and ST sequentially
• Router pipeline latency is a greater bottleneck when there is little contention
• When there is little contention, speculation will likely work well!
• Single-stage pipeline? RC, VA, SA, and ST all in one cycle
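A rough sketch of the speculation trade-off (my own illustration, with made-up success probabilities p_va and p_sa): VA and SA are attempted in the same cycle, and when VA fails the speculatively granted switch passage goes unused.

```python
import random

# Rough sketch (illustrative, made-up probabilities) of one cycle of
# speculative VA/SA for a head flit.
def speculative_cycle(p_va: float = 0.7, p_sa: float = 0.8) -> str:
    va_won = random.random() < p_va    # virtual-channel allocation outcome
    sa_won = random.random() < p_sa    # switch allocation, attempted in parallel
    if va_won and sa_won:
        return "proceed to ST"         # speculation succeeded: one cycle saved
    if sa_won and not va_won:
        return "retry (SA grant wasted)"
    return "retry next cycle"

random.seed(0)
print([speculative_cycle() for _ in range(5)])
```

With little contention, both allocations almost always succeed, which is why speculation works well exactly when pipeline latency (rather than queueing) dominates.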

4 Alpha Pipeline
Stages: RC, T, DW, SA1, WrQ, RE, SA2, ST1, ST2, ECC
• RC: routing
• T: transport / wire delay
• DW: update of input unit state
• WrQ: write to input queues
• SA1: switch allocation – local
• SA2: switch allocation – global
• ST1, ST2: switch traversal
• ECC: append ECC information

5 Recent Intel Router
Source: Partha Kundu, “On-Die Interconnects for Next-Generation CMPs”, talk at On-Chip Interconnection Networks Workshop, Dec 2006
• Used for a 6x6 mesh
• 16 B, > 3 GHz
• Wormhole with VC flow control

6 Recent Intel Router Source: Partha Kundu, “On-Die Interconnects for Next-Generation CMPs”, talk at On-Chip Interconnection Networks Workshop, Dec 2006

7 Recent Intel Router Source: Partha Kundu, “On-Die Interconnects for Next-Generation CMPs”, talk at On-Chip Interconnection Networks Workshop, Dec 2006

8 Data Points
On-chip network’s power contribution:
• in the RAW (tiled) processor: 36%
• in a network of compute-bound elements (Intel): 20%
• in a network of storage elements (Intel): 36%
• bus-based coherence (Kumar et al. ’05): ~12%
• Polaris (Intel) network: 28%
• SCC (Intel) network: 10%
Power contributors:
• RAW: links 39%; buffers 31%; crossbar 30%
• TRIPS: links 31%; buffers 35%; crossbar 33%
• Intel: links 18%; buffers 38%; crossbar 29%; clock 13%

9 Network Power
Power-Driven Design of Router Microarchitectures in On-Chip Networks, MICRO’03, Princeton
Energy for a flit:
  E_flit = E_R · H + E_wire · D = (E_buf + E_xbar + E_arb) · H + E_wire · D
where
• E_R = router energy per hop
• H = number of hops
• E_wire = wire transmission energy per unit distance
• D = physical Manhattan distance
• E_buf = router buffer energy
• E_xbar = router crossbar energy
• E_arb = router arbiter energy
This paper treats E_wire · D as the ideal network energy (assuming no change to the application and how it is mapped to physical nodes).
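A short sketch of the energy model above; all numeric values are placeholders for illustration, not figures from the paper.

```python
# Sketch of the per-flit energy model: E_R * H + E_wire * D,
# with E_R = E_buf + E_xbar + E_arb. Numbers are placeholders.
def flit_energy(hops: int, distance: float,
                e_buf: float = 1.0, e_xbar: float = 1.0, e_arb: float = 0.1,
                e_wire: float = 0.5) -> float:
    e_router = e_buf + e_xbar + e_arb              # E_R = E_buf + E_xbar + E_arb
    return e_router * hops + e_wire * distance     # E_R * H + E_wire * D

def ideal_energy(distance: float, e_wire: float = 0.5) -> float:
    # the paper's notion of ideal network energy: only the wire term remains
    return e_wire * distance

print(flit_energy(hops=4, distance=8.0), ideal_energy(distance=8.0))
```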

10 Segmented Crossbar
• By segmenting the row and column lines, parts of these lines need not switch → less switching capacitance (especially if the output and input ports are close to the bottom-left of the crossbar)
• Needs a few additional control signals to activate the tri-state buffers
• Overall crossbar power savings: ~15-30%
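A back-of-the-envelope sketch of why port position matters. This is my own simplified model, not the paper's exact design: it assumes the lines are cut into equal segments and driven from the port-0 corner, so only the segments between the driver and the selected port switch.

```python
import math

# Simplified model (my assumptions): row/column lines cut into equal segments,
# driven from the port-0 corner; count the segments that must switch.
def segments_toggled(in_port: int, out_port: int,
                     num_ports: int = 5, segs_per_line: int = 2) -> int:
    seg_len = math.ceil(num_ports / segs_per_line)
    row_segs = math.ceil((out_port + 1) / seg_len)   # row line driven up to the output column
    col_segs = math.ceil((in_port + 1) / seg_len)    # column line driven up to the input row
    return row_segs + col_segs

# Ports near the driven corner toggle fewer segments (the "bottom-left" remark):
print(segments_toggled(0, 0), segments_toggled(4, 4))   # prints: 2 4
```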

11 Cut-Through Crossbar
• Attempts to optimize the common case: with dimension-order routing, flits make at most one turn and usually travel straight
• Uses 2/3 the number of tri-state buffers and 1/2 the number of data wires
• “Straight” traffic does not go through tri-state buffers
• Some combinations of turns are not allowed, such as E→N and N→W (note that such a combination cannot happen with dimension-order routing; see the sketch below)
• Crossbar energy savings of 39-52%
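To back up that parenthetical, here is a small sketch (mine, purely illustrative) of which turns can occur under XY dimension-order routing: E→N can occur, but N→W never does, so the crossbar can safely omit support for that combination.

```python
# Sketch (illustrative): which turns can occur under XY dimension-order routing.
# in_dir / out_dir are the packet's travel directions entering and leaving a router.
X_DIRS = {"E", "W"}
Y_DIRS = {"N", "S"}
OPPOSITE = {"E": "W", "W": "E", "N": "S", "S": "N"}

def turn_possible_under_xy(in_dir: str, out_dir: str) -> bool:
    if out_dir == OPPOSITE[in_dir]:
        return False      # no U-turns in minimal routing
    if in_dir in Y_DIRS and out_dir in X_DIRS:
        return False      # Y-to-X turns (e.g. N -> W) never occur: X is traversed first
    return True           # going straight and X-to-Y turns are possible

print(turn_possible_under_xy("E", "N"))   # True
print(turn_possible_under_xy("N", "W"))   # False
```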

12 Write-Through Input Buffer
• Input flits must be buffered in case there is a conflict in a later pipeline stage
• If the queue is empty, the input flit can move straight to the next stage: this avoids the buffer read
• To reduce the datapaths, the write bitlines can serve as the bypass path
• Power savings are a function of the read/write energy ratio and the probability of finding an empty queue (see the sketch below)
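A tiny sketch of that savings relationship, under my own simplifying assumption that every flit is still written but the read is skipped whenever the queue is found empty; the numbers are placeholders.

```python
# Placeholder numbers; the point is only the shape of the savings model.
def expected_buffer_energy(e_write: float, e_read: float, p_empty: float) -> float:
    # every flit is written; the read happens only when the queue was not empty
    return e_write + (1.0 - p_empty) * e_read

baseline = expected_buffer_energy(e_write=1.0, e_read=0.8, p_empty=0.0)
write_through = expected_buffer_energy(e_write=1.0, e_read=0.8, p_empty=0.6)
print(f"savings: {100 * (baseline - write_through) / baseline:.1f}%")   # ~26.7%
```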

13 Express Channels
• Express channels connect non-adjacent nodes – flits traveling a long distance can use express channels for most of the way and navigate on local channels near the source/destination (like taking the freeway)
• Helps reduce the number of hops (a simple hop-count model is sketched below)
• The router in each express node is much bigger now
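A simple hop-count sketch, under my own simplification that express nodes sit every `interval` hops along one dimension and the source happens to be an express node; leftover distance is covered on local channels at the destination end.

```python
# Simplified model (my assumptions): express nodes every `interval` hops,
# source located on an express node, leftover distance on local channels.
def hops_with_express(distance: int, interval: int) -> int:
    if distance < interval:
        return distance                       # too short to benefit from the "freeway"
    return distance // interval + distance % interval

for d in (4, 8, 15):
    print(d, "local-only:", d, "with express (interval 4):", hops_with_express(d, 4))
```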

14 Express Channels
• Routing: in a ring, there are 5 possible routes and the best is chosen; in a torus, there are 17 possible routes
• A large express interval results in fewer savings because fewer messages exercise the express channels

15 Express Virtual Channels
• To a large extent, maintain the same physical structure as a conventional network (changes to be explained shortly)
• Some virtual channels are treated differently: they go through a different router pipeline and can effectively avoid most router overheads

16 Router Pipelines
• Normal VC (NVC):
  - at every router, must compete for the next VC and for the switch
  - will get buffered if there is a conflict in VA/SA
• EVC (at an intermediate bypass router):
  - need not compete for a VC (an EVC is a VC reserved across multiple routers)
  - similarly, the EVC is also guaranteed the switch (only one EVC can compete for an output physical channel)
  - since VA/SA are guaranteed to succeed, no need for buffering
  - simple router pipeline: the incoming flit moves directly to the ST stage
• EVC (at an EVC source/sink router):
  - must compete for the VC and the switch (VA/SA) as in a conventional pipeline
  - before moving on, must confirm a free buffer at the next EVC router
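The three cases above can be summarized with a small sketch (my own naming, not the paper's terminology; the real EVC router has more detail, such as the buffer check at source/sink routers).

```python
# Sketch of the three cases above; names are mine, not the paper's API.
def router_pipeline(vc_kind: str, at_bypass_router: bool) -> list[str]:
    if vc_kind == "EVC" and at_bypass_router:
        # VC and switch are pre-reserved across the bypassed routers, so the
        # flit needs no buffering and no allocation: it goes straight to ST.
        return ["ST"]
    # NVC flits everywhere, and EVC flits at their source/sink routers,
    # use the conventional pipeline (and may be buffered on a VA/SA conflict).
    return ["RC", "VA", "SA", "ST"]

print(router_pipeline("EVC", at_bypass_router=True))    # ['ST']
print(router_pipeline("EVC", at_bypass_router=False))   # ['RC', 'VA', 'SA', 'ST']
print(router_pipeline("NVC", at_bypass_router=False))   # ['RC', 'VA', 'SA', 'ST']
```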
