
1 ECE 1749H: Interconnection Networks for Parallel Computer Architectures: Router Microarchitecture
Prof. Natalie Enright Jerger, Winter 2010

2 Introduction
Topology: connectivity
Routing: paths
Flow control: resource allocation
Router microarchitecture: implementation of routing, flow control, and the router pipeline
– Impacts per-hop delay and energy

3 Router Microarchitecture Overview
Focus on the microarchitecture of a virtual channel router
– Router complexity increases with bandwidth demands
– Simple routers are built when high throughput is not needed: wormhole flow control, unpipelined, limited buffering

4 Virtual Channel Router
[Figure: virtual channel router datapath. Each input port (Input 1 to Input 5) has input buffers divided into VC 1–VC 4; route computation logic, a VC allocator, and a switch allocator control a crossbar switch driving Output 1 to Output 5, with credits in/out at the ports]

5 Router Components
Input buffers, route computation logic, virtual channel allocator, switch allocator, crossbar switch
Most on-chip network routers are input buffered
– Use single-ported memories
Buffers store flits for their duration in the router
– Contrast with a processor pipeline, which latches between stages

6 Baseline Router Pipeline
Logical stages
– Fit into physical stages depending on frequency
Canonical 5-stage pipeline
– BW: Buffer Write
– RC: Route Computation
– VA: Virtual Channel Allocation
– SA: Switch Allocation
– ST: Switch Traversal
– LT: Link Traversal
[Pipeline diagram: BW | RC | VA | SA | ST | LT]

7 Atomic Modules and Dependencies in the Router
Dependences between the output of one module and the input of another
– Determine the critical path through the router
– Example: cannot bid for a switch port until routing is performed
[Figure: module dependency chains. Wormhole router: Decode + Routing → Switch Arbitration → Crossbar Traversal. Virtual channel router: Decode + Routing → VC Allocation → Switch Arbitration → Crossbar Traversal. Speculative virtual channel router: Decode + Routing → {VC Allocation in parallel with Speculative Switch Arbitration} → Crossbar Traversal]

8 Atomic Modules
Some router components cannot be easily pipelined
Example: pipelining VC allocation
– Grants might not be correctly reflected before the next allocation
Separable allocator: many wires connect the input and output stages, requiring latches if pipelined

9 Baseline Router Pipeline (2)
Routing computation is performed once per packet
A virtual channel is allocated once per packet
Body and tail flits inherit this info from the head flit
[Pipeline diagram:
Head:   BW RC VA SA ST LT
Body 1:    BW       SA ST LT
Body 2:       BW       SA ST LT
Tail:            BW       SA ST LT]
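
The per-flit schedule can be made concrete with a small Python sketch (an illustration, not from the lecture): head flits walk all stages, while body and tail flits skip RC and VA and line up behind the head for the switch.

```python
# Minimal model of the baseline pipeline, assuming no stalls and one flit
# entering per cycle. Body/tail flits reuse the head's route and VC, so
# they go straight from BW to SA, one cycle behind their predecessor's SA.
HEAD = ["BW", "RC", "VA", "SA", "ST", "LT"]

def schedule(n_body_flits=2):
    """Return {flit name: {cycle: stage}} for a head, bodies, and a tail."""
    rows = {"Head": {c: s for c, s in enumerate(HEAD)}}
    for i in range(1, n_body_flits + 2):            # body flits, then tail
        name = "Tail" if i == n_body_flits + 1 else f"Body {i}"
        row = {i: "BW"}                             # buffer write on arrival
        for off, s in enumerate(["SA", "ST", "LT"]):
            row[3 + i + off] = s                    # SA one cycle after predecessor's SA
        rows[name] = row
    return rows

for name, row in schedule().items():
    print(f"{name:>7}: " + "".join(f"{row.get(c, '..'):>4}" for c in range(9)))
```

Running it reproduces the slide's diagram: the head occupies BW RC VA SA ST LT in consecutive cycles, and each later flit does BW on arrival, then waits for its turn at SA.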

10 Router Pipeline Performance
Baseline (no-load) delay
Ideally, pay only link delay
Techniques to reduce pipeline stages
[Pipeline diagram: BW + NRC | VA | SA | ST | LT]

11 Pipeline Optimizations: Lookahead Routing
At the current router, perform the routing computation for the next router
– Overlap with BW
– Precomputing the route allows flits to compete for VCs immediately after BW
– RC only decodes the route header
– The routing computation needed at the next hop can be computed in parallel with VA
[Pipeline diagram: BW + RC | VA | SA | ST | LT]

12 Pipeline Optimizations: Speculation
Assume the virtual channel allocation stage will be successful
– Valid under low to moderate loads
Perform VA and SA entirely in parallel
If VA is unsuccessful (no virtual channel returned)
– Must repeat VA/SA in the next cycle
Prioritize non-speculative requests
[Pipeline diagram: BW + RC | VA + SA | ST | LT]

13 Pipeline Optimizations: Bypassing
When there are no flits in the input buffer
– Speculatively enter ST
– On a port conflict, speculation is aborted
– In the first stage, a free VC is allocated, next-hop routing is performed, and the crossbar is set up
[Pipeline diagram: VA + RC + Setup | ST | LT]

14 Pipeline Bypassing
No buffered flits when flit A arrives
[Figure: 5x5 crossbar (Inject, N, S, E, W inputs; Eject, N, S, E, W outputs). Step 1a: lookahead routing computation; step 1b: virtual channel allocation; A proceeds directly to switch traversal]

15 Speculation
[Figure: flits A and B arrive at the same router. Steps 1a–1c: lookahead routing computation, virtual channel allocation, and speculative switch allocation; step 2: a port conflict between A and B is detected; step 3: A succeeds in VA but fails in SA and must retry SA]

16 Buffer Organization
Single buffer per input
Multiple fixed-length queues per physical channel
[Figure: one deep queue per physical channel vs. several fixed-length virtual channel queues sharing each physical channel]

17 Buffer Organization
Multiple variable-length queues
– Multiple VCs share one large buffer
– Each VC must have a minimum of 1 flit of buffering to prevent deadlock
– More complex circuitry
[Figure: shared buffer with per-VC head and tail pointers (VC 0, VC 1)]

18 Buffer Organization
Many shallow VCs or few deep VCs?
More VCs ease head-of-line (HOL) blocking
– But require a more complex VC allocator
Under light traffic
– Many shallow VCs are underutilized
Under heavy traffic
– Few deep VCs are less efficient; packets are blocked due to lack of VCs

19 Switch Organization
Heart of the datapath
– Switches bits from inputs to outputs
High-frequency crossbar designs are challenging
Crossbar composed of many multiplexers
– Common in low-frequency router designs
[Figure: five 5:1 multiplexers, each driving one output o0–o4 from inputs i0–i4 under select signals sel0–sel4]
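
As a behavioral illustration of the mux-based organization (a sketch; the function name and flit representation are invented for the example), each output port is just a multiplexer select over the input ports:

```python
def crossbar(inputs, selects):
    """Mux-based crossbar: output j carries the input chosen by selects[j].
    inputs: list of p input values (flits); selects: one input index per
    output, or None if that output is idle this cycle."""
    return [None if s is None else inputs[s] for s in selects]

# 5x5 example: route input 3 to output 0 and input 0 to output 2.
flits = ["A", None, None, "B", None]
print(crossbar(flits, [3, None, 0, None, None]))  # ['B', None, 'A', None, None]
```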

20 Switch Organization: Crosspoint
Area and power scale as O((pw)²)
– p: number of ports (a function of the topology)
– w: port width in bits (determines phit/flit size; impacts packet energy and delay)
[Figure: crosspoint crossbar with w rows and w columns per port, Inject/N/S/E/W inputs and Eject/N/S/E/W outputs]
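
A worked consequence of this scaling: doubling either the port count p or the port width w quadruples crosspoint area and power:

```latex
\frac{A(2p, w)}{A(p, w)} = \frac{(2pw)^2}{(pw)^2} = 4,
\qquad
\frac{A(p, 2w)}{A(p, w)} = \frac{(p \cdot 2w)^2}{(pw)^2} = 4.
```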

21 Crossbar Speedup
Increase internal switch bandwidth
Simplifies allocation, or gives better performance with a simple allocator
– More inputs to select from → higher probability each output port is matched (used) each cycle
Output speedup requires output buffers
– Multiplex onto the physical link
[Figure: 10:5 crossbar (input speedup), 5:10 crossbar (output speedup), 10:10 crossbar (both)]

22 Crossbar Dimension Slicing
Crossbar area and power grow as O((pw)²)
Replace one 5x5 crossbar with two 3x3 crossbars
Suited to dimension-order routing (DOR)
– Traffic mostly stays within one dimension
[Figure: one 3x3 slice for Inject, E-in, W-in → E-out, W-out; a second 3x3 slice for N-in, S-in → N-out, S-out, Eject; the slices connected by an inter-slice port for dimension turns]
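
Under the same quadratic model, the saving from slicing is easy to check (each 3x3 slice already counts the port that connects the two slices):

```latex
A_{5\times5} \propto (5w)^2 = 25w^2,
\qquad
A_{2\times(3\times3)} \propto 2\,(3w)^2 = 18w^2 \approx 0.72 \times A_{5\times5}.
```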

23 Arbiters and Allocators
An allocator matches N requests to M resources
An arbiter matches N requests to 1 resource
Resources are VCs (for virtual channel routers) and crossbar switch ports

24 Arbiters and Allocators (2)
Virtual channel allocator (VA)
– Resolves contention for output virtual channels
– Grants them to input virtual channels
Switch allocator (SA)
– Grants crossbar switch ports to input virtual channels
An allocator/arbiter that delivers a high matching probability translates to higher network throughput
– Must also be fast and/or pipelineable

25 Round-Robin Arbiter
The last request serviced is given the lowest priority
Generate the next priority vector from the current grant vector
Exhibits fairness

26 Round Robin (2)
If G_i is granted, then in the next cycle P_(i+1) is high
[Figure: circuit in which grant signals 0–2 generate next-priority signals 0–2, which are registered as priority signals 0–2 for the following cycle]
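
A minimal Python sketch of this policy (an illustration, not a gate-level model): the arbiter scans from the current priority pointer, and a grant to request i moves the pointer to i + 1.

```python
def round_robin_arbiter(requests, priority):
    """One arbitration cycle. requests: list of bools; priority: index with
    highest priority this cycle. Returns (grant index or None, next priority).
    The requestor just granted gets lowest priority next cycle."""
    n = len(requests)
    for offset in range(n):
        i = (priority + offset) % n
        if requests[i]:
            return i, (i + 1) % n       # G_i granted -> P_(i+1) high next cycle
    return None, priority               # no requests; priority unchanged

# Three requestors, all requesting every cycle: grants rotate 0, 1, 2, 0, ...
prio = 0
for _ in range(4):
    grant, prio = round_robin_arbiter([True, True, True], prio)
    print(grant)
```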

27 Matrix Arbiter
Least-recently-served priority scheme
Triangular array of state bits w_ij for i < j
– Bit w_ij indicates that request i takes priority over request j
– Each time request k is granted, clear all bits in row k and set all bits in column k
Good for a small number of inputs
Fast, inexpensive, and provides strong fairness

28 Matrix Arbiter (2)
[Figure: matrix arbiter circuit. Requests 0–2 and the priority-bit matrix produce grants 0–2; disable signals 0–2 mask lower-priority requests]

29 Matrix Arbiter Example
[Figure: requests A1, A2 (requestor 0), B1 (requestor 1), C1, C2 (requestor 2) at the arbiter]
Bit [1,0] = 1, Bit [2,0] = 1 → requests 1 and 2 have priority over 0
Bit [2,1] = 1 → request 2 has priority over 1
C1 (Request 2) is granted

30 Matrix Arbiter Example (2)
[Figure: remaining requests A1, A2, B1, C2]
Set column 2, clear row 2
Bit [1,0] = 1, Bit [1,2] = 1 → request 1 has priority over 0 and 2
Grant B1 (Request 1)

31 Matrix Arbiter Example (3)
[Figure: remaining requests A1, A2, C2]
Set column 1, clear row 1
Bit [0,1] = 1, Bit [0,2] = 1 → request 0 has priority over 1 and 2
Grant A1 (Request 0)

32 Matrix Arbiter Example (4)
[Figure: remaining requests A2, C2]
Set column 0, clear row 0
Bit [2,0] = 1, Bit [2,1] = 1 → request 2 has priority over 0 and 1
Grant C2 (Request 2)

33 Matrix Arbiter Example (5)
[Figure: remaining request A2]
Set column 2, clear row 2
Grant A2 (Request 0)
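
The grant sequence in the example above can be reproduced with a direct behavioral model of the priority matrix (a sketch; a full n×n bit matrix is used for clarity where hardware would store only the triangular half):

```python
class MatrixArbiter:
    """w[i][j] == 1 means request i has priority over request j."""
    def __init__(self, n, initial):
        self.n = n
        self.w = initial

    def arbitrate(self, requests):
        # Grant the requestor that no other active requestor beats.
        for i in range(self.n):
            if requests[i] and not any(requests[j] and self.w[j][i]
                                       for j in range(self.n) if j != i):
                # Granted: clear row i (i loses priority over everyone),
                # set column i (everyone gains priority over i).
                for k in range(self.n):
                    if k != i:
                        self.w[i][k] = 0
                        self.w[k][i] = 1
                return i
        return None

# Initial state from the example: 1 and 2 beat 0; 2 beats 1.
arb = MatrixArbiter(3, [[0, 0, 0], [1, 0, 0], [1, 1, 0]])
for reqs in ([1, 1, 1], [1, 1, 1], [1, 0, 1], [1, 0, 1], [1, 0, 0]):
    print(arb.arbitrate(reqs))   # 2, 1, 0, 2, 0  (C1, B1, A1, C2, A2)
```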

34 Wavefront Allocator
Arbitrates among requests for inputs and outputs simultaneously
Row and column tokens are granted to a diagonal group of cells
If a cell is requesting a resource, it consumes its row and column tokens
– The request is granted
Cells that cannot use their tokens pass row tokens to the right and column tokens down

35 Wavefront Allocator Example
[Figure: 4x4 wavefront allocator grid with priority diagonals p0–p3]
A requests resources 0, 1, 2
B requests resources 0, 1
C requests resource 0
D requests resources 0, 2
Tokens are inserted along diagonal p0
Entry [0,0] receives a grant and consumes its tokens
Remaining tokens pass down and to the right
Entry [3,2] receives both tokens and is granted

36 Wavefront Allocator Example
[Figure: same grid after further wavefronts]
Entry [1,1] receives both tokens and is granted
All wavefronts have propagated
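
A behavioral sketch of the token-passing scheme (assuming requestors as rows, resources as columns, and diagonal k holding the cells with (i + j) mod n = k; real implementations do all of this combinationally in one cycle):

```python
def wavefront_allocate(requests, start_diag=0):
    """requests[i][j] == 1 if requestor i wants resource j. Tokens sweep
    diagonal by diagonal from the priority diagonal; a requesting cell
    holding both its row and column token is granted."""
    n = len(requests)
    row_free = [True] * n   # row token still available
    col_free = [True] * n   # column token still available
    grants = []
    for wave in range(n):                       # n wavefronts cover all diagonals
        for i in range(n):
            j = (start_diag + wave - i) % n     # cell on diagonal start + wave
            if requests[i][j] and row_free[i] and col_free[j]:
                grants.append((i, j))           # consume both tokens
                row_free[i] = col_free[j] = False
    return grants

# Example from the slides: A, B, C, D requesting {0,1,2}, {0,1}, {0}, {0,2}.
reqs = [[1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 1, 0]]
print(wavefront_allocate(reqs))   # [(0, 0), (3, 2), (1, 1)]
```

The output matches the slides: [0,0] is granted on diagonal p0, [3,2] on the next wavefront, [1,1] on the one after; C gets nothing because resource 0 went to A.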

37 Separable Allocator
Need for pipelineable allocators
Allocator composed of arbiters
– An arbiter chooses one of N requests to a single resource
Separable switch allocator
– First stage: selects a single request at each input port
– Second stage: selects a single request for each output port

38 Separable Allocator
A 3:4 allocator (4 requestors, 3 resources)
First stage: 3:1 arbiters
– Ensure only one grant for each input
Second stage: 4:1 arbiters
– Only one grant asserted for each output
[Figure: each requestor's 3:1 arbiter feeds per-resource 4:1 arbiters; e.g., requestor 1 requests resources A and C, requestor 4 requests resource A; each resource can be granted to any of the four requestors]

39 Separable Allocator Example
4 requestors, 3 resources
Arbitrate locally among each requestor's requests
– Local winners are passed to the second stage
[Figure: first-stage 3:1 arbiters and second-stage 4:1 arbiters; outcome: requestor 1 wins resource A, requestor 4 wins resource C]
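
A sketch of this two-stage allocation (input-first; a simple rotating pick stands in for the per-arbiter state a real design would keep, and the priority pointers here are chosen so the outcome matches the example, with 0-indexed requestors and resources A=0, B=1, C=2):

```python
def rr_pick(asserted, prio):
    """Round-robin style pick: first asserted index at or after prio."""
    n = len(asserted)
    for k in range(n):
        i = (prio + k) % n
        if asserted[i]:
            return i
    return None

def separable_allocate(requests, n_resources, in_prio, out_prio):
    """Input-first separable allocation.
    requests[i]: set of resources requestor i wants.
    Stage 1 (one arbiter per input): pick one request per requestor.
    Stage 2 (one arbiter per output): pick one winner per resource."""
    stage1 = [rr_pick([r in req for r in range(n_resources)], in_prio[i])
              for i, req in enumerate(requests)]
    grants = {}
    for res in range(n_resources):
        winner = rr_pick([s == res for s in stage1], out_prio[res])
        if winner is not None:
            grants[res] = winner
    return grants

# Requestor 3's local arbiter currently favors C, so it forwards C, not A.
reqs = [{0, 2}, {0, 1}, {0, 1}, {0, 2}]
print(separable_allocate(reqs, 3, in_prio=[0, 0, 0, 2], out_prio=[0, 0, 0]))
# -> {0: 0, 2: 3}: requestor 0 wins A, requestor 3 wins C
```

Note how the stage-1 choice matters: if every local arbiter had forwarded A, only one requestor would be matched this cycle, which is exactly the matching-quality weakness of separable allocators.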

40 Virtual Channel Allocator Organization
Depends on the routing function
– If the routing function returns a single VC, the VC allocator only needs to arbitrate among input VCs contending for the same output VC
– If it returns multiple candidate VCs (on the same physical channel), it first needs to arbitrate among the v first-stage requests before forwarding the winning request to the second stage

41 Virtual Channel Allocators
If the routing function returns a single virtual channel
– Need one p_i·v:1 arbiter for each of the p_o·v output virtual channels
– Arbitrate among input VCs competing for the same output VC
[Figure: arbiters 1 through p_o·v, each p_i·v:1]

42 Virtual Channel Allocators
If the routing function returns VCs on a single physical channel
– First stage: one v:1 arbiter for each input VC
– Second stage: one p_i·v:1 arbiter for each output VC
[Figure: p_i·v first-stage v:1 arbiters feeding p_o·v second-stage p_i·v:1 arbiters]

43 Virtual Channel Allocators
If the routing function returns candidate VCs on any physical channel
– First stage: one p_o·v:1 arbiter per input VC, handling the up to p_o·v output VCs that each input VC may desire
– Second stage: one p_i·v:1 arbiter for each output VC
[Figure: p_i·v first-stage p_o·v:1 arbiters feeding p_o·v second-stage p_i·v:1 arbiters]

44 Adaptive Routing & Allocator Design
Deterministic routing
– Returns a single output port
– Switch allocator bids for that output port
Adaptive routing
– Returns multiple candidate output ports
– Switch allocator can bid for all of them, but the granted port must match the granted VC
– Alternatively, return a single output port and reroute if the packet fails VC allocation

45 Separable Switch Allocator
First stage:
– p_i v:1 arbiters
– For each of the p_i input ports, select among the v input virtual channels
Second stage:
– p_o p_i:1 arbiters
– The winner of each v:1 arbiter selects the output port request of the winning VC
– Forward that output port request to the p_i:1 arbiters

46 Speculative VC Router
Non-speculative switch requests must have higher priority than speculative ones
– Two parallel switch allocators: one for speculative requests, one for non-speculative
– At each output, choose non-speculative over speculative
Because VA and SA are done in parallel, a flit can succeed in speculative switch allocation but fail in virtual channel allocation
– The speculation was incorrect, and the switch reservation is wasted
Body and tail flits issue non-speculative switch requests
– They do not perform VC allocation → they inherit the VC from the head flit

47 Router Floorplanning
Determining the placement of ports, allocators, and the switch
Critical path delay
– Determined by the allocators and switch traversal

48 Router Floorplanning
[Figure: example floorplan for ports P0–P4: 16-byte 5x5 crossbar (0.763 mm²), request/grant and misc. control lines (0.043 mm²), SA + BFC + VA logic (0.016 mm²), and per-port buffers, e.g. BF P1 (0.041 mm²)]

49 Router Floorplanning
Placing all input ports on the left side
– Frees up metal layers M5 and M6 for crossbar wiring
[Figure: floorplan with the North/South/East/West/Local input modules along one side, the output modules arranged around the switch, and crossbar wiring routed on M5/M6]

50 Microarchitecture Summary
Ties together topology, routing, and flow control design decisions
Pipelined for fast cycle times
Area and power constraints are important in the NoC design space

51 Interconnection Network Summary
Latency vs. offered traffic
[Figure: latency vs. offered traffic (bits/sec). Minimum latency is set first by the topology, then by the routing algorithm; zero-load latency reflects topology + routing + flow control. Throughput is bounded successively by topology, routing, and flow control]

52 Towards the Ideal Interconnect
Ideal latency
– Solely due to wire delay between source and destination
– D = Manhattan distance
– L = packet size
– b = channel bandwidth
– v = propagation velocity
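
The equation itself was an image on the original slide and did not survive extraction; reconstructed from the definitions above (time of flight plus serialization, the standard ideal-latency expression), it reads:

```latex
T_{ideal} = \frac{D}{v} + \frac{L}{b}
```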

53 State of the Art
Dedicated wiring is impractical
– Long wires are segmented with the insertion of routers
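
A correspondingly hedged expression for the segmented case, assuming H router hops each adding a per-hop delay T_router under no load:

```latex
T_{actual} = H \cdot T_{router} + \frac{D}{v} + \frac{L}{b}
```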

54 Latency-Throughput Gap
Aggressive speculation and bypassing, 8 VCs/port
[Figure: latency vs. offered traffic for this router against the ideal interconnect, showing the remaining latency gap and throughput gap]

55 Towards the Ideal Interconnect
Ideal energy
– Only the energy of the interconnect wires
– D = distance
– P_wire = transmission power per unit length
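
The formula was again an image; one consistent reconstruction from these definitions (wire power over distance D, applied for the L/b serialization time) is:

```latex
E_{ideal} = D \cdot P_{wire} \cdot \frac{L}{b}
```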

56 State of the Art
No longer just wires
– P_router = buffer read/write power + arbitration power + crossbar traversal power

57 Power Gap
[Figure: power gap between state-of-the-art router designs and the ideal wire-only interconnect]

58 Key Research Challenges
Low-power on-chip networks
– Power consumed is largely dependent on the bandwidth the network must support
– The bandwidth requirement depends on several factors
Beyond conventional interconnects
– Power-efficient link designs
– 3D stacking
– Optics
Resilient on-chip networks
– Manufacturing defects and variability
– Soft errors and wearout

59 Next Week
Paper 1: Flattened Butterfly
– Presenter: Robert Hesse
Paper 2: Design and Evaluation of a Hierarchical On-Chip Interconnect
– Presenter: Jason Luu
Paper 3: Design Trade-offs for Tiled CMP On-Chip Networks
Paper 4: Cost-Efficient Dragonfly
Two critiques due at the start of class

