1 Internet Routers
Stochastics Network Seminar, February 22nd, 2002
Nick McKeown, Professor of Electrical Engineering and Computer Science, Stanford University
nickm@stanford.edu, www.stanford.edu/~nickm
2 What a Router Looks Like
Cisco GSR 12416: 6ft x 19in x 2ft; capacity: 160 Gb/s; power: 4.2 kW.
Juniper M160: 3ft x 2.5ft x 19in; capacity: 80 Gb/s; power: 2.6 kW.
3 Points of Presence (POPs)
[Diagram: routers A-F within Points of Presence POP1-POP8, interconnected by long-haul links.]
4 Basic Architectural Components of an IP Router
Control plane: routing protocols and the routing table.
Datapath (per-packet processing): forwarding table lookup and switching.
5 Per-packet processing in an IP Router
1. Accept the packet arriving on an ingress line.
2. Look up the packet's destination address in the forwarding table to identify the outgoing interface(s).
3. Manipulate the packet header: e.g., decrement the TTL and update the header checksum.
4. Send the packet to the outgoing interface(s).
5. Queue the packet until the outgoing line is free.
6. Transmit the packet onto the outgoing line.
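To make the steps concrete, here is a small sketch in Python (my own illustration, not from the talk; names such as ForwardingTable and the dict-based "packet" are invented for the example, and a real line card does all of this in hardware at line rate):

```python
# Minimal sketch of the per-packet datapath steps listed above.
import ipaddress
from collections import deque

class ForwardingTable:
    """Longest-prefix match over a small set of IPv4 prefixes."""
    def __init__(self, routes):
        # routes: iterable of (prefix_string, egress_interface)
        self.routes = sorted(
            ((ipaddress.ip_network(p), intf) for p, intf in routes),
            key=lambda r: r[0].prefixlen, reverse=True)   # longest prefix first

    def lookup(self, dst):
        addr = ipaddress.ip_address(dst)
        for net, intf in self.routes:
            if addr in net:
                return intf
        return None                                       # no route

def update_checksum(pkt):
    pkt["checksum"] = 0   # placeholder; real routers update it incrementally (RFC 1624)

def process_packet(pkt, fib, output_queues):
    egress = fib.lookup(pkt["dst"])       # step 2: forwarding-table lookup
    if egress is None or pkt["ttl"] <= 1:
        return                            # drop: no route, or TTL expired
    pkt["ttl"] -= 1                       # step 3: decrement TTL...
    update_checksum(pkt)                  # ...and fix the header checksum
    output_queues[egress].append(pkt)     # steps 4-5: queue at the egress interface

# Example usage
fib = ForwardingTable([("10.0.0.0/8", "if1"), ("10.1.0.0/16", "if2"), ("0.0.0.0/0", "if0")])
queues = {"if0": deque(), "if1": deque(), "if2": deque()}
process_packet({"dst": "10.1.2.3", "ttl": 64, "checksum": 0}, fib, queues)
print(len(queues["if2"]))   # 1 -- the more specific /16 route wins
```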
6 Generic Router Architecture
[Diagram: an arriving packet (header + data) goes through header processing (IP address lookup against an address table of ~1M prefixes in off-chip DRAM, mapping IP address to next hop, then header update) and is then queued in packet buffer memory (~1M packets, off-chip DRAM).]
7 Generic Router Architecture
[Diagram: the same pipeline replicated per port: each line card has its own header processing with an address table, and its own buffer manager with buffer memory.]
8 Packet processing is getting harder
[Chart: CPU instructions available per minimum-length packet since 1996, falling as line rates grow faster than processors.]
9 Performance metrics
1. Capacity: "maximize C, s.t. volume < 2 m³ and power < 5 kW."
2. Throughput: operators like to maximize usage of expensive long-haul links. This would be trivial with work-conserving output-queued routers.
3. Controllable delay: some users would like predictable delay. This is feasible with output queueing plus weighted fair queueing (WFQ).
10 The Problem
Output-queued switches are impractical.
[Diagram: N inputs at line rate R all write into a single output's DRAM, so the buffer memory must absorb data at rate NR.]
Can't I just use N separate memory devices per output?
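As a quick check on why this is impractical (my own arithmetic, not from the slide): an N-port output-queued switch with line rate R needs each output's buffer memory to sustain

```latex
B_{\mathrm{OQ}} \;\ge\; (N+1)\,R
\qquad\text{(up to $N$ writes and one read per cell time per output buffer).}
```

For example, a 32-port, 10 Gb/s router would need each output's memory to run at roughly 330 Gb/s, far beyond the DRAM speeds on the next slide.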
11 Memory Bandwidth
Commercial DRAM: it's hard to keep up with Moore's Law. The bottleneck is memory speed, and memory speed is not keeping up with Moore's Law.
- DRAM: 1.1x / 18 months
- Moore's Law: 2x / 18 months
- Router capacity: 2.2x / 18 months
- Line capacity: 2x / 7 months
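To make the gap concrete, here is a small calculation of my own (not from the slides) compounding those rates over six years:

```python
# Compound the growth rates quoted on the slide over a 6-year span.
months = 72
rates = {
    "DRAM speed":      (1.1, 18),   # 1.1x every 18 months
    "Moore's Law":     (2.0, 18),
    "Router capacity": (2.2, 18),
    "Line capacity":   (2.0, 7),
}
for name, (factor, period) in rates.items():
    growth = factor ** (months / period)
    print(f"{name:15s}: x{growth:8.1f} over {months} months")
# DRAM improves roughly 1.5x while line capacity grows about a thousand-fold
# over the same period, which is why memory becomes the bottleneck.
```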
12 Generic Router Architecture
[Diagram: N input ports, each with header processing (address table lookup, header update) and packet queues in buffer memory, feeding N outputs through a switch whose configuration is chosen by a centralized scheduler.]
13 Outline of next two talks
What's known about throughput. Today: a survey of ways to achieve 100% throughput.
What's known about controllable delay. Next week (Sundar): controlling delay in routers with a single stage of buffering.
14 Potted history
1. [Karol et al. 1987] Throughput limited to 2 - √2 ≈ 58% by head-of-line blocking for Bernoulli i.i.d. uniform traffic.
2. [Tamir 1989] Observed that with "Virtual Output Queues" (VOQs), head-of-line blocking is reduced and throughput goes up.
15 Potted history
3. [Anderson et al. 1993] Observed the analogy to maximum size matching in a bipartite graph.
4. [M et al. 1995] (a) A maximum size match cannot guarantee 100% throughput. (b) But a maximum weight match can: O(N^3).
5. [Mekkittikul and M 1998] A carefully picked maximum size match can give 100% throughput. Matching complexity: O(N^2.5).
16 Potted history: speedup
6. [Chuang, Goel et al. 1997] Precise emulation of a central shared-memory switch is possible with a speedup of two and a "stable marriage" scheduling algorithm.
7. [Prabhakar and Dai 2000] 100% throughput is possible for maximal matching with a speedup of two.
17 Potted history: newer approaches
8. [Tassiulas 1998] 100% throughput is possible for a simple randomized algorithm with memory.
9. [Giaccone et al. 2001] "Apsara" algorithms.
10. [Iyer and M 2000] Parallel switches can achieve 100% throughput and emulate an output-queued switch.
11. [Chang et al. 2000] A 2-stage switch with a TDM scheduler can give 100% throughput.
12. [Iyer, Zhang and M 2002] Distributed shared memory switches can emulate an output-queued switch.
18 Scheduling crossbar switches to achieve 100% throughput
1. Basic switch model.
2. When traffic is uniform. (Many algorithms...)
3. When traffic is non-uniform, but the traffic matrix is known. Technique: Birkhoff-von Neumann decomposition.
4. When the matrix is not known. Technique: Lyapunov function.
5. When the algorithm is pipelined, or information is incomplete. Technique: Lyapunov function.
6. When the algorithm does not complete. Technique: randomized algorithm.
7. When there is speedup. Technique: fluid model.
8. When there is no algorithm. Techniques: 2-stage load-balancing switch; parallel packet switch.
19 Basic Switch Model
[Diagram: N x N input-queued switch. Arrivals A_i(n) at input i are split into virtual output queues with occupancies L_11(n), ..., L_NN(n); each cell time a matching S(n) connects inputs to outputs, producing departures D_1(n), ..., D_N(n).]
20 Some definitions
Queue occupancies: L_11(n), ..., L_NN(n), where L_ij(n) is the number of cells waiting at input i for output j at time n.
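For reference, the VOQ occupancies in this model evolve as follows (standard form, not quoted from the slide), where A_ij(n) counts arrivals at input i destined to output j and S_ij(n) = 1 when the match connects i to j:

```latex
L_{ij}(n+1) \;=\; \big[\, L_{ij}(n) - S_{ij}(n) \,\big]^{+} \; + \; A_{ij}(n),
\qquad S(n) \in \{0,1\}^{N\times N} \ \text{a matching (at most one 1 per row and per column).}
```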
21 Some definitions of throughput
What it means for traffic to be admissible, and for a switch to deliver 100% throughput when traffic is admissible.
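In symbols (the standard definitions used in this literature, stated here for reference rather than quoted from the slide): arrival rates λ_ij from input i to output j are admissible when no input or output is oversubscribed, and 100% throughput means the queues stay stable for every admissible arrival process.

```latex
\text{Admissible:}\quad \sum_{i=1}^{N} \lambda_{ij} < 1 \ \ \forall j,
\qquad \sum_{j=1}^{N} \lambda_{ij} < 1 \ \ \forall i.
\qquad
\text{100\% throughput:}\quad \limsup_{n\to\infty} \mathbb{E}\Big[\sum_{i,j} L_{ij}(n)\Big] < \infty .
```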
22 Scheduling algorithms to achieve 100% throughput (outline repeated from slide 18).
23 Algorithms that give 100% throughput for uniform traffic
Quite a few algorithms give 100% throughput when traffic is uniform*. For example:
- Maximum size bipartite match
- Maximal size match (e.g. PIM, iSLIP, WFA)
- Deterministic, and a few variants
- Wait-until-full
* "Uniform": the destination of each cell is picked independently and uniformly at random (uar) from the set of all outputs.
24 Maximum size bipartite match
Intuition: maximizes instantaneous throughput for uniform traffic.
[Diagram: the "request" graph has an edge from input i to output j whenever L_ij(n) > 0 (e.g. L_11(n) > 0, L_N1(n) > 0); the scheduler picks a maximum size match on this bipartite graph.]
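As an illustration (my own sketch, not code from the talk), a maximum size match on the request graph can be computed with the classic augmenting-path algorithm:

```python
# Maximum size matching on the "request" graph derived from VOQ occupancies,
# via Kuhn's augmenting-path algorithm. L is an N x N matrix of occupancies.

def maximum_size_match(L):
    n = len(L)
    match_out = [-1] * n        # match_out[j] = input currently matched to output j

    def try_augment(i, visited):
        for j in range(n):
            if L[i][j] > 0 and not visited[j]:
                visited[j] = True
                # Output j is free, or its current input can be re-matched elsewhere.
                if match_out[j] == -1 or try_augment(match_out[j], visited):
                    match_out[j] = i
                    return True
        return False

    for i in range(n):
        try_augment(i, [False] * n)
    return {(match_out[j], j) for j in range(n) if match_out[j] != -1}

# Example: 3x3 switch with a few non-empty VOQs.
L = [[2, 0, 1],
     [0, 3, 0],
     [1, 0, 0]]
print(maximum_size_match(L))   # {(0, 2), (1, 1), (2, 0)} -- 3 edges, a maximum match
```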
25 Aside: Maximal Matching
A maximal matching is one in which edges are added one at a time and are never later removed from the matching; i.e. no augmenting paths are allowed (they would remove edges added earlier). No input or output is left unnecessarily idle.
26 Aside: Example of Maximal Size Matching
[Diagram: the same request graph on inputs A-F and outputs 1-6, matched two ways: a maximal matching on the left and a (larger) maximum matching on the right.]
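For contrast (again my own sketch), a greedy maximal match on the same request graph adds edges one at a time and never backtracks, so it can end up smaller than the maximum match computed above:

```python
# Greedy maximal match: edges are added one at a time and never removed.

def maximal_match(L):
    n = len(L)
    used_in, used_out, match = set(), set(), set()
    for i in range(n):
        for j in range(n):
            if L[i][j] > 0 and i not in used_in and j not in used_out:
                match.add((i, j))
                used_in.add(i)
                used_out.add(j)
                break
    return match

L = [[2, 0, 1],
     [0, 3, 0],
     [1, 0, 0]]
print(maximal_match(L))   # {(0, 0), (1, 1)} -- cannot be extended, yet only 2 edges
```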
27 Algorithms that give 100% throughput for uniform traffic (list repeated from slide 23).
28 Deterministic Scheduling Algorithm
If arriving traffic is i.i.d. with destinations picked uar across outputs, then a round-robin schedule gives 100% throughput.
[Diagram: a cyclic sequence of permutations connecting inputs A-D to outputs 1-4.]
Variation 1: if permutations are picked uar from the set of all N! permutations, this also gives 100% throughput.
Variation 2: if permutations are picked uar from the round-robin sequence above, this also gives 100% throughput.
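A tiny sketch of the round-robin schedule as a sequence of cyclic-shift permutations (my own illustration):

```python
# At cell time n, input i is connected to output (i + n) mod N, so every
# input-output pair is served exactly once every N cell times.

def round_robin_schedule(N, n):
    """Return the matching used at cell time n as a dict input -> output."""
    return {i: (i + n) % N for i in range(N)}

N = 4
for n in range(N):
    print(n, round_robin_schedule(N, n))
# 0 {0: 0, 1: 1, 2: 2, 3: 3}
# 1 {0: 1, 1: 2, 2: 3, 3: 0}
# ...
```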
29 A simple wait-until-full algorithm
The following algorithm appears to be stable for Bernoulli i.i.d. uniform arrivals:
1. If any VOQ is empty, do nothing (i.e. serve no queues).
2. If no VOQ is empty, pick a permutation uar (from either a fixed sequence of permutations or the set of all permutations).
30 Some simple algorithms that achieve 100% throughput
31 Some observations
A maximum size match (MSM) maximizes instantaneous throughput. But an MSM is complex: O(N^2.5). It turns out that there are many simple algorithms that give 100% throughput for uniform traffic. So what happens if the traffic is non-uniform?
32 Why doesn't maximizing instantaneous throughput give 100% throughput for non-uniform traffic?
[Example comparing the three possible matches S(n) for a small switch.]
33 Simulation of simple 3x3 example
34 Scheduling algorithms to achieve 100% throughput (outline repeated from slide 18).
35 Example 1: (Trivial) scheduling to achieve 100% throughput
Assume we know the traffic matrix and the arrival pattern is deterministic. Then we can simply choose a fixed, periodic sequence of permutations S(n) that serves each input-output pair at its arrival rate.
36 Example 2: With random arrivals, but known traffic matrix
Assume we know the traffic matrix, but the arrival pattern is random. Then we can again choose a fixed sequence of permutations matched to the rates. In general, if we know the rate matrix Λ, can we pick a sequence S(n) to achieve 100% throughput?
37 Birkhoff-von Neumann Decomposition
Any admissible rate matrix Λ can be decomposed into a linear (convex) combination of permutation matrices (M_1, ..., M_r).
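Stated symbolically (the standard form of the decomposition, not copied from the slide): the rate matrix is written as a weighted sum of permutation matrices, and serving permutation M_k for a fraction φ_k of the cell times gives every input-output pair at least its required rate.

```latex
\Lambda \;=\; \sum_{k=1}^{r} \phi_k\, M_k,
\qquad \phi_k > 0,\quad \sum_{k=1}^{r} \phi_k \le 1,
\qquad M_k \ \text{a permutation matrix}.
```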
38 In practice...
Unfortunately, we usually don't know the traffic matrix Λ a priori, so we can: measure or estimate Λ, or not use Λ. In what follows, we will assume we don't know or use Λ.
39 Scheduling algorithms to achieve 100% throughput (outline repeated from slide 18).
40 When the traffic matrix is not known
41 Problem
42 Maximum weight matching
[Diagram: the same N x N switch model as before; the "request" graph is now weighted by the VOQ occupancies (edge (1,1) has weight L_11(n), edge (N,1) has weight L_N1(n), and so on), and the scheduler picks S*(n), the maximum weight match on this graph.]
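A minimal sketch of this scheduler (my own illustration, assuming SciPy's linear_sum_assignment as the matching engine; the weights are simply the current VOQ occupancies):

```python
# Maximum weight matching scheduler: each cell time, weight edge (i, j) by the
# VOQ occupancy L[i][j] and pick the permutation with the largest total weight.
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian-style solver

def mwm_schedule(L):
    """L: N x N array of VOQ occupancies. Returns the (input, output) pairs to serve."""
    L = np.asarray(L)
    rows, cols = linear_sum_assignment(L, maximize=True)  # maximum weight matching
    # Only serve pairs that actually have a cell waiting.
    return [(i, j) for i, j in zip(rows, cols) if L[i, j] > 0]

# Example: a 3x3 switch.
L = np.array([[5, 0, 1],
              [0, 2, 4],
              [3, 0, 0]])
print(mwm_schedule(L))   # [(0, 0), (1, 2)] -- the match also pairs (2, 1), but that VOQ is empty
```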
43 Outline of Proof
44 Choosing the weight
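The technique referred to on slides 43-44 is the quadratic Lyapunov (drift) argument; a compressed statement of the standard version, with the edge weights taken to be the queue occupancies (my paraphrase, not the slide's exact wording):

```latex
V(n) \;=\; \sum_{i,j} L_{ij}(n)^2, \qquad
\mathbb{E}\big[\, V(n+1) - V(n) \,\big|\, L(n) \,\big] \;\le\; k_1 \;-\; k_2 \sum_{i,j} L_{ij}(n)
```

for some constants k_1, k_2 > 0, whenever the arrivals are admissible and S(n) is the maximum weight match under these weights. Negative drift outside a bounded set implies the queue occupancies remain stable, i.e. 100% throughput.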
45 Scheduling algorithms to achieve 100% throughput (outline repeated from slide 18).
46 100% throughput with pipelining
47 100% throughput with incomplete information
48 Scheduling algorithms to achieve 100% throughput (outline repeated from slide 18).
49 Achieving 100% throughput when the algorithm does not complete
Randomized algorithms:
1. Basic idea (Tassiulas)
2. Reducing delay (Shah, Giaccone and Prabhakar)
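A sketch of the basic randomized idea (my own illustration of the Tassiulas-style scheme: remember last slot's match, draw one fresh random match, and use whichever is heavier):

```python
# Randomized scheduler with memory: only linear work per cell time (draw one
# random permutation and compare weights), yet it achieves 100% throughput.
import random

def weight(match, L):
    return sum(L[i][j] for i, j in match)

def randomized_schedule(L, prev_match):
    n = len(L)
    perm = list(range(n))
    random.shuffle(perm)                              # one uniformly random permutation
    candidate = [(i, perm[i]) for i in range(n)]
    # Keep whichever matching is heavier: the fresh random one or last slot's.
    # (A real scheduler would then only serve the pairs whose VOQ is non-empty.)
    return max(candidate, prev_match, key=lambda m: weight(m, L))

# Example usage across a few cell times.
L = [[5, 0, 1], [0, 2, 4], [3, 0, 0]]
match = [(i, i) for i in range(3)]                    # arbitrary initial matching
for _ in range(5):
    match = randomized_schedule(L, match)
print(match, weight(match, L))
```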
50 Scheduling algorithms to achieve 100% throughput (outline repeated from slide 18).
51 Speedup and Combined Input Output Queueing (CIOQ)
[Diagram: the same N x N switch model, with arrivals A_i(n), VOQ occupancies L_ij(n), matchings S(n) and departures D_j(n).]
With speedup s, the matching is performed s times per cell time, and up to s cells are removed from each VOQ. Therefore, output queues are required.
52 Fluid Model [Dai and Prabhakar]
53 Scheduling algorithms to achieve 100% throughput (outline repeated from slide 18).
54 2-stage switch and no scheduler
Motivation:
1. If traffic is uniformly distributed, then even a deterministic schedule gives 100% throughput.
2. So why not force non-uniform traffic to be uniformly distributed?
55 2-stage switch and no scheduler
[Diagram: arrivals A_1(n), ..., A_N(n) enter a bufferless load-balancing stage with configuration S_1(n); its outputs A'_1(n), ..., A'_N(n) feed the VOQs L_11(n), ..., L_NN(n) of a buffered switching stage with configuration S_2(n), which produces departures D_1(n), ..., D_N(n).]
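A toy sketch of the idea (my own illustration, assuming both stages follow the same fixed cyclic-shift sequence, in the spirit of the TDM scheduler mentioned in the history): stage 1 spreads each input's cells evenly over the middle ports, so the buffered second stage sees roughly uniform traffic, which a deterministic schedule already handles.

```python
# Illustrative 2-stage load-balanced switch: no scheduler, just two fixed
# cyclic-shift stages.
from collections import deque

class TwoStageSwitch:
    def __init__(self, N):
        self.N = N
        # voq[k][j]: cells held at middle port k, destined to output j.
        self.voq = [[deque() for _ in range(N)] for _ in range(N)]
        self.n = 0  # cell time

    def step(self, arrivals):
        """arrivals[i] is None or the destination output of the cell arriving at input i."""
        N, n = self.N, self.n
        # Stage 1 (bufferless load balancing): input i -> middle port (i + n) mod N.
        for i, dst in enumerate(arrivals):
            if dst is not None:
                self.voq[(i + n) % N][dst].append(dst)
        # Stage 2 (switching): middle port k -> output (k + n) mod N.
        departures = []
        for k in range(N):
            j = (k + n) % N
            if self.voq[k][j]:
                departures.append(self.voq[k][j].popleft())
        self.n += 1
        return departures

# Example: all traffic heads to output 0 (maximally non-uniform), yet stage 1
# spreads it across the middle ports so stage 2 serves roughly one cell per slot.
sw = TwoStageSwitch(4)
for t in range(8):
    print(t, sw.step([0, None, None, None]))
```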
56 2-stage switch with no scheduler
57 Scheduling algorithms to achieve 100% throughput (outline repeated from slide 18).
58 Throughput results
Theory:
- Input queueing (IQ): 58% [Karol, 1987]
- IQ + VOQ, maximum weight matching: 100% [M et al., 1995]
- Different weight functions, incomplete information, pipelining: 100% [Various]
- Randomized algorithms: 100% [Tassiulas, 1998]
- IQ + VOQ, maximal size matching, speedup of two: 100% [Dai & Prabhakar, 2000]
Practice:
- Input queueing (IQ)
- IQ + VOQ, sub-maximal size matching (e.g. PIM, iSLIP)
- Various heuristics, distributed algorithms, and amounts of speedup
59 Outline of next talk (Sundar Iyer)
What's known about controllable delay:
- Emulation of output-queued switches
- PIFOs and WFQ
- Single-buffered switches: parallel packet switches and distributed shared memory switches