1 Designing Packet Buffers for Router Linecards Sundar Iyer, Ramana Kompella, Nick McKeown Reviewed by: Sarang Dharmapurikar

2 Background
● Routers need to buffer packets during congestion
  - Rule of thumb: buffer size should be RTT x R
  - With RTT = 0.25 s and R = 40 Gb/s, the buffer size is 10 Gbits
    o SRAM can't be used: the devices are too small and consume too much power
    o Only SDRAM provides the required density
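A quick back-of-the-envelope check of the RTT x R sizing rule with the numbers above (Python used here purely for the arithmetic; it is not part of the original slides):

```python
RTT = 0.25                     # round-trip time in seconds
R = 40e9                       # line rate in bits per second

buffer_bits = RTT * R          # rule-of-thumb buffer size
print(buffer_bits / 1e9, "Gbits")   # -> 10.0 Gbits, far beyond practical SRAM capacity
```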

3 Problems
● SDRAM is slow, so a single device provides limited memory bandwidth
● Why not use a big data bus to get more memory bandwidth?

4 Answer
● [Figure: a 320-byte-wide memory word holding packets A, B, and C; packets shorter than the word leave part of the access unused, i.e., underutilized bandwidth]
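To make the underutilization concrete: with a 320-byte-wide access, a short packet occupies only a small fraction of the word. The 40-byte minimum packet size below is an illustrative assumption, not a number from the slides:

```python
ACCESS_BYTES = 320             # wide-word access width from the slide
packet_bytes = 40              # assumed minimum-size packet (illustration only)

utilization = packet_bytes / ACCESS_BYTES
print(f"bus utilization for a {packet_bytes}-byte packet: {utilization:.1%}")  # 12.5%
```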

5 Parallel Memory Banks
● However, the packet departure order is not known in advance
● The scheduler might request packets that happen to be stored in the same bank
● Hence one bank will be busy while the others sit idle, degrading throughput
● [Figure: parallel memory banks, each accessed 320 bytes at a time]

6 Alternative
● Cache arriving packets and write them to SDRAM one wide word at a time, each word belonging to a single queue
● Likewise, read from DRAM one wide word per queue at a time, hand out the bytes that are needed and cache the rest
● [Figure: 320-byte words A1, A2, B1, B2, C1, C2, one word per queue access]
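A minimal sketch of the write-path aggregation just described: arriving bytes are staged per queue in an SRAM tail cache and flushed to DRAM only in full wide-word blocks, so every DRAM write is fully utilized. The data structures and the dram_write callback are assumptions made for this illustration:

```python
from collections import defaultdict

B = 320                                  # wide-word / block size in bytes (from the slide)
tail_cache = defaultdict(bytearray)      # per-queue staging buffers held in SRAM

def on_arrival(queue_id, data, dram_write):
    """Stage arriving bytes for one queue; flush to DRAM only in full B-byte words."""
    buf = tail_cache[queue_id]
    buf.extend(data)
    while len(buf) >= B:
        dram_write(queue_id, bytes(buf[:B]))   # one fully utilized wide write
        del buf[:B]                            # keep the remainder cached in SRAM
```

For example, 500 bytes arriving for one queue would trigger a single 320-byte DRAM write and leave 180 bytes staged in the tail cache.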

7 Architecture of the Hybrid SRAM-SDRAM buffer

8 Head SRAM buffer
● [Figure: Q head FIFOs (1, 2, ..., i, ..., Q), each w bytes deep, with labels X(i,t) and D(i,t) for queue i at time t, read by a scheduler]
● b bytes arrive every b time slots, all for the same queue
● b bytes leave every b time slots; each of these b bytes can be from any queue
● Objective: put a bound on w

9 Lower Bound on the Head-SRAM Size
● Theorem 1: w > (b-1)(2 + ln Q)
● Example: let b = 3 and Q = 9
● Bytes required = (b-1) starting bytes + (b-1) additional bytes + the under-run
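Evaluating the bound for the example values (a worked calculation added here for concreteness, not from the original slides):

```latex
w > (b-1)(2 + \ln Q) = (3-1)(2 + \ln 9) \approx 2 \times (2 + 2.20) \approx 8.4
```

so each head FIFO must hold at least 9 bytes, i.e. three full blocks of b = 3 bytes.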

10 Lower Bound on the Head-SRAM Size
● Proof of Theorem 1 (adversarial request pattern):
● First iteration: read one byte from each FIFO
  - Q/b FIFOs will be replenished with b bytes each
  - Q(1-1/b) FIFOs will have a deficit D(i,t) = 1
● Second iteration: read one byte from each of the Q(1-1/b) FIFOs having deficit 1
  - Q(1-1/b)/b of them will be replenished with b bytes each
  - Q(1-1/b)^2 FIFOs will have a deficit D(i,t) = 2
● x-th iteration:
  - Q(1-1/b)^x FIFOs will have a deficit D(i,t) = x
● Solving Q(1-1/b)^x = 1 gives x > (b-1) ln Q
● Hence the buffer requirement is w > (b-1)(2 + ln Q)
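The step from Q(1-1/b)^x = 1 to x > (b-1) ln Q, spelled out (an added derivation, consistent with the slide's result):

```latex
Q\left(1 - \tfrac{1}{b}\right)^{x} = 1
\;\Longrightarrow\;
x = \frac{\ln Q}{\ln\frac{b}{b-1}} \;\ge\; (b-1)\ln Q,
\qquad \text{since } \ln\frac{b}{b-1} = \ln\left(1 + \tfrac{1}{b-1}\right) \le \tfrac{1}{b-1}.
```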

11 A Memory Management Algorithm
● Objective: give an algorithm whose SRAM requirement comes close to this bound
● Most Deficit Queue First (MDQF), sketched below:
  - Service (replenish) the SRAM FIFO with the most deficit first
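A minimal sketch of the MDQF replenishment rule as stated on this slide. The per-queue deficit bookkeeping follows the model on slide 8 (b reads per block time, one b-byte refill from DRAM); the function and variable names are assumptions made for this example:

```python
def mdqf_step(deficit, reads_this_block, b):
    """One block time: account for b head-cache reads, then refill the neediest queue.

    deficit          -- dict: deficit[q] = bytes queue q's head FIFO still owes (D(q, t))
    reads_this_block -- the b queue indices the scheduler reads in this block time
    b                -- block size in bytes (one DRAM access refills b bytes)
    """
    for q in reads_this_block:
        deficit[q] += 1                      # every read deepens that queue's deficit
    neediest = max(deficit, key=deficit.get) # MDQF: pick the most-deficited queue
    deficit[neediest] = max(0, deficit[neediest] - b)  # spend the DRAM block on it
    return neediest

# Example: Q = 4 queues, b = 3; the scheduler reads queue 0 twice and queue 2 once,
# so queue 0 has the largest deficit and receives the refill.
deficits = {q: 0 for q in range(4)}
refilled = mdqf_step(deficits, [0, 0, 2], b=3)
```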

12 Some Terminology
● [Figure: the Q head FIFOs, numbered 1, 2, 3, 4, ..., Q, with associated labels π1, π2, π3, π4, ..., πQ]

13 MDQF Analysis
● Lemma 1: F(1) < b(2 + ln Q)
● [Figure: timeline from t-b to t relating F(2, t-b) and F(1), with queues i and j and an interval of b slots marked]

14 MDQF Analysis
● [Figure: timeline from t-b to t relating F(3, t-b) and F(2), with queues m, n, and p marked]

15 MDQF Analysis
● Theorem 2: for MDQF to guarantee that a requested byte is in SRAM, it is sufficient to hold b(3 + ln Q) bytes in each head FIFO
● [Figure: timeline from t-b to t relating F(i+1, t-b) and F(i)]

16 An MMA that Tolerates Bounded Pipeline Delay
● Pre-compute (look ahead at) some of the upcoming memory requests to find out which queue will under-run
● Critical queue: a queue with more requests in the lookahead buffer than bytes to give
● Earliest critical queue: the queue that turns critical the earliest

17 Most Deficit Queue First with Pipeline Delay (MDQFP)
● Algorithm (a sketch follows this slide):
  - Replenish the earliest critical queue first
  - If there are no critical queues, replenish the one that is most likely to become critical in the future
● Lemma 3: bounds F_x(1), the largest deficit of any single queue when requests are served with a pipeline delay of x time slots
● Theorem 3: w = F_x(1) + b
● Corollary 1: as x → Qb, w → 2b
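An illustrative sketch of the MDQFP selection rule using a lookahead buffer of pending requests. The scan order, the data structures, and the most-deficit fallback used when no queue is critical are assumptions made for this example, not the paper's code:

```python
def earliest_critical_queue(lookahead, head_bytes):
    """Return the queue that runs out of head-cache bytes soonest, or None.

    lookahead  -- queue indices in the order the scheduler will read them
    head_bytes -- head_bytes[q] = bytes currently available in queue q's head FIFO
    """
    remaining = dict(head_bytes)
    for q in lookahead:
        remaining[q] -= 1
        if remaining[q] < 0:     # more requests than bytes to give: q is critical
            return q             # scanning in request order finds the earliest one
    return None

def mdqfp_choose(lookahead, head_bytes, deficit):
    """Pick the queue to replenish with the next b-byte DRAM block."""
    q = earliest_critical_queue(lookahead, head_bytes)
    if q is not None:
        return q
    # No critical queue yet: fall back to the queue with the most deficit
    # (assumed here as the "most likely to become critical" choice).
    return max(deficit, key=deficit.get)
```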

18 Tradeoff Between SRAM Size and Pipeline Delay
● [Plot: total SRAM, Q·F_x(1), versus pipeline delay x, for Q = 1000 and b = 10]

19 Dynamic SRAM Allocation
● So far every queue had the same, statically allocated length
● SRAM can be allocated dynamically to the queues according to their requirements
  - further reduction in SRAM size
● The amount of SRAM can be reduced to Q(b-1) for a lookahead buffer of Q(b-1) + 1
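Plugging in the numbers from the tradeoff slide (Q = 1000, b = 10) shows how much the lookahead and dynamic allocation buy; this comparison is a quick calculation added here, not a figure from the slides:

```python
import math

Q, b = 1000, 10                             # values from the tradeoff slide

no_delay = Q * b * (3 + math.log(Q))        # per Theorem 2: b(3 + ln Q) bytes per queue
max_lookahead = Q * (b - 1)                 # dynamic allocation with lookahead Q(b-1) + 1

print(f"no pipeline delay: {no_delay:,.0f} bytes")    # ~99,000 bytes
print(f"full lookahead:    {max_lookahead:,} bytes")  #   9,000 bytes
```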

20 Conclusions
● High-capacity, high-throughput packet buffers are needed in any router linecard
● Packet buffers built from SRAM alone are impractical, so SDRAMs are used
● SDRAM buffer memory combined with an SRAM cache can give the required throughput performance
● Without any pipeline delay, the SRAM requirement scales as Qb ln Q
● With a tolerable delay of Qb time slots, the requirement scales as Qb

