Buffering Strategies in ATM Switches
Carey Williamson, Department of Computer Science, University of Calgary



Slide 2: Introduction
- Up to now, we have assumed bufferless switches and bufferless switch fabrics
- When contention occurs, cells are dropped
- Not practical to do this!

Slide 3: Alternatives
- Buffering
  - a cell that cannot be transmitted on its desired path or port right now can wait in a buffer to try again later
  - several possibilities: input buffering, output buffering, crosspoint (internal) buffering, or a combination thereof

Slide 4: Alternatives (Cont'd)
- Recirculation
  - a cell that cannot be transmitted on its desired path or port right now is sent back to the input ports on a recirculation line, to try again in the next time slot (with higher priority)
  - hopefully it will get through next time

Slide 5: Alternatives (Cont'd)
- Deflection routing
  - a cell that cannot be transmitted on its desired path or port right now is sent out "another" (available) port instead, in the hope that it will find an alternate path to its destination
  - example: the tandem banyan switch

Slide 6: Alternatives (Cont'd)
- Redundant paths
  - design a switch fabric with multiple possible paths from each input port to each output port (e.g., Benes)
  - greater freedom for path selection
  - flexible, adaptive, less contention
  - works well with deflection routing
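The deflection idea above can be sketched in a few lines. This is a minimal, hypothetical single-stage decision in Python; the function name and the port representation are illustrative, not from the slides:

```python
def pick_output(desired_port, busy_ports, all_ports):
    """Deflection routing sketch: use the desired output port if it is
    free; otherwise deflect the cell out some other free port, hoping a
    later hop will still reach the destination (rather than buffering
    or dropping the cell)."""
    if desired_port not in busy_ports:
        return desired_port                  # no contention: go direct
    for p in all_ports:
        if p not in busy_ports:
            return p                         # deflected out another port
    return None                              # every port busy this slot
```

In the tandem banyan design mentioned above, a misrouted (deflected) cell gets further chances in subsequent banyan networks rather than being dropped immediately.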

Slide 7: Buffering Issues
There are three main factors that affect the performance of switch buffering strategies:
- buffer location
- buffer size
- buffer management strategy

Slide 8: Buffer Location
Several choices:
- input buffering
- output buffering
- internal buffering
- combination of the above

Slide 9: Input Buffering
- In the event of output port contention (which can be detected ahead of time at the input ports), let one of the contending cells (chosen at random) go ahead, and hold the other(s) at the input ports
- The others try to go through the switch fabric the next chance they get

Slide 10: Input Buffering (Cont'd)
- Can be a poor choice!
- Input buffering suffers from the head-of-line (HOL) blocking problem
- Can significantly degrade the performance of the switch

Slide 11: HOL Blocking Problem
- The cell at the head of the input queue cannot go because of output port contention
- Because of the FCFS nature of the queue, all cells behind the head cell are also blocked from going
- Even if the output port that they want is idle!

Slides 12-28: HOL Blocking Example (2x2 switch)
[Animated sequence over successive time slots: cells, each labeled with the output port (0 or 1) it wants, arrive at the two input queues and depart through the switch. When both head-of-line cells want the same output port, only one can depart that slot; a cell queued behind the loser may want the other, idle, output port but still cannot go: this is HOL blocking.]

Slide 29: HOL Blocking: Summary
- Cells can end up waiting at input ports even if their desired output port is idle
- How often can this happen?
- For a 100% loaded 2x2 switch, HOL blocking happens 25% of the time
- Effective throughput: 0.75

Slide 30: HOL Blocking (Cont'd)
- The HOL blocking problem does NOT go away for larger switch sizes
- In fact, it gets even worse!

Slide 31: Maximum Throughput for Input Buffering

  N     Maximum throughput
  1         1.0000
  2         0.7500
  3         0.6825
  4         0.6553
  5         0.6399
  6         0.6302
  7         0.6234
  8         0.6184
  ∞         0.5858
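The throughput values in the table above can be reproduced with a small Monte Carlo simulation. This is a sketch under the usual saturation assumptions (every input always has a head-of-line cell with a uniformly random destination; each contended output serves one randomly chosen contender per slot); the function name and parameters are illustrative:

```python
import random

def hol_saturation_throughput(n_ports, n_slots=20000, seed=1):
    """Monte Carlo estimate of input-buffered switch throughput at
    saturation under HOL blocking. Assumed model: every input always
    holds a head-of-line (HOL) cell with a uniformly random destination;
    each slot, every contended output serves one randomly chosen
    contender, and only winners draw a fresh HOL cell (losers retry)."""
    rng = random.Random(seed)
    hol = [rng.randrange(n_ports) for _ in range(n_ports)]  # HOL destinations
    departures = 0
    for _ in range(n_slots):
        by_output = {}
        for inp, dest in enumerate(hol):
            by_output.setdefault(dest, []).append(inp)
        for contenders in by_output.values():
            winner = rng.choice(contenders)       # one departure per output
            hol[winner] = rng.randrange(n_ports)  # winner's next HOL cell
            departures += 1
    return departures / (n_ports * n_slots)
```

With `n_ports=2` the estimate lands near 0.75, and it falls toward the large-N asymptote of roughly 0.5858 as the port count grows.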

Slide 32: [Figure: maximum achievable throughput for input buffering versus number of ports N (0 to 100); throughput declines from 0.75 toward the 0.5858 asymptote.]

Slide 33: Solutions for HOL Blocking
- Non-FIFO service discipline
- Lookahead "windowing" schemes
  - e.g., if the front cell is blocked, then try the next cell, and so on
  - maximum lookahead W (e.g., W = 8)
  - called "HOL bypass"
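A minimal sketch of windowed (HOL bypass) cell selection in Python. The representation is assumed, not from the slides: each queued cell carries the output port it wants, and the caller tracks which outputs are free this slot:

```python
from collections import deque, namedtuple

Cell = namedtuple("Cell", "output")  # a cell tagged with its desired output

def select_cell(queue, free_outputs, window=8):
    """Scan up to `window` cells from the head of the input queue and
    dequeue the first one whose desired output port is free, bypassing a
    blocked HOL cell; return None if nothing in the window can go."""
    for pos, cell in enumerate(queue):
        if pos >= window:
            break
        if cell.output in free_outputs:
            queue.rotate(-pos)        # bring the chosen cell to the front
            chosen = queue.popleft()
            queue.rotate(pos)         # restore the order of the rest
            return chosen
    return None
```

With `window=1` this degenerates to plain FIFO service; a larger W recovers some of the throughput lost to HOL blocking, at the cost of more complex arbitration hardware.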

Slide 34: Solutions for HOL Blocking (Cont'd)
- Don't use input buffering!
- Use output buffering instead

Slide 35: Output Buffering
- In the event of output port contention, send all the cells through the switch fabric, letting one of the contending cells (chosen at random) use the output port, but holding the other(s) in buffers at the output ports

Slide 36: Output Buffering (Cont'd)
- Main difference: cells have already gone through the switch fabric
- As soon as the port is idle, the cells go out (i.e., work conserving)
- Nothing else can get in their way
- Achieves the maximum possible throughput

Slide 37: Buffer Sizing
- Need a buffer large enough to keep cell loss below an acceptable threshold (e.g., CLR = 10^-6)
- The purpose of buffers is to handle short-term statistical fluctuations in queue length

Slide 38: Buffer Sizing (Cont'd)
- Obvious fact #1: the larger the buffer size, the lower the cell loss
- Obvious fact #2: the larger the buffer size, the larger the maximum possible queueing delay (and the cost of the switch!)
- Tradeoff: cell loss versus cell delay (and cell delay jitter) (and cost)

Slide 39: Buffer Sizing (Cont'd)
- Reality: finite buffers
  - e.g., 100s or 1000s of cells per port
- Buffers need to be large enough to handle the bursty characteristics of integrated ATM traffic
- General rule of thumb: buffer size = 10 x maximum burst size
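The loss-versus-size tradeoff can be illustrated with a toy single-port simulation. The traffic model here is an assumption for illustration, not from the slides: bursts of up to 5 cells arrive at one output queue that drains one cell per slot:

```python
import random

def cell_loss_rate(buffer_size, burst_prob=0.3, max_burst=5,
                   n_slots=100_000, seed=3):
    """Toy model of one output-port buffer: each slot, with probability
    `burst_prob`, a burst of 1..max_burst cells arrives (mean load here
    is 0.3 * 3 = 0.9 cells/slot); one cell departs per slot if the queue
    is nonempty; cells arriving to a full buffer are lost."""
    rng = random.Random(seed)
    q = arrivals = losses = 0
    for _ in range(n_slots):
        if rng.random() < burst_prob:
            for _ in range(rng.randint(1, max_burst)):  # bursty arrivals
                arrivals += 1
                if q < buffer_size:
                    q += 1
                else:
                    losses += 1
        if q > 0:
            q -= 1  # one departure per slot (work conserving)
    return losses / arrivals
```

Sweeping `buffer_size` shows the loss rate falling as the buffer grows, while the worst-case queueing delay (`buffer_size` slots) grows linearly, which is exactly the tradeoff described above.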

Slide 40: Buffer Management
- In a shared memory switch, for example, there is a choice between using dedicated buffers for each port (called partitioned buffering) and using a common pool of buffers shared by all ports (called shared buffering)

Slide 41: Partitioned Buffers [diagram: shared memory statically divided into a dedicated region per port]

Slide 42: Shared Buffers [diagram: one common pool of shared memory used by all ports]

Slide 43: Buffer Mgmt (Cont'd)
- Shared buffering offers MUCH better cell loss performance
- Partitioned buffering is perhaps easier to design and build
- Shared buffering is more complicated to design, build, and control
- Shared is superior (for uniform traffic)
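The cell-loss advantage of sharing can be demonstrated with a toy simulation. The traffic model and parameters are assumptions for illustration: uniform random arrivals to N output queues carved from one memory, same total memory in both schemes:

```python
import random

def loss_rate(shared, n_ports=4, total_buffers=32, n_slots=50_000, seed=7):
    """Toy shared-memory switch: each slot, each of 2*n_ports potential
    cells arrives with probability 0.425 (per-port load about 0.85) and
    picks a uniformly random output port; each nonempty port queue
    transmits one cell per slot. Partitioned: every port owns
    total_buffers // n_ports buffers. Shared: one common pool."""
    rng = random.Random(seed)
    q = [0] * n_ports
    per_port = total_buffers // n_ports
    arrivals = losses = 0
    for _ in range(n_slots):
        for _ in range(2 * n_ports):
            if rng.random() < 0.425:
                arrivals += 1
                p = rng.randrange(n_ports)
                room = (sum(q) < total_buffers) if shared else (q[p] < per_port)
                if room:
                    q[p] += 1
                else:
                    losses += 1  # cell dropped: no buffer available
        for p in range(n_ports):
            if q[p]:
                q[p] -= 1        # one departure per nonempty port
    return losses / arrivals
```

For the same total memory, the shared pool can absorb a momentary overload on any one port, so its loss rate is far lower; the price is the more complex memory arbitration and control noted above.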

Slide 44: Summary
- There is a wide range of choices to make for buffering in ATM switches
- Main issues:
  - buffer location
  - buffer size
  - buffer management strategy
- Major impact on performance

