
1 CSIT560 by M. Hamdi 1 QoS Algorithms

2 CSIT560 by M. Hamdi 2 Principles for QoS Guarantees
Consider a phone application at 1 Mbps and an FTP application sharing a 1.5 Mbps link.
- Bursts of FTP traffic can congest the router and cause audio packets to be dropped.
- We want to give priority to audio over FTP.
PRINCIPLE 1: Marking of packets is needed for the router to distinguish between different classes, and a new router policy is needed to treat packets accordingly.

3 CSIT560 by M. Hamdi 3 Principles for QoS Guarantees (more)
Applications may misbehave (e.g., the audio application sends packets at a rate higher than the 1 Mbps assumed above).
PRINCIPLE 2: provide protection (isolation) for one class from other classes (fairness).

4 CSIT560 by M. Hamdi 4 QoS Metrics
What are we trying to control? Four metrics are used to describe a packet's transmission through a network: bandwidth, delay, jitter, and loss.
Using a pipe analogy (the path as perceived by a packet between hosts A and B), for each packet:
- Bandwidth is the perceived width of the pipe
- Delay is the perceived length of the pipe
- Jitter is the perceived variation in the length of the pipe
- Loss is the perceived leakiness of the pipe

5 CSIT560 by M. Hamdi 5 Internet QoS Overview
- Integrated Services
- Differentiated Services
- MPLS Traffic Engineering

6 CSIT560 by M. Hamdi 6 QoS: State Information
No state vs. soft state vs. hard state:
- No state: IP, packet switched
- Soft state: IntServ/RSVP; DiffServ keeps no state inside the network, only flow information at the edges
- Hard state: ATM, circuit switched, dedicated circuit

7 CSIT560 by M. Hamdi 7 QoS Router
[Figure: building blocks of a QoS router - classifier, policer, shaper, per-flow queues with queue management, and a scheduler serving the queues.]

8 CSIT560 by M. Hamdi 8 Queuing Disciplines
[Figure: three queueing structures - first come first served (a single FIFO), class-based scheduling (classes 1-4 feeding a scheduler), and per-flow queueing (a classifier directing flows 1..n into separate queues with buffer management and a scheduler).]

9 CSIT560 by M. Hamdi 9 DiffServ
[Figure: a DiffServ domain - traffic is classified and conditioned at the edge into Premium, Gold, Silver, and Bronze classes; interior routers apply the corresponding per-hop behavior (PHB), e.g., LLQ/WRED.]

10 CSIT560 by M. Hamdi 10 Differentiated Services (DS) Field
[Figure: IPv4 header - Version, HLen, TOS, Length, Identification, Flags, Fragment offset, TTL, Protocol, Header checksum, Source address, Destination address, Data.]
The DS field reuses the first 6 bits (bits 0-5) of the former Type of Service (TOS) byte to determine the PHB.
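As a small illustration (not from the slides): since the DS field occupies the upper six bits of the old TOS byte, the DSCP value can be read by shifting the byte right by two bits.

    def dscp_from_tos(tos_byte: int) -> int:
        """Return the 6-bit DSCP (DS field) from the legacy TOS byte."""
        return (tos_byte >> 2) & 0x3F

    # Example: TOS byte 0xB8 corresponds to DSCP 46 (Expedited Forwarding).
    print(dscp_from_tos(0xB8))  # 46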

11 CSIT560 by M. Hamdi 11 Integrated Services: RSVP and Traffic Flow Example
[Figure: sender A reaches receiver B across routers R1, R2, R3, R4; PATH messages flow A -> R1 -> R2 -> R3 -> B, recording Phop = A, R1, R2 at each hop; the RESV message and the data follow the reverse path.]
- The PATH message leaves the IP address of the previous hop (Phop) in each router. It contains the Sender Tspec, Sender Template, and Adspec.
- Admission/policy control determines whether the node has sufficient available resources to handle the request. If the request is granted, bandwidth and buffer space are allocated.
- A RESV message containing a flowspec and a filterspec must be sent along the exact reverse path. The flowspec (Tspec/Rspec) defines the QoS and the traffic characteristics being requested.
- RSVP maintains soft-state information (DstAddr, Protocol, DstPort) in the routers.
- Routers enforce MF (multi-field) classification and put packets in the appropriate queue; the scheduler then serves these queues.
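A minimal sketch of the soft-state idea (my own illustration, not from the slides; the 90-second lifetime is an assumed value): reservations keyed by (DstAddr, Protocol, DstPort) are kept only as long as they are periodically refreshed.

    import time

    class RsvpSoftState:
        """Toy soft-state table: entries must be refreshed or they time out."""
        def __init__(self, lifetime_s: float = 90.0):
            self.lifetime_s = lifetime_s
            self.entries = {}  # (dst_addr, protocol, dst_port) -> (flowspec, last_refresh)

        def refresh(self, key, flowspec):
            # A PATH/RESV refresh re-installs the reservation and resets its timer.
            self.entries[key] = (flowspec, time.time())

        def expire(self):
            # Remove reservations whose refresh timer has run out.
            now = time.time()
            self.entries = {k: v for k, v in self.entries.items()
                            if now - v[1] <= self.lifetime_s}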

12 CSIT560 by M. Hamdi 12 IntServ: Per-flow classification Sender Receiver

13 CSIT560 by M. Hamdi 13 Per-flow buffer management Sender Receiver

14 CSIT560 by M. Hamdi 14 Per-flow scheduling Sender Receiver

15 CSIT560 by M. Hamdi 15 Max-Min Fairness An allocation is fair if it satisfies max-min fairness –each connection gets no more than what it wants –the excess, if any, is equally shared

16 CSIT560 by M. Hamdi 16 Max-Min Fairness A common way to allocate flows N flows share a link of rate C. Flow f wishes to send at rate W(f), and is allocated rate R(f). 1.Pick the flow, f, with the smallest requested rate. 2.If W(f) < C/N, then set R(f) = W(f). 3.If W(f) > C/N, then set R(f) = C/N. 4.Set N = N – 1. C = C – R(f). 5.If N>0 goto 1.

17 CSIT560 by M. Hamdi 17 Max-Min Fairness: An Example
Four flows with demands W(f1) = 0.1, W(f2) = 0.5, W(f3) = 10, W(f4) = 5 share a link of capacity C = 1.
Round 1: Set R(f1) = 0.1
Round 2: Set R(f2) = 0.9/3 = 0.3
Round 3: Set R(f4) = 0.6/2 = 0.3
Round 4: Set R(f3) = 0.3/1 = 0.3
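A small Python sketch of the progressive-filling procedure from slide 16 (my own illustration), checked against the slide-17 example:

    def max_min_allocation(demands, capacity):
        """Allocate a link of rate `capacity` among flows with the given demands."""
        remaining = dict(demands)          # flows still to be allocated
        c, n = capacity, len(demands)
        alloc = {}
        while n > 0:
            f = min(remaining, key=remaining.get)   # flow with the smallest request
            share = c / n
            alloc[f] = min(remaining[f], share)     # gets its demand or the fair share
            c -= alloc[f]
            n -= 1
            del remaining[f]
        return alloc

    # Slide-17 example: C = 1, demands 0.1, 0.5, 10, 5 -> allocations 0.1, 0.3, 0.3, 0.3
    print(max_min_allocation({"f1": 0.1, "f2": 0.5, "f3": 10, "f4": 5}, 1.0))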

18 CSIT560 by M. Hamdi 18 Fair Queueing
1. Packets belonging to a flow are placed in a FIFO. This is called "per-flow queueing".
2. FIFOs are scheduled one bit at a time, in a round-robin fashion.
3. This is called Bit-by-Bit Fair Queueing.
[Figure: flows 1..N are classified into per-flow FIFOs and scheduled with bit-by-bit round robin.]

19 CSIT560 by M. Hamdi 19 Weighted Bit-by-Bit Fair Queueing
Likewise, flows can be allocated different rates by servicing a different number of bits for each flow during each round.
Example (link of rate C = 1): R(f1) = 0.1, R(f2) = 0.3, R(f3) = 0.3, R(f4) = 0.3.
Order of service for the four queues: ... f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, ...
Also called "Generalized Processor Sharing (GPS)".

20 CSIT560 by M. Hamdi 20 Understanding Bit-by-Bit WFQ
Four queues share 4 bits/sec of bandwidth with weights 3:2:2:1.
Arriving packets (sizes in bits): A1 = 4 and A2 = 2 (queue A), B1 = 3 (queue B), C1 = 1, C2 = 1, C3 = 2 (queue C), D1 = 1 and D2 = 2 (queue D).
Round 1: D1, C2, and C1 depart at R = 1.
[Figure: the four weighted queues and the timeline of bit-by-bit service over rounds 0-6.]

21 CSIT560 by M. Hamdi 21 Understanding Bit-by-Bit WFQ (cont.)
Round 2: B1, A2, and A1 depart at R = 2; D2 and C3 also depart at R = 2.
Departure order for packet-by-packet WFQ: sort packets by their bit-by-bit finish times.
[Figure: the same four weighted queues, with the packet departure order obtained by sorting packets by finish time.]

22 CSIT560 by M. Hamdi 22 Packetized Weighted Fair Queueing (WFQ)
Problem: We need to serve a whole packet at a time.
Solution:
1. Determine what time a packet, p, would complete if we served the flows bit-by-bit. Call this the packet's finishing time, Fp.
2. Serve packets in order of increasing finishing time.
Also called "Packetized Generalized Processor Sharing (PGPS)".
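As a rough sketch (my own simplification, not the full PGPS machinery): each flow keeps the finish tag of its last packet, a new packet's tag is computed from the current GPS virtual time V(t), and packets are served in increasing tag order. Computing V(t) exactly is what makes real WFQ implementations hard, as the next slide notes.

    import heapq

    class WfqScheduler:
        """Toy packetized fair queueing: serve packets in order of finish tag."""
        def __init__(self):
            self.last_finish = {}   # flow id -> finish tag of its previous packet
            self.heap = []          # (finish_tag, flow, packet length)

        def enqueue(self, flow, length_bits, weight, virtual_time):
            # F_p = max(F_previous, V(arrival)) + L / w
            start = max(self.last_finish.get(flow, 0.0), virtual_time)
            finish = start + length_bits / weight
            self.last_finish[flow] = finish
            heapq.heappush(self.heap, (finish, flow, length_bits))

        def dequeue(self):
            # Transmit the queued packet with the smallest finish tag.
            return heapq.heappop(self.heap) if self.heap else None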

23 CSIT560 by M. Hamdi 23 WFQ is Complex
There may be hundreds to millions of flows; the linecard needs to manage a FIFO queue per flow.
The finishing time must be calculated for each arriving packet, and packets must be sorted by their departure time.
Most of the effort in QoS scheduling algorithms goes into practical algorithms that approximate WFQ.
[Figure: an egress linecard with per-flow queues 1..N; Fp is calculated for each arriving packet, and the packet with the smallest Fp departs.]

24 CSIT560 by M. Hamdi 24 When can we Guarantee Delays? Theorem If flows are leaky bucket constrained and all nodes employ GPS (WFQ), then the network can guarantee worst-case delay bounds to sessions.
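For intuition (a hedged illustration, not from the slide): the classic single-node form of this result says that a flow constrained by a (sigma, rho) leaky bucket and guaranteed a GPS service rate g >= rho sees a worst-case queueing delay of roughly sigma / g, ignoring packetization effects.

    def gps_delay_bound(sigma_bits: float, g_bps: float) -> float:
        """Single-node worst-case delay (seconds) for a (sigma, rho) flow served at rate g >= rho."""
        return sigma_bits / g_bps

    # Example: a burst of 16 kbit with a guaranteed rate of 1 Mbps -> 16 ms bound.
    print(gps_delay_bound(16_000, 1_000_000))  # 0.016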

25 CSIT560 by M. Hamdi 25 Traffic Managers: Active Queue Management Algorithms

26 CSIT560 by M. Hamdi 26 Queuing Disciplines
Each router must implement some queuing discipline.
Queuing allocates both bandwidth and buffer space:
- Bandwidth: which packet to serve (transmit) next - this is scheduling
- Buffer space: which packet to drop next (when required) - this is buffer management
Queuing affects the delay of a packet (QoS).

27 CSIT560 by M. Hamdi 27 Queuing Disciplines
[Figure: traffic sources feed traffic classes A, B, and C; buffer management decides which packets to drop, and scheduling decides which packet to send next.]

28 CSIT560 by M. Hamdi 28 Active Queue Management
[Figure: a router queue between an inbound and an outbound link carrying TCP traffic. With a plain drop-tail queue, packets are dropped only when the queue overflows; with AQM, the router signals congestion (by dropping or marking) before the queue is full, and the TCP sources back off.]
Advantages:
- Reduce packet losses (due to queue overflow)
- Reduce queuing delay

29 CSIT560 by M. Hamdi 29 QoS Router
[Figure: the QoS router building blocks again - classifier, policer, shaper, per-flow queues with queue management, and a scheduler.]

30 CSIT560 by M. Hamdi 30 Packet Drop Dimensions
- Aggregation: from a single class up to per-connection state (class-based queuing)
- Drop position: head, tail, or a random location in the queue
- When to drop: early drop vs. overflow drop

31 CSIT560 by M. Hamdi 31 Typical Internet Queuing
FIFO + drop-tail: the simplest choice, used widely in the Internet.
- FIFO (first-in-first-out) implies a single class of traffic
- Drop-tail: arriving packets get dropped when the queue is full, regardless of flow or importance
Important distinction:
- FIFO is the scheduling discipline
- Drop-tail is the drop policy (buffer management)

32 CSIT560 by M. Hamdi 32 FIFO + Drop-tail Problems
FIFO issues (irrespective of the aggregation level):
- No isolation between flows: the full burden is on end-to-end control (e.g., TCP)
- No policing: send more packets, get more service
Drop-tail issues:
- Routers are forced to have large queues to maintain high utilization
- Larger buffers mean larger steady-state queues and delays
- Synchronization: end hosts react to the same events because packets tend to be lost in bursts
- Lock-out: a side effect of burstiness and synchronization is that a few flows can monopolize queue space

33 CSIT560 by M. Hamdi 33 Synchronization Problem: Because of Congestion Avoidance in TCP
[Figure: TCP congestion window (cwnd) versus time in RTTs - the window grows 1, 2, 4, ... during slow start, then linearly (W, W+1, ...) during congestion avoidance, and drops from W* to W*/2 on loss.]

34 CSIT560 by M. Hamdi 34 Synchronization Problem
[Figure: total queue size versus time, oscillating below the maximum queue size.]
All TCP connections reduce their transmission rate when the queue crosses its maximum size. The connections then increase their rates again through slow start and congestion avoidance, and later reduce them again. This makes the network traffic fluctuate.

35 CSIT560 by M. Hamdi 35 Global Synchronization Problem
Can result in very low throughput during periods of congestion.
[Figure: queue occupancy oscillating against the maximum queue length.]

36 CSIT560 by M. Hamdi 36 Global Synchronization Problem
TCP congestion control:
- Synchronization leads to bandwidth under-utilization
- Persistently full queues lead to large queueing delays
- Cannot provide (weighted) fairness to traffic flows - inherently designed for responsive flows
[Figure: rates of flow 1 and flow 2 over time; their aggregate load oscillates around the bottleneck rate.]

37 CSIT560 by M. Hamdi 37 Lock-out Problem
Lock-out: in some situations tail drop allows a single connection or a few flows (e.g., misbehaving UDP flows) to monopolize queue space, preventing other connections from getting room in the queue. This "lock-out" phenomenon is often the result of synchronization.
[Figure: queue occupancy against the maximum queue length.]

38 CSIT560 by M. Hamdi 38 Bias Against Bursty Traffic
With tail drop, bursty traffic tends to be dropped in bunches, which is not fair to bursty connections.
[Figure: queue occupancy against the maximum queue length.]

39 CSIT560 by M. Hamdi 39 Active Queue Management Goals Solve lock-out and full-queue problems –No lock-out behavior –No global synchronization –No bias against bursty flow Provide better QoS at a router –Low steady-state delay –Lower packet dropping

40 CSIT560 by M. Hamdi 40 RED (Random Early Detection)
FIFO scheduling.
Buffer management:
- Probabilistically discard packets
- The discard probability is computed as a function of the average queue length
[Figure: discard probability versus average queue length - zero below min_th, rising between min_th and max_th, and 1 from max_th up to queue_len.]

41 CSIT560 by M. Hamdi 41 Random Early Detection (RED)

42 CSIT560 by M. Hamdi 42 RED Operation
[Figure: a queue with the min and max thresholds marked on the average queue length, and the drop probability P(drop) rising from 0 at minthresh to MaxP at maxthresh, then jumping to 1.0.]

43 CSIT560 by M. Hamdi 43 RED (Random Early Detection): Define Two Threshold Values
FIFO scheduling; make use of the average queue length, with a minimum and a maximum threshold.
Case 1: average queue length < min. threshold value - admit the new packet.

44 CSIT560 by M. Hamdi 44 RED (Cont'd)
Case 2: average queue length is between the min. and max. threshold values - drop the new packet with probability p, or admit it with probability 1 - p.

45 CSIT560 by M. Hamdi 45 Random Early Detection Algorithm
for each packet arrival:
    calculate the average queue size: avg = (1 - w_q) * avg + w_q * q
    if avg <= min_th:
        do nothing (enqueue the packet)
    else if min_th < avg <= max_th:
        calculate the drop probability p = max_P * (avg - min_th) / (max_th - min_th)
        drop the arriving packet with probability p
    else if avg > max_th:
        drop the arriving packet
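A runnable Python sketch of this algorithm (my own illustration; the parameter values are arbitrary). Real RED implementations also space out drops using a count since the last drop, which is omitted here.

    import random

    class RedQueue:
        """Minimal RED drop decision: EWMA of the queue length + linear drop probability."""
        def __init__(self, min_th=100, max_th=200, max_p=0.1, w_q=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.max_p, self.w_q = max_p, w_q
            self.avg = 0.0

        def should_drop(self, current_queue_len: int) -> bool:
            # Update the exponentially weighted moving average of the queue length.
            self.avg = (1 - self.w_q) * self.avg + self.w_q * current_queue_len
            if self.avg <= self.min_th:
                return False
            if self.avg > self.max_th:
                return True
            # Drop probability grows linearly between the two thresholds.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            return random.random() < p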

46 Random Early Detection (RED) Packet Drop
[Figure: average queue length over time against the min threshold, max threshold, and max queue length, showing the three regions - no drop below the min threshold, probabilistic early drop between the thresholds, and forced drop above the max threshold - together with the drop probability.]

47 CSIT560 by M. Hamdi 47 Active Queue Management: Random Early Detection (RED)
The weighted average accommodates bursty traffic.
Probabilistic drops:
- avoid consecutive drops
- are proportional to bandwidth utilization (the drop rate is equal for all flows)
[Figure: average queue length over time with the min threshold, max threshold, and max queue size, showing the no-drop, probabilistic-drop, and forced-drop regions and the corresponding drop probability.]

48 CSIT560 by M. Hamdi 48 RED Vulnerable to Misbehaving Flows
[Figure: TCP throughput (KBytes/sec) versus time (0-100 seconds) under FIFO and RED; throughput collapses during a UDP blast.]

49 CSIT560 by M. Hamdi 49 Effectiveness of RED - Lock-Out & Global Synchronization
Packets are randomly dropped; each flow has the same probability of being discarded.

50 CSIT560 by M. Hamdi 50 Effectiveness of RED - Full Queue & Bias Against Bursty Traffic
Drop packets probabilistically in anticipation of congestion, not when the queue is full.
Use q_avg to decide the packet dropping probability: this allows instantaneous bursts.

51 CSIT560 by M. Hamdi 51 What QoS does RED Provide?
Lower buffer delay, and hence good interactive service: q_avg is controlled to be small.
Given responsive flows, packet dropping is reduced: early congestion indication allows traffic to throttle back before congestion.
RED provides small delay, small packet loss, and high throughput (when flows are responsive).

52 CSIT560 by M. Hamdi 52 Weighted RED (WRED) WRED provides separate thresholds and weights for different IP precedences, allowing us to provide different quality of service to different traffic Lower priority class traffic may be dropped more frequently than higher priority traffic during periods of congestion

53 CSIT560 by M. Hamdi 53 WRED (Cont'd): Random Dropping
[Figure: high-, medium-, and low-priority traffic sharing a queue; lower-priority packets are randomly dropped more aggressively.]

54 CSIT560 by M. Hamdi 54 Congestion Avoidance: Weighted Random Early Detection (WRED)
WRED adds per-class queue thresholds for differential treatment. Two classes are shown; any number of classes can be defined.
[Figure: probability of packet discard versus average queue depth - the standard class has a lower minimum threshold than the premium class, and both share a maximum threshold.]
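A sketch of how per-class thresholds could extend the RED logic sketched earlier (my own illustration; the class names and threshold values are made up):

    # Per-class RED parameters: lower-priority classes start dropping earlier.
    WRED_PROFILES = {
        "premium":  {"min_th": 150, "max_th": 200, "max_p": 0.05},
        "standard": {"min_th": 100, "max_th": 200, "max_p": 0.20},
    }

    def wred_drop_probability(avg_queue_len: float, traffic_class: str) -> float:
        p = WRED_PROFILES[traffic_class]
        if avg_queue_len <= p["min_th"]:
            return 0.0
        if avg_queue_len >= p["max_th"]:
            return 1.0
        return p["max_p"] * (avg_queue_len - p["min_th"]) / (p["max_th"] - p["min_th"])

    # At the same average queue depth, standard traffic is dropped more often than premium.
    print(wred_drop_probability(160, "premium"), wred_drop_probability(160, "standard"))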

55 CSIT560 by M. Hamdi 55 Problems with (W)RED – unresponsive flows

56 CSIT560 by M. Hamdi 56 Vulnerability to Misbehaving Flows TCP performance on a 10 Mbps link under RED in the face of a “UDP” blast

57 CSIT560 by M. Hamdi 57 Vulnerability to Misbehaving Flows
Consider the following example network:
[Figure: m TCP sources S(1)..S(m) and n UDP sources S(m+1)..S(m+n) connect to router R1 over 100 Mbps links; R1 connects to R2 over a 10 Mbps bottleneck link, with the corresponding sinks behind R2.]

58 CSIT560 by M. Hamdi 58 Vulnerability to Misbehaving Flows

59 CSIT560 by M. Hamdi 59 Vulnerability to Misbehaving Flows
[Figure: RED queue size versus time - the delay is bounded and global synchronization is solved.]

60 CSIT560 by M. Hamdi 60 Unfairness of RED
With an unresponsive flow (such as UDP): in a mix of 32 TCP flows and 1 UDP flow, the unresponsive flow occupies over 95% of the bandwidth.

61 CSIT560 by M. Hamdi 61 Scheduling & Queue Management
What do routers want to do?
- Isolate unresponsive flows (e.g., UDP)
- Provide quality of service to all users
Two ways to do it:
- Scheduling algorithms: e.g., WFQ, WRR
- Queue management algorithms: e.g., RED, FRED, SRED

62 CSIT560 by M. Hamdi 62 The Setup and Problems
In a congested network with many users, QoS requirements differ.
Problem: allocate bandwidth fairly.

63 CSIT560 by M. Hamdi 63 Approach 1: Network-Centric
Network node: Weighted Fair Queueing (WFQ). User traffic: any type.
Problem: complex implementation - lots of work per flow.

64 CSIT560 by M. Hamdi 64 Approach 2: User-Centric
Network node: simple FIFO buffer with active queue management (AQM), e.g., RED. User traffic: congestion-aware (e.g., TCP).
Problem: requires user cooperation.

65 CSIT560 by M. Hamdi 65 Current Trend
Network node: simple FIFO buffer, with AQM schemes enhanced to provide fairness by preferentially dropping packets. User traffic: any type.

66 CSIT560 by M. Hamdi 66 Packet Dropping Schemes
- Size-based schemes: drop decision based on the size of the FIFO queue (e.g., RED)
- Content-based schemes: drop decision based on the current content of the FIFO queue (e.g., CHOKe)
- History-based schemes: keep a history of packet arrivals/drops to guide the drop decision (e.g., SRED, RED with penalty box, AFD)

67 CSIT560 by M. Hamdi 67 CHOKe (no state information)

68 CSIT560 by M. Hamdi 68 Random Sampling from the Queue
A randomly chosen packet is more likely to be from the unresponsive flow, so unresponsive flows can't fool the system.

69 CSIT560 by M. Hamdi 69 Comparison of Flow ID
Compare the flow ID of the sampled packet with that of the incoming packet:
- More accurate
- Reduces the chance of dropping packets from TCP-friendly flows

70 CSIT560 by M. Hamdi 70 Dropping Mechanism
Drop packets (both the incoming packet and the matching samples):
- More arrivals mean more drops
- Gives users a disincentive to send more

71 CSIT560 by M. Hamdi 71 CHOKe (Cont'd)
Case 1: average queue length < min. threshold value - admit the new packet.

72 CSIT560 by M. Hamdi 72 CHOKe (Cont'd)
Case 2: average queue length is between the min. and max. threshold values.
A packet is randomly chosen from the queue and compared with the newly arrived packet.
- If they are from different flows, the same logic as in RED applies.
- If they are from the same flow, both packets are dropped.

73 CSIT560 by M. Hamdi 73 CHOKe (Cont'd)
Case 3: average queue length > max. threshold value.
A random packet is chosen from the queue for comparison.
- If they are from different flows, the new packet is dropped.
- If they are from the same flow, both packets are dropped.
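Putting the three cases together, a compact Python sketch of the CHOKe arrival test (my own illustration; the RED average, thresholds, and drop probability are assumed to come from a RED implementation like the one sketched earlier):

    import random

    def choke_on_arrival(queue, pkt, avg, min_th, max_th, red_drop_prob):
        """Return the list of packets to drop when `pkt` arrives.
        `queue` is a list of queued packets; each packet has a `flow` attribute."""
        if avg <= min_th or not queue:
            return []                          # Case 1: admit the new packet
        victim = random.choice(queue)          # draw one packet from the queue
        if victim.flow == pkt.flow:
            return [victim, pkt]               # same flow: drop both
        if avg <= max_th:
            # Case 2: different flows -> fall back to the RED decision
            return [pkt] if random.random() < red_drop_prob else []
        return [pkt]                           # Case 3: above max_th, drop the arrival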

74 CSIT560 by M. Hamdi 74 Simulation Setup

75 CSIT560 by M. Hamdi 75 Network Setup Parameters
- 32 TCP flows, 1 UDP flow
- All TCPs' maximum window size = 300
- All links have a propagation delay of 1 ms
- FIFO buffer size = 300 packets
- All packet sizes = 1 KByte
- RED: (min_th, max_th) = (100, 200) packets

76 CSIT560 by M. Hamdi 76 32 TCP, 1 UDP (one sample)

77 CSIT560 by M. Hamdi 77 32 TCP, 5 UDP (5 samples)

78 CSIT560 by M. Hamdi 78 How Many Samples to Take?
Take a different number of samples for different values of the average queue length (Qlen_avg):
- The number of samples decreases when Qlen_avg is close to min_th
- The number of samples increases when Qlen_avg is close to max_th
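The slides do not give the exact mapping, but one simple choice that matches this description is to interpolate the sample count linearly between the two thresholds (purely an assumption on my part, including the 1-to-5 range):

    def num_choke_samples(qlen_avg, min_th, max_th, min_samples=1, max_samples=5):
        """More samples as the average queue length moves from min_th toward max_th."""
        if qlen_avg <= min_th:
            return min_samples
        if qlen_avg >= max_th:
            return max_samples
        frac = (qlen_avg - min_th) / (max_th - min_th)
        return round(min_samples + frac * (max_samples - min_samples))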

79 CSIT560 by M. Hamdi 79 32 TCP, 5 UDP (self-adjusting)

80 CSIT560 by M. Hamdi 80 Two Problems of CHOKe
Problem I: unfairness among UDP flows of different rates.
Problem II: difficulty in automatically choosing how many packets to drop.

81 CSIT560 by M. Hamdi 81 SAC (Self-Adjustable CHOKe)
Tries to solve the two problems mentioned previously.

82 CSIT560 by M. Hamdi 82 SAC
Problem 1: unfairness among UDP flows of different rates (e.g., when k = 1, UDP flow 31 (6 Mbps) gets only 1/3 of the throughput of UDP flow 32 (1 Mbps), and when k = 10, the throughput of UDP flow 31 is almost 0).

83 CSIT560 by M. Hamdi 83 SAC Problem 2: Difficulty in choosing automatically how many to drop (when k = 4, UDPs occupy most of the BW. When k =10, relatively good fair sharing, and when k = 20, TCPs get most of the BW).

84 CSIT560 by M. Hamdi 84 SAC
Solutions:
1. Search from the tail of the queue for a packet with the same flow ID and drop that packet instead of dropping at random - the higher a flow's rate, the more likely its packets are to gather at the rear of the queue, so the queue occupancy becomes more evenly distributed among the flows.
2. Automate the process of determining k according to the traffic status (the number of active flows and the number of UDP flows).

85 CSIT560 by M. Hamdi 85 SAC
When an incoming UDP packet is compared with a randomly selected packet: if they are of the same flow, P is updated as P <- (1 - w_p) * P + w_p; if they are of different flows, P is updated as P <- (1 - w_p) * P. If P is small, there are more competing flows, and we should increase the value of k.
When a packet arrives: if it is a UDP packet, R is updated as R <- (1 - w_r) * R + w_r; if it is a TCP packet, R is updated as R <- (1 - w_r) * R. If R is large, there is a large amount of UDP traffic, and we should increase k to drop more UDP packets.
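A small sketch of these two exponentially weighted estimators (my own illustration; the weights w_p and w_r are assumed values, and the rule mapping P and R to k is left out because the slide only says k should increase when P is small or R is large):

    class SacEstimators:
        """Track the two EWMA signals SAC uses to tune k."""
        def __init__(self, w_p=0.01, w_r=0.01):
            self.w_p, self.w_r = w_p, w_r
            self.P = 0.0   # hit rate of same-flow matches (low -> many competing flows)
            self.R = 0.0   # fraction of arrivals that are UDP (high -> lots of UDP traffic)

        def on_udp_comparison(self, same_flow: bool):
            self.P = (1 - self.w_p) * self.P + (self.w_p if same_flow else 0.0)

        def on_arrival(self, is_udp: bool):
            self.R = (1 - self.w_r) * self.R + (self.w_r if is_udp else 0.0)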

86 CSIT560 by M. Hamdi 86 SAC simulation Throughput per flow (30 TCP flows and 2 UDP flows of different rate)

87 CSIT560 by M. Hamdi 87 SAC simulation Throughput per flow (30 TCP flows and 4 UDP flows of the same rate).

88 CSIT560 by M. Hamdi 88 SAC simulation Throughput per flow (20 TCP flows and 4 UDP flows of different rates)

89 CSIT560 by M. Hamdi 89 AQM Using “Partial” state information

90 CSIT560 by M. Hamdi 90 Congestion Management and Avoidance: Goal
- Provide fair bandwidth allocation similar to WFQ
- Be as simple to implement as RED
[Figure: a simplicity-versus-fairness plane - RED is simple but unfair, WFQ is fair but complex, and the ideal scheme is both simple and fair.]

91 CSIT560 by M. Hamdi 91 AQM Based on Capture-Recapture
Objective: achieve fairness close to that of max-min fairness:
1. If W(f) < C/N, then set R(f) = W(f).
2. If W(f) > C/N, then set R(f) = C/N.
Formulation:
- Ri: the sending rate of flow i
- Di: the drop probability of flow i
- Ideally, we want Ri * (1 - Di) = R_fair (the equal share), i.e., Di = (1 - R_fair/Ri)^+ (that is, drop the excess).
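In code, the target drop probability for each flow is just the positive part of 1 - R_fair/Ri (a one-line sketch of the formulation above):

    def target_drop_probability(r_i: float, r_fair: float) -> float:
        """Drop exactly the traffic a flow sends above its fair share."""
        return max(0.0, 1.0 - r_fair / r_i) if r_i > 0 else 0.0

    # A flow sending at 3x the fair share should see about 2/3 of its packets dropped.
    print(target_drop_probability(3.0, 1.0))  # ~0.667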

92 CSIT560 by M. Hamdi 92 AQM Based on Capture-Recapture
[Figure: incoming packets enter the AQM module, which estimates each flow's sending rate and the fair share; an adjustment mechanism then produces a fair allocation of bandwidth.]
The key question is: how do we estimate the sending rate (Ri) and the fair share (R_fair)?

93 CSIT560 by M. Hamdi 93 Capture-Recapture Models
The CR models were originally developed for estimating demographic parameters of animal populations (e.g., population size, number of species, etc.).
- It is an extremely useful method where inspecting the whole state space is infeasible or very costly.
- Numerous models have been developed for various situations.
The CR models are used in many diverse fields, ranging from software inspection to epidemiology. The method is based on several key ideas: animals are captured randomly, marked, released, and then recaptured randomly from the population.

94 CSIT560 by M. Hamdi 94

95 CSIT560 by M. Hamdi 95

96 CSIT560 by M. Hamdi 96 Time is then allowed for the marked individuals to mix with the unmarked individuals.

97 CSIT560 by M. Hamdi 97

98 CSIT560 by M. Hamdi 98 Then another sample is captured.

99 CSIT560 by M. Hamdi 99

100 CSIT560 by M. Hamdi 100 Capture-Recapture Model
Unknown number of fish in a lake: catch a sample and mark them, let them loose, then recapture a sample and look for marks to estimate the population size.
n1 = number in the first sample = 15
n2 = number in the second sample = 10
n12 = number in both samples = 5
N = total population size; assume that n1/N = n12/n2, therefore 15/N = 5/10 and N = (10 x 15) / 5 = 30.

101 CSIT560 by M. Hamdi 101 Capture-Recapture Models
Simple model: estimate a homogeneous population of N animals:
- n1 animals are captured (and marked),
- n2 animals are later recaptured, and
- m2 of these appear to be marked.
Under this simple capture-recapture model (M0): m2/n2 = n1/N.
[Figure: a population of N animals, the first capture of n1, the second capture of n2, and the overlap m2 between them.]
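Solving m2/n2 = n1/N for N gives the classic Lincoln-Petersen estimator; a one-line sketch, checked against the fish example on the previous slide:

    def m0_population_estimate(n1: int, n2: int, m2: int) -> float:
        """Estimate the population size N from two captures under the M0 model."""
        return n1 * n2 / m2

    print(m0_population_estimate(15, 10, 5))  # 30.0, as in the lake example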

102 CSIT560 by M. Hamdi 102 Capture-Recapture Models
The capture probability refers to the chance that an individual animal gets caught. M0 implies that the capture probability is the same for all animals - '0' refers to a constant capture probability.
Under the Mh model, capture probabilities vary by animal, for reasons such as differences in species, sex, or age - 'h' refers to heterogeneity.

103 CSIT560 by M. Hamdi 103 Capture-Recapture Models
Estimation of N under the Mh model is based on the capture frequency data f1, f2, ..., ft (for t captures):
- f1 is the number of animals that were caught exactly once,
- f2 is the number of animals that were caught exactly twice, etc.
The jackknife estimator of N is computed as a linear combination of these capture frequencies: N = a1*f1 + a2*f2 + ... + at*ft, where the ai are coefficients that are a function of t.
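As one concrete, hedged instance of such a linear combination, the first-order jackknife adds a correction proportional to the number of animals seen only once; the coefficients below are the standard first-order ones and are not taken from the slides.

    def jackknife_first_order(capture_freqs):
        """capture_freqs[k-1] = number of animals caught exactly k times, over t captures."""
        t = len(capture_freqs)
        s = sum(capture_freqs)                 # distinct animals actually observed
        f1 = capture_freqs[0]                  # animals seen exactly once
        return s + (t - 1) / t * f1            # N_hat = S + ((t-1)/t) * f1

    # Example with made-up frequencies over t = 4 captures:
    print(jackknife_first_order([6, 3, 2, 1]))  # 12 observed + 4.5 correction = 16.5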

104 CSIT560 by M. Hamdi 104 AQM Based on Capture-Recapture
The key question is how to estimate the sending rate (Ri) and the fair share (R_fair).
We use an arrival buffer to store recently arrived packet headers (we can control how large this buffer is, and it represents the nature of the flows better than the sending buffer):
1. We estimate Ri using the M0 capture-recapture model.
2. We estimate R_fair using the Mh capture-recapture model (by estimating the number of active flows).

105 CSIT560 by M. Hamdi 105 AQM Based on Capture-Recapture
- Ri is estimated for every arriving packet (we can increase the accuracy by taking multiple captures, or decrease it by capturing packets only periodically). If the arrival buffer is of size B and the number of captured packets of flow i is Ci, then Ri = R * Ci / B, where R is the aggregate arrival rate.
- R_fair may not change every single time slot (as a result, the capturing and the calculation of the number of active flows can be done independently of each incoming packet's arrival): R_fair = R / (number of active flows).
- The capture-recapture model gives us a lot of flexibility in terms of accuracy vs. complexity.
- The same capture-recapture samples can be used to calculate both Ri and R_fair.
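Tying the pieces together, a toy per-packet drop decision based on these estimates (my own sketch; the arrival-buffer sampling and active-flow counting are simplified to counting over the stored headers):

    import random
    from collections import Counter

    def cr_aqm_should_drop(arrival_buffer, pkt_flow, aggregate_rate):
        """arrival_buffer: list of flow ids of recently arrived packets (size B)."""
        counts = Counter(arrival_buffer)
        B = len(arrival_buffer)
        if B == 0 or pkt_flow not in counts:
            return False
        r_i = aggregate_rate * counts[pkt_flow] / B          # Ri = R * Ci / B
        r_fair = aggregate_rate / len(counts)                 # R_fair = R / active flows
        d_i = max(0.0, 1.0 - r_fair / r_i)                    # Di = (1 - R_fair/Ri)^+
        return random.random() < d_i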

106 CSIT560 by M. Hamdi 106 AQM Based on Capture-Recapture
[Figure: incoming packets enter the AQM module; Ri is estimated with the M0 model, R_fair with the Mh CR model, and packets are dropped with probability Di = (1 - R_fair/Ri)^+ to achieve a fair allocation of bandwidth.]

107 CSIT560 by M. Hamdi 107 Performance Evaluation
This is a classical setup that researchers use to evaluate AQM schemes (we can vary many parameters: responsive vs. non-responsive connections, the nature of the responsiveness, link delays, etc.).
[Figure: m TCP sources and n UDP sources S(1)..S(m+n) connect to router R1 over 100 Mbps links; R1 connects to R2 over a 10 Mbps bottleneck link, with the corresponding sinks behind R2.]

108 CSIT560 by M. Hamdi 108 Performance evaluation Estimation of the number of flows

109 CSIT560 by M. Hamdi 109 Performance evaluation Bandwidth allocation comparison between CAP and RED

110 CSIT560 by M. Hamdi 110 Performance evaluation Bandwidth allocation comparison between CAP and SRED

111 CSIT560 by M. Hamdi 111 Performance evaluation Bandwidth allocation comparison between CAP and RED-PD

112 CSIT560 by M. Hamdi 112 Performance evaluation Bandwidth allocation comparison between CAP and SFB

113 CSIT560 by M. Hamdi 113 Normalized Measure of Performance
A single comparison of fairness uses a normalized value ||BW||.
[Equation: the norm ||BW|| of the deviations between the bandwidth b_j received by each flow and its ideal fair share b_i.]
Thus, ||BW|| = 0 for ideal fair sharing.
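The slides do not give the exact formula, but any norm of the per-flow deviations has the stated property; a plausible sketch using the root-mean-square relative deviation (purely an assumption on my part):

    def fairness_norm(received, ideal):
        """||BW||: 0 when every flow gets exactly its ideal fair share."""
        devs = [(b_j - b_i) / b_i for b_j, b_i in zip(received, ideal)]
        return (sum(d * d for d in devs) / len(devs)) ** 0.5

    # Perfectly fair sharing gives 0; skewed sharing gives a larger value.
    print(fairness_norm([1.0, 1.0, 1.0], [1.0, 1.0, 1.0]))   # 0.0
    print(fairness_norm([2.5, 0.25, 0.25], [1.0, 1.0, 1.0])) # > 0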

114 CSIT560 by M. Hamdi 114 Normalized Measure of Performance

115 CSIT560 by M. Hamdi 115 Performance Evaluation: Variable amount of unresponsiveness

