
1 High-speed TCPs

2 Why high-speed TCP?
Suppose that the bottleneck bandwidth is 10 Gbps and the RTT is 200 ms. The bandwidth-delay product is then 166,666 packets (1500 bytes each).
If w(0) = 1, slow start takes over 17 RTTs to fill the pipe; roughly 790 MB is sent during this period.
After a drop, cwnd = 166,666/2 = 83,333 packets, and it takes 83,333 RTTs for additive increase to fill the pipe again. That is about 4.6 hours, during which roughly 15 terabytes (~1.2×10^14 bits) are sent. If the bit-error rate is 10^-12, you will likely see a bit error before the pipe is full again.
The problem is that TCP increases its rate too slowly and decreases it too quickly. High-speed TCP algorithms try to get around this problem.
Goals of high-speed TCPs:
–Rapidly increase the data rate and maintain a high data rate.
–Be fair to standard TCP (e.g., SACK). This is especially important when sharing congested links; we cannot have two protocol stacks, one for congested networks and one for uncongested networks.
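As a sanity check on these numbers, here is a short back-of-the-envelope script (not part of the original slides). It assumes 1500-byte packets and the stated 10 Gbps / 200 ms path, and approximates the average window during recovery as 3/4 of the BDP.

```python
# Back-of-the-envelope check of the slide's numbers (assumed values:
# 10 Gbps bottleneck, 200 ms RTT, 1500-byte packets).
import math

bandwidth_bps = 10e9          # bottleneck bandwidth
rtt_s = 0.2                   # round-trip time
pkt_bits = 1500 * 8

# Bandwidth-delay product in packets: how many packets fit "in flight".
bdp_pkts = bandwidth_bps * rtt_s / pkt_bits
print(f"BDP = {bdp_pkts:,.0f} packets")                     # ~166,666

# Slow start doubles cwnd every RTT, so it needs about log2(BDP) rounds.
print(f"slow start needs ~{math.ceil(math.log2(bdp_pkts))} RTTs")

# After a loss, cwnd is halved and additive increase adds one packet per RTT.
recovery_rtts = bdp_pkts / 2
print(f"recovery takes {recovery_rtts:,.0f} RTTs "
      f"= {recovery_rtts * rtt_s / 3600:.1f} hours")

# Bits sent while recovering (average cwnd is roughly 3/4 of the BDP).
bits_sent = 0.75 * bdp_pkts * pkt_bits * recovery_rtts
print(f"~{bits_sent:.1e} bits sent during recovery")        # ~1.2e14 bits
print(f"expected bit errors at BER 1e-12: {bits_sent * 1e-12:.0f}")
```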

3 Fast TCP – Steve Low
When there is no congestion, the queue is empty. When the window is so large that congestion is occurring, the RTT will be large.
–Recall that if cwnd > bandwidth-delay product, then the queue must hold the excess (cwnd − BDP), which produces queueing delay.
Idea: use increases in queueing delay as an indication of congestion.
–Pros
Every packet is an observation of the RTT, whereas estimating the drop probability requires many more observations.
It is possible to increase cwnd until the RTT has only slightly increased. This means there is some queueing delay, but the queue is not full. Loss-based TCP always fills the queue, which roughly doubles the delay (note that the queue capacity is typically about the same as the bandwidth-delay product).
–Cons
Queueing does not necessarily mean congestion. For example, if packets are sent in a burst, queueing delay occurs, but it does not indicate congestion.
When short-lived flows and long-lived TCP flows share a fat pipe, the RTT is often highly variable, and it can be difficult to determine an average.
When there are many small flows, it is not clear that past RTT is a predictor of future RTT, and hence of congestion.
One must estimate the minimum RTT. The statistical properties of queueing-delay estimators are an open area of research.

4 Fast-TCP algorithm
Find the average queueing delay over one RTT:
–Let S be a smoothed version of the RTT and T(k) be the kth observed RTT.
–S(k+1) = (1−h)S(k) + hT(k)
–h = min(3/w, 1/4), which equals 3/w for w ≥ 12.
If w >> 1, then at the end of a round S(k) ≈ the RTT at the end of the round, not the average over the entire round.
The estimate of the queueing delay is
–q(k) = S(k) − d(k)
–d(k) is an estimate of the minimum RTT (propagation delay). Usually d(k) = min{T(j) : j < k}.
Window control
–w ← min(2w, (1−γ)w + γ(w·d/RTT + α(w, q(k))))
–I'm not sure what RTT is here; perhaps it is S from above. I'm not even sure whether this d is the same as the d above.
–If d = RTT, then w ← min(2w, w + α(w, q(k))) = w + α(w, q(k)) unless α(w, q(k)) is large.
The choice of α(w, q(k)) still seems to be under investigation. The authors suggest that taking α(w, q(k)) to be a constant is acceptable. In this case w increases additively, the same as TCP Reno, but if α(w, q(k)) is large it increases faster. The 2w term acts as a limiter, so the window does not increase too fast; thus, even if α is very large, the window will not grow too quickly. However, this is not fair to standard TCP implementations. (Two stacks?)
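A minimal sketch of the update rules above, assuming a constant α as the slide suggests the authors allow; the function names and parameter values (α = 200, γ = 0.5) are illustrative, not the authors' code.

```python
# Sketch of the FAST-style smoothing and window update described above.

def smooth_rtt(S, T, w):
    """One step of the smoothed-RTT filter: S(k+1) = (1-h)S(k) + h*T(k)."""
    h = min(3.0 / w, 0.25)
    return (1 - h) * S + h * T

def fast_update(w, base_rtt, avg_rtt, alpha=200.0, gamma=0.5):
    """Window control: w <- min(2w, (1-gamma)w + gamma*(w*d/RTT + alpha))."""
    return min(2 * w,
               (1 - gamma) * w + gamma * (w * base_rtt / avg_rtt + alpha))
```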

5 Fast TCP for RTT > d
If RTT > d:
–As RTT increases, (1−γ)w + γ(w·d/RTT + α(w, q(k))) decreases.
–Equilibrium: (1−γ)w + γ(w·d/RTT + α) = w
 γ(w·d/RTT + α) = γw
 w·d/RTT + α = w
 α = w(1 − d/RTT) = w(RTT−d)/RTT = (RTT−d)·w/RTT = (excess packets in queue / bandwidth) × bandwidth = excess packets in queue (same as TCP Vegas).
–w/RTT = bandwidth in equilibrium.
–To see this, note that in equilibrium w is not changing, so RTT is not changing, but RTT > d, so the queue is non-empty. A non-empty queue stays constant only if the arrival rate equals the service rate, so the data rate must be exactly the bottleneck bandwidth.
–If there are many flows, then w_i/RTT_i = f_i × BW = a fraction of the bottleneck bandwidth, and (RTT_i − d) = (excess packets from all flows)/BW, so
–α = f_i × (excess packets from all flows).
–If α is the same for each flow, then f_i is the same for each flow.
–This, of course, only applies in equilibrium. The behavior when the RTT is random has not been analyzed. If delay is ignored (?) then exponential convergence can be proved.
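The equilibrium claim can be checked numerically. The sketch below is not from the slides: it iterates the window update for a single flow against a simple fluid model of the bottleneck, with made-up parameters, and shows the queue settling at about α packets.

```python
# Numeric check that the FAST-style update drives the queue to hold ~alpha
# packets in equilibrium (single flow, idealized fluid bottleneck model).

alpha = 50.0       # packets the flow tries to keep queued (assumed)
gamma = 0.5
d = 0.1            # propagation RTT in seconds (assumed)
bw = 1000.0        # bottleneck bandwidth in packets/second (assumed)

w = 10.0
for _ in range(200):
    queue = max(w - bw * d, 0.0)       # excess packets sit in the queue
    rtt = d + queue / bw               # queueing delay inflates the RTT
    w = min(2 * w, (1 - gamma) * w + gamma * (w * d / rtt + alpha))

print(w, w - bw * d)   # w converges to BDP + alpha; the queue holds ~alpha
```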

6 XCP – eXplicit Control Protocol
Objective: obtain very fast convergence to steady state (a few RTTs).
Idea: the routers provide explicit information to the end hosts.
–In the header, the sender includes its current cwnd and its current estimate of the RTT (i.e., its sending rate).
–The header also contains a field that the routers can update to provide explicit feedback information. Call this feedback H_feedback.
–The routers hold no per-flow state.
–cwnd = max(cwnd + H_feedback, 1 packet)
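A rough sketch of the sender side as described above; the class and field names are illustrative and not taken from an XCP implementation.

```python
# Sender-side behavior sketched from the slide: advertise cwnd and the RTT
# estimate in every header, and apply the routers' H_feedback on each ACK.

PKT_SIZE = 1500  # bytes; assumed MSS

class XcpSender:
    def __init__(self, cwnd=PKT_SIZE, rtt_estimate=0.1):
        self.cwnd = cwnd           # congestion window, in bytes
        self.rtt = rtt_estimate    # current RTT estimate, in seconds

    def build_header(self):
        # Routers read cwnd and rtt, and write their feedback into H_feedback.
        return {"cwnd": self.cwnd, "rtt": self.rtt, "H_feedback": 0.0}

    def on_ack(self, header):
        # cwnd = max(cwnd + H_feedback, 1 packet)
        self.cwnd = max(self.cwnd + header["H_feedback"], PKT_SIZE)
```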

7 XCP – router computation
Two components: an efficiency controller and a fairness controller.
The average RTT is computed and the control is updated every average RTT.
–It is difficult to determine how this is done. If the average RTT is computed over each observed packet, then the estimate is biased toward flows that are sending faster.
–If there are many slow flows with long RTTs but a few fast flows with short RTTs (and these send most of the data), what is the average RTT? The RTT distribution may have a long tail, so the mean might be quite large.
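The bias can be seen with made-up numbers: averaging the RTT over observed packets and averaging it over flows give very different answers.

```python
# Illustration (invented numbers) of the bias discussed above: a per-packet
# RTT average weights the fast, short-RTT senders much more heavily.

flows = [
    # (rtt in seconds, packets observed in one control interval)
    (0.500, 10),    # slow, long-RTT flows
    (0.500, 10),
    (0.010, 2000),  # a fast, short-RTT flow sending most of the packets
]

per_flow_mean = sum(rtt for rtt, _ in flows) / len(flows)
per_packet_mean = (sum(rtt * n for rtt, n in flows)
                   / sum(n for _, n in flows))

print(f"flow-weighted mean RTT:   {per_flow_mean*1000:.1f} ms")   # ~336.7 ms
print(f"packet-weighted mean RTT: {per_packet_mean*1000:.1f} ms") # ~14.9 ms
```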

8 The efficiency controller
Let S be the spare bandwidth.
Let Q be the average queue occupancy.
Let φ be the aggregate feedback:
–φ = α·d·S − β·Q, where d is the average RTT and α, β are constants.
If S < 0, then φ < 0 ⇒ slow down.
If S > 0 and Q = 0, then φ > 0 ⇒ speed up.
The goal is for the total traffic to change by φ. It does not matter which flows make the change; the fairness controller enforces fairness.
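A sketch of this computation with assumed constants (the slide gives neither α nor β; the values below are roughly those reported for XCP) and a made-up example link.

```python
# Efficiency-controller sketch: phi = alpha * d * S - beta * Q.

ALPHA = 0.4      # assumed constant
BETA = 0.226     # assumed constant

def aggregate_feedback(spare_bw, avg_queue, avg_rtt):
    """Desired change in total traffic over the next control interval."""
    return ALPHA * avg_rtt * spare_bw - BETA * avg_queue

# Example: 1 Gbps link carrying 900 Mbps, 100 ms average RTT, 50 KB queue.
spare = (1e9 - 0.9e9) / 8                 # spare bandwidth in bytes/second
phi = aggregate_feedback(spare, 50_000, 0.1)
print(f"phi = {phi:,.0f} bytes")          # positive => tell senders to speed up
```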

9 The fairness controller
Objective:
–Increase each flow's sending rate by the same amount.
–Increase the total sending rate by φ_T.
Suppose that the per-packet increment given to flow i is ξ·RTT_i²/cwnd_i.
What is the change in sending rate for this flow due to this increment?
–Δcwnd/RTT_i = (ξ·RTT_i²/cwnd_i)/RTT_i = ξ·RTT_i/cwnd_i
Suppose that, over a period of d seconds, each packet of flow i receives an increment of ξ·RTT_i²/cwnd_i. Then the total change in rate is
–Σ_{all packets observed during period d} ξ·RTT_i/cwnd_i.
If we want a total change in rate of φ_T, set ξ = φ_T / Σ_{all packets observed during period d} RTT_i/cwnd_i.
Claim: if the per-packet increment is ξ·RTT_i²/cwnd_i, then after d seconds each flow changes its sending rate by the same amount.
–A flow sends at a rate of cwnd_i/RTT_i packets per second.
–So its change in sending rate is (cwnd_i/RTT_i)·(ξ·RTT_i/cwnd_i) = ξ per second, the same for every flow.
Now suppose that each flow's per-packet increment is δ_i = φ_T·RTT_i²/(d·cwnd_i). What is the total change in the arrival rate?
–Each flow's rate changes by (d·cwnd_i/RTT_i)·δ_i/RTT_i = φ_T, so the total change is φ_T per flow (N·φ_T for N flows). Normalizing by Σ RTT_i/cwnd_i over all observed packets, as above, is what makes the total come out to exactly φ_T without keeping per-flow state.
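The bookkeeping above can be verified with a short script (illustrative, not the XCP reference code): it picks ξ from the packet-level sum and confirms that every flow's rate change is equal and that the changes total φ_T. The flows, interval length, and φ_T are made-up values.

```python
# Positive-feedback sketch: per-packet increment xi * RTT_i^2 / cwnd_i,
# with xi chosen so the aggregate sending rate rises by exactly phi_T.

d = 0.1            # control interval in seconds (assumed)
phi_T = 1000.0     # desired total rate increase, packets/second (assumed)

flows = [
    # (cwnd in packets, RTT in seconds) -- made-up example flows
    (100.0, 0.05),
    (400.0, 0.20),
    (50.0,  0.01),
]

# Sum of RTT_i/cwnd_i over every packet seen in the interval (equals N*d).
denom = sum((d * cwnd / rtt) * (rtt / cwnd) for cwnd, rtt in flows)
xi = phi_T / denom

rate_changes = []
for cwnd, rtt in flows:
    pkts = d * cwnd / rtt                     # packets seen from this flow
    delta_cwnd = pkts * xi * rtt**2 / cwnd    # total window increment
    rate_changes.append(delta_cwnd / rtt)     # this flow's rate change

print(rate_changes)        # equal for every flow: phi_T / N each
print(sum(rate_changes))   # totals phi_T
```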

10 Fairness controller
Objective:
–Decrease the total throughput by φ_T.
–Decrease each flow's sending rate by an amount that is proportional to its cwnd size (?).
Set the per-packet decrement to δ·RTT_i.
The total change in throughput is then
–Σ_{all packets observed during period d} δ·RTT_i/RTT_i = δ × (number of packets observed during period d).
–Set δ = φ_T / (number of packets observed during period d).
The change in each flow's window is
–−(d·cwnd_i/RTT_i)·(δ·RTT_i) = −d·δ·cwnd_i,
–which is proportional to cwnd_i; dividing by RTT_i, the corresponding rate decrease is proportional to the flow's current rate cwnd_i/RTT_i.
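A companion sketch for the decrease case, again with made-up flows: the window decrement comes out proportional to cwnd_i, the rate decrement proportional to the flow's current rate, and the total decrease equals φ_T.

```python
# Negative-feedback sketch: each packet of flow i loses delta*RTT_i of window,
# with delta chosen so the aggregate throughput drops by phi_T.

d = 0.1            # control interval in seconds (assumed)
phi_T = 600.0      # desired total rate decrease, packets/second (assumed)

flows = [(100.0, 0.05), (400.0, 0.20), (50.0, 0.01)]   # (cwnd, RTT), made up

total_pkts = sum(d * cwnd / rtt for cwnd, rtt in flows)
delta = phi_T / total_pkts

rate_drops = []
for cwnd, rtt in flows:
    pkts = d * cwnd / rtt
    dcwnd = pkts * delta * rtt        # = d*delta*cwnd, proportional to cwnd
    rate_drops.append(dcwnd / rtt)    # rate drop, proportional to cwnd/RTT

print(rate_drops)        # each flow's drop is proportional to its current rate
print(sum(rate_drops))   # totals phi_T
```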

