Coupling Discrete Time Events to Continuous Time in RMCAT (a.k.a. The Anatomy of a RMCAT RTT) and Reasonable Bounds on the Time Rate of Change in Available Capacity
November 5, 2014
Cisco Confidential. © 2013-2014 Cisco and/or its affiliates. All rights reserved.

Slide 2: Talk Outline
1. Motivation (prior work on test-plan capacity-change design)
2. RMCAT Discrete-Time RTT Formalized
   - Fluid-flow (continuous-time) model and a rigorous RMCAT RTT definition.
3. Infinitely Fast Capacity Change Downward
   - The unavoidable delay spike caused by an infinitely fast capacity change.
   - How to avoid the delay spike (a proposal could be presented to the IETF).
4. How Quickly ANY RMCAT Design Can Track Capacity Changes
   - Result is independent of algorithm type ("self-clocked" or "rate-based").
5. Reasonable Assumptions on Time-Rate-of-Change of Capacity
   - Worst-case RTT defines "tracking responsiveness" (without a predictive component).
   - Squelching mechanisms are required (self-clocked schemes do this automatically).
   - TCP dynamics as a function of their RTT.
   - A reasonable bound on RTCP feedback intervals.
6. Implications for Adaptation with Wireless (WiFi/LTE/etc.)

Slide 3: Motivation (Discussion at IETF 90)
Colin Perkins (Last Call on rmcat-cc-requirements): "However, as I noted at IETF 90, I think the draft should also include a secondary requirement to keep delay variation (jitter) down, where possible, since larger delay variation needs larger receiver-side buffers to compensate, increasing overall latency."
[Figure: NADAv2 large delay spikes.] Colin (and others) believed these might be artifacts of the algorithms.

Slide 4: A RMCAT Lab Discrete-Time RTT (for Seq. No. = Z)
Note: per-packet feedback (no RTCP yet). Topology: RMCAT source -> router before the bottleneck -> bottleneck link -> router after the bottleneck -> RMCAT receiver, with feedback/ACKs flowing back to the source.
- Step 1: Source sends Z (records Z's send timestamp).
- Step 2: Receiver records reception of Z (Z's receive timestamp).
- Step 3: The ACK of Z is sent (ACK-of-Z send timestamp).
- Step 4: Source records reception of the ACK of Z (ACK-of-Z receive timestamp); this is the earliest possible feedback.
Delay definitions:
- d_f1 = forward propagation delay prior to the bottleneck
- d_f2 = forward propagation delay after the bottleneck
- d_r = reverse propagation delay
- d(t) = queue delay at continuous time t
- Receiver processing delay is modelled as zero (d_proc = 0).
(Figure: packet send times t_(n-2), t_(n-1), t_n and ACK times t_(k-2), t_(k-1), t_k on a time axis.)

Slide 5: RMCAT Lab: Measurement Framework (Seq. No. = Z)
- Sequence number Z-1 is sent, then sequence number Z is sent (at time t_n).
- Sequence number Z is received (the forward delay is calculated here).
- t_ack,Z is the earliest time the queue measurement is available at the source (a new rate change can occur here).
- The time the queue was actually sampled is t_queue,Z = t_n + d_f1.
- Define continuous time t for fluid-flow modelling of the rate-adaptation component; that is, let (t - t_OFFSET) represent times offset from time t.
- The effect of a rate change on the queue occurs d_f1 after the source made the rate change (a full RTT after the queue was sampled)!
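
A minimal sketch (mine, not from the deck) of the per-packet bookkeeping this measurement framework implies; the names PacketRecord, forward_delay, etc. are illustrative assumptions, and sender/receiver clocks are assumed synchronized as in the lab model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PacketRecord:
    seq: int
    send_ts: float                    # source send time (t_n for seq Z)
    recv_ts: Optional[float] = None   # receiver timestamp for the data packet
    ack_ts: Optional[float] = None    # source timestamp when the ACK arrives (t_ack,Z)

def forward_delay(rec: PacketRecord) -> float:
    # One-way delay sample computed at the receiver: d_f1 + d(t_queue,Z) + d_f2
    # (assumes synchronized clocks, as in the lab model).
    return rec.recv_ts - rec.send_ts

def queue_sample_time(rec: PacketRecord, d_f1: float) -> float:
    # The queue was actually sampled d_f1 after the packet left the source.
    return rec.send_ts + d_f1

def earliest_actionable_time(rec: PacketRecord) -> float:
    # The source can first act on the sample when the ACK returns (t_ack,Z),
    # a full RTT after the queue state it is reacting to was sampled.
    return rec.ack_ts
```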

Slide 6: RMCAT: Continuous-Time Modeling for the Control Loop
Block chain: RMCAT sender (generates new rate r(t), sends at r_send(t)) -> forward delay d_f1 -> queue (delay d(t), ≈ d_0 at equilibrium) -> forward delay d_f2 -> RMCAT receiver -> reverse delay d_r -> back to the sender. Processing delays d_send_proc = 0 and d_rcv_proc = 0 (no RTCP yet).
Fluid-flow queue model, for a small time step Δ:
  q(t) = max[ q(t - Δ) + Δ·(r_at_queue(t) - C), 0 ]
i.e., the queue grows or shrinks with the difference between the arrival rate and the capacity. Also note:
  r_at_queue(t) = r_send(t - d_f1)
  d(t) = q(t)/C
When the queue is not emptied, it integrates the difference in rates (1/s in the Laplace domain).
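
A minimal discrete-step sketch of the fluid-flow queue model above (my illustration; the rate function and parameter values are assumptions, not from the deck):

```python
def simulate_queue(r_send, C, d_f1, dt=0.001, T=1.0):
    """Fluid-flow queue model from the slide:
    q(t) = max[q(t - dt) + dt*(r_at_queue(t) - C), 0], for small dt,
    with r_at_queue(t) = r_send(t - d_f1) and d(t) = q(t)/C."""
    q = 0.0
    delays = []
    for i in range(int(T / dt)):
        t = i * dt
        r_at_queue = r_send(max(t - d_f1, 0.0))  # the sent rate reaches the queue d_f1 later
        q = max(q + dt * (r_at_queue - C), 0.0)
        delays.append(q / C)                     # queue delay d(t) = q(t)/C
    return delays

def r_send(t):
    # Illustrative: the sender overshoots a 1 Mbps bottleneck by 20% for 100 ms.
    return 1.2e6 if t < 0.1 else 1.0e6

delays = simulate_queue(r_send, C=1.0e6, d_f1=0.05)
print(f"peak queue delay ~ {max(delays) * 1000:.0f} ms")   # ~30 ms: 0.15 s of +0.2 Mbps drained at 1 Mbps
```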

Slide 7: Control Loop in the Laplace Domain (linearized about equilibrium)
Block diagram around the loop (sender -> queue -> receiver -> back to sender): forward-path delays e^(-s·d_f1) and e^(-s·d_f2), the queue as an integrator 1/(sC), reverse-path delay e^(-s·d_r), and processing blocks e^(-s·0) = 1 at sender and receiver (one of these could instead model an RTCP mean delay of 100 ms).
P_Q(s) = the queue-prediction part, X_R(s) = the rate-mechanics part, and R(s) = X_R(s)·P_Q(s).
Key control-loop observations at equilibrium:
- LTI system => it does not matter where the prediction or rate control sits (sender or receiver). Wherever it is, it WILL take a minimum of one RTT(1) to make a difference at the queue.
(1) If the RTCP reporting interval is >> (d_f1 + d_f2 + d_r), it will dominate. If not, it will look like delay noise on individual delay samples.
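
As a hedged restatement of the observation above (my composition; the slide gives the individual blocks but not the combined expression), putting the blocks in series gives the open-loop transfer

\[
L(s) \;=\; X_R(s)\,P_Q(s)\,\frac{1}{sC}\,e^{-s\,(d_{f1} + d_{f2} + d_r)} ,
\]

so the pure-delay factor e^(-s(d_f1 + d_f2 + d_r)), the propagation RTT, appears in the loop no matter where X_R(s) and P_Q(s) are placed, which is why any prediction or rate decision needs at least one RTT to affect the queue.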

Slide 8: So ... What Is the Discrete-Time RMCAT RTT Definition?
Same block chain as before (sender -> queue -> receiver -> sender), with d(t) ≈ d_0 at equilibrium and d_send_proc = d_rcv_proc = 0.
  Forward delay = d_f1 + d(t)|_{t = t_SAMP} + d_f2
  RMCAT RTT(t_i) = d_f1 + d(t)|_{t = t_SAMP(seq no z)} + d_f2 + d_r, where t_i = t_ACK at which the ACK for seq z is received.
Equivalently:
  RMCAT RTT(t_i) = d_f1 + d_f2 + d_r + d(t_i - [d_x + d_f2 + d_r]), where d_x = d(t)|_{t = t_SAMP(seq no z)}.
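
A small helper (illustrative names, not from the deck) that evaluates the RTT definition above given the propagation delays and a queue-delay history; d_x is obtained by a simple fixed-point iteration, an assumption that works when d(t) varies slowly.

```python
def rmcat_rtt(t_ack, d_f1, d_f2, d_r, queue_delay):
    """RMCAT RTT at ACK time t_ack, per the definition above.

    queue_delay(t) returns the queue delay d(t). The queue delay the ACKed
    packet experienced, d_x, was sampled d_x + d_f2 + d_r before the ACK
    arrived, so d_x satisfies d_x = d(t_ack - d_x - d_f2 - d_r).
    """
    d_x = 0.0
    for _ in range(10):   # fixed-point iteration; converges quickly for slowly varying d(t)
        d_x = queue_delay(t_ack - (d_x + d_f2 + d_r))
    return d_f1 + d_f2 + d_r + d_x

# Example: a constant 20 ms queue delay with 40/10/50 ms propagation delays.
print(rmcat_rtt(t_ack=1.0, d_f1=0.040, d_f2=0.010, d_r=0.050,
                queue_delay=lambda t: 0.020))   # -> 0.120 s
```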

Slide 9: Queue Delay Variation During a Downward Capacity Change: Fastest Possible Rate Adaptation Example (imprudently quick)
- Capacity steps from C_i down to C_(i+1) at time t_0; assume r(t) = C_i before t_0.
- Assume the rate control estimates C_(i+1) from the queue sample immediately after t_0 and then sends at C_(i+1) (or less) as soon as possible. The minimum response time to affect the queue is t_0 + RTT.
- The minimum possible delay spike is therefore caused by (C_i - C_(i+1)) · RTT too many bits on the queue.
Example from the IETF RMCAT test plan:
- Capacity: 2500 kbps -> 600 kbps. Assume RTT = 0.2 seconds (100 ms one-way).
- Minimum ADDITIONAL bits on the queue before we can do anything about it: (2500 - 600) kbps x 0.2 s = 380,000 bits.
- The queue must empty at the 600 kbps rate, so the minimum delay spike is 633 ms!
- Note: if we assume a one-way delay of 50 ms, the minimum is 316 ms.
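
The arithmetic behind these numbers, as a small check (the helper name is mine):

```python
def min_delay_spike_s(c_old_bps, c_new_bps, rtt_s):
    """Minimum ADDITIONAL queue delay after a step capacity drop: the queue
    receives (c_old - c_new) * RTT excess bits before any response can take
    effect, and those bits drain at the new capacity c_new."""
    excess_bits = (c_old_bps - c_new_bps) * rtt_s
    return excess_bits / c_new_bps

# Test-plan numbers from the slide:
print(min_delay_spike_s(2500e3, 600e3, 0.2))   # ~0.633 s (100 ms one-way delay)
print(min_delay_spike_s(2500e3, 600e3, 0.1))   # ~0.317 s (50 ms one-way delay)
```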

Slide 10: Queue Delay Variation During a Downward Capacity Change: Fastest Possible Rate Adaptation Example (imprudently quick), continued
- r(t) = C_i before t_0 (queue at equilibrium; equilibrium delay is d_0).
- The figure plots the delay at the queue, d(t): between t_0 and t_0 + RTT the excess rate is integrated at the queue, giving an accumulated-bits delay of d_acc = ([C_i - C_(i+1)] / C_(i+1)) · RTT, so d_min_spike = d_acc + d_0.
- Figure annotations: "different candidate responses", "integration at queue", "to lower d_min_spike, decrease rate of change to new rate".
- The minimum delay spike is NOT caused by the candidates!

Slide 11: Talk Outline (recap)
1. Motivation (prior work on test-plan capacity-change design)
2. RMCAT Discrete-Time RTT Formalized
   - Fluid-flow (continuous-time) model and a rigorous RMCAT RTT definition.
3. Infinitely Fast Capacity Change Downward
   - The unavoidable delay spike caused by an infinitely fast capacity change.
   - How to avoid the delay spike (a proposal could be presented to the IETF).
4. How Quickly ANY RMCAT Design Can Track Capacity Changes
   - Result is independent of algorithm type ("self-clocked" or "rate-based").
5. Reasonable Assumptions on Time-Rate-of-Change of Capacity
   - Worst-case RTT defines "tracking responsiveness" (without a predictive component).
   - Squelching mechanisms are required (self-clocked schemes do this automatically).
   - TCP dynamics as a function of their RTT.
   - A reasonable bound on RTCP feedback intervals.
6. Implications for Adaptation with Wireless (WiFi/LTE/etc.)

Slide 12: How Quickly Can RMCAT Track Available-Capacity Changes? Answer: It Depends on the RTT, Duh!
- The quickest time a RMCAT flow can possibly "measure" (and "react to") a change in capacity is bounded by ITS round-trip time.
- Thus the quickest time it can influence its contribution to the queue after a change in capacity is likewise bounded by ITS round-trip time.
- Corollary: RMCAT flows with different RTTs react to changes on different time scales (corresponding to their individual RTTs). Just like TCP!
- A reasonable ASSUMPTION for the fastest time-rate-of-change in available capacity is one that could be tracked (i.e., measured and reacted to) by a RMCAT flow with an assumed worst-case RTT and RTCP interval.
(Figure: capacity step from C_i to C_(i+1) at t_0; earliest response at t_0 + RTT.)

Slide 13: A Reasonable Assumed Bound on the RMCAT Time-Rate-of-Change in Available Capacity
- Assume a worst-case RMCAT RTT of ~250 ms (1/4 second). The highest capacity-change "frequency" that is actionable is then f_MAX ≈ 4 Hz.
- Capacity changes faster than this CANNOT be "seen/sensed/measured" and then acted on by a RMCAT flow with the worst-case RTT.(1)
- Ditto for ACK-based, "self-clocked" protocols like TCP and SCReAM: TCP can't know to stop within this time either, since it only stops after its ACKs stop. TCP thus "pounds the queue", causing gross overflow events on long-RTT connections too.
- Corollary: "fast response" is a matter of the worst-case capacity-change assumption, not a fundamental property of "self-clocked" protocols.
(1) If other information were known (e.g., the form of the change), predictive components could react more quickly.
(Figure: available capacity C(t) varying between C_Local_Min and C_Local_Max, with maximum change C_Max_Change over one RTT.)

Slide 14: TCP Dynamics as a Function of the RTT (1 TCP flow: voice delay, 50 ms BE queue)
[Figure: 1 TCP flow, delay seen by voice packets sharing the queue.]
- Near 100% link utilization (delay 30 ~ 80 ms); the queue emptied only 4 times.
- Serialization delay for a TCP packet is ~11 ms.
- Voice-only RTT estimate is ~50 ms, so reaction per unit time is fast.
- Loss-based rate control overfills queues and CAUSES substantial loss.

Slide 15: TCP Dynamics as a Function of the RTT (1 TCP flow: voice delay, 500 ms BE queue)
[Figure: 1 TCP flow, delay seen by voice packets sharing the queue.]
- Link utilization near 100% (delay 100 ~ 500 ms); the queue never emptied.
- We have loss together with much larger persistent delays!
- Rate dynamics are clearly a function of the RTT (including queue delay).
- If a downward capacity change occurred quickly, within an RTT, near queue overflow, the TCP control loop would have behaved similarly to our RMCAT candidates.
- The RTT estimate now includes queuing delay, so subsequent rate-increase reaction times are slowed.

Slide 16: ACK-Gated RMCAT Proposals (a la Ericsson)
- On long-RTT connections, ACK-gated protocols will also hit the queue too hard and build up excess delay. They will "stop" quickly, but that is a matter of degree (by comparison to rate-based designs), not because of some more beneficial property of ACK-gated protocols.
- Once a worst-case RTT assumption has been made, it imposes a theoretical constraint on how quickly ANY RMCAT flow can adapt to capacity changes, and thus a "hidden assumption" on the time-rate-of-change of capacity that ANY RMCAT design at the worst-case RTT can adapt to.
- This, in turn, imposes a minimum RTCP spacing constraint: for a 250 ms worst-case RTT, feedback spacing of less than π/2 of the f_MAX ≈ 4 Hz cycle implies T_RTCP ≤ ~62.5 ms.
(Figure: available capacity C(t) varying between C_Local_Min and C_Local_Max, with maximum change C_Max_Change over one RTT.)
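
The spacing arithmetic, sketched; reading the slide's π/2 bound as a quarter cycle of f_MAX is my interpretation of the garbled original.

```python
import math

RTT_WORST = 0.250                      # assumed worst-case RMCAT RTT (seconds)
f_max = 1.0 / RTT_WORST                # highest actionable capacity-change frequency: 4 Hz

# Feedback must arrive at least every quarter cycle (pi/2 of phase) of f_max
# for changes at that rate to be actionable.
t_rtcp_max = (math.pi / 2) / (2 * math.pi * f_max)   # = 1 / (4 * f_max) = 62.5 ms
print(f"f_max = {f_max:.1f} Hz, T_RTCP <= {t_rtcp_max * 1000:.1f} ms")
```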

Slide 17: Implications for WiFi/LTE/Wireless Adaptation
Repeated/paraphrased from the last slide: on long-RTT connections, ACK-gated protocols will also hit the queue too hard and build up excess delay. They will "stop" quickly, but that is a matter of degree (by comparison to rate-based approaches), not because of some more beneficial property of "self-clocked" protocols over rate-controlled protocols.
Summary:
- We will need to develop better "squelching conditions" in future enhancements to our present RMCAT designs (i.e., for when feedback stops and/or becomes irregular).
- A complete squelch is only prudent when there is no impairment in the feedback path.
- Wireless challenges now become a known second-order problem, unless we want to limit the RTT (not possible) or go to per-packet feedback (non-RTCP approaches).
(Figure: same C(t) illustration as the previous slide.)

Slide 18: Summary
1. RMCAT Discrete-Time RTT Formalized
   - Fluid-flow (continuous-time) model and a rigorous RMCAT RTT definition.
2. How Quickly ANY RMCAT Design Can Track Capacity Changes
   - Result is independent of algorithm type ("self-clocked" or "rate-based").
3. Reasonable Assumptions on Time-Rate-of-Change of Capacity
   - Worst-case RTT defines "tracking responsiveness" (without a predictive component).
   - Like TCP, the dynamics of a RMCAT solution will be a function of its RTT.
   - The feedback intervals and the flow's RTT determine capacity-tracking ability.
   - Bounding the feedback intervals and the worst-case RTT effectively bounds best-case tracking.
4. Implications for Adaptation with Wireless (WiFi/LTE/etc.)
   - Squelching mechanisms are required (self-clocked schemes do this automatically).