Low-Latency Adaptive Streaming Over TCP


Low-Latency Adaptive Streaming Over TCP
Ashvin Goel, Charles Krasic and Jonathan Walpole
Presented by: Jinansh Rupesh Patel (94318155), Shubham Swapnil (15313875)

Introduction
The Transmission Control Protocol (TCP) is the most common transport protocol on the Internet today and offers several benefits for media streaming. TCP handles flow control and packet losses, so applications do not have to perform packet-loss recovery explicitly.

TCP Challenges
The application must adapt media quality in response to TCP's estimate of current bandwidth availability; otherwise, the stream can be delayed indefinitely when the available bandwidth falls below the stream's needs. Even with adaptive streaming, TCP introduces latency at the application level, leading to poor interactive performance.

In this paper
We show that the latency introduced by TCP at the application level is not inherent in TCP: a significant part of it occurs on the sender side, as a result of throughput-optimized TCP implementations. We then develop an adaptive buffer-size tuning technique that reduces this latency. Buffer tuning reduces latency but also reduces throughput; we explore the reasons for this and develop a fix: by keeping the send buffer size slightly larger than the congestion window, a connection can achieve throughput close to standard TCP while still achieving a significant reduction in latency. Experiments also examine throughput variance, which must stay low for streaming media to maintain smooth quality. This approach enables low-latency media streaming and makes interactive applications such as conferencing feasible.

TCP-Induced Latency
The end-to-end latency in a streaming application consists of two main components: (1) application-level latency, that is, latency at the application level on the sender and receiver sides, and (2) protocol latency, which we define as the time from a write on the sender side to the corresponding read on the receiver side, that is, socket-write to socket-read latency. TCP introduces protocol latency in three ways: packet retransmission, congestion control, and sender-side buffering.

Adaptive Send-Buffer Tuning

Blocked Packets
Blocked packets can cause far more latency (10 to 20 RTTs) than packet dropping or congestion control, which typically cause latencies of less than 2 RTTs. Round-trip time (RTT) is the time it takes for a signal to be sent plus the time it takes for an acknowledgement (ACK) of that signal to be received.
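The 10-20 RTT figure can be illustrated with a toy calculation (a sketch of ours, not from the paper): with a fixed 64 KB send buffer holding roughly 44 packets of 1448 bytes and a congestion window of only 4 packets, 40 packets sit blocked behind the window, and the last of them waits about 10 RTTs before it even enters the network.

```python
def blocked_packets(send_buf_pkts: int, cwnd: int) -> int:
    """Packets in the send buffer beyond the congestion window.
    TCP keeps at most `cwnd` packets in flight; anything extra the
    application has written waits ("blocked") behind them."""
    return max(0, send_buf_pkts - cwnd)

def blocked_latency_rtts(send_buf_pkts: int, cwnd: int) -> float:
    """Rough queueing delay of the last buffered packet, in RTTs:
    each RTT drains about `cwnd` packets from the buffer."""
    return blocked_packets(send_buf_pkts, cwnd) / cwnd

# 64 KB buffer (~44 packets of 1448 B), CWND = 4 packets:
# 40 blocked packets, ~10 RTTs of sender-side queueing delay.
```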

Method to Avoid Blocked-Packet Latency
To avoid storing blocked packets, the send buffer should store no more than CWND packets. Furthermore, the send buffer should not be smaller than CWND, because a buffer smaller than CWND packets is guaranteed to limit throughput: CWND gets artificially limited by the buffer size rather than by a congestion (or flow control) event in TCP. A buffer size that follows CWND should therefore minimize latency without affecting stream throughput. Since CWND changes dynamically over time, we call this technique adaptive send-buffer tuning, and a TCP connection that uses it is a MIN_BUF TCP flow.
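The tuning rule above can be condensed into a few lines (a sketch; the function name and return strings are ours):

```python
def check_send_buf(size_pkts: int, cwnd: int) -> str:
    """Classify a send-buffer size against the rule on the slide:
    smaller than CWND throttles throughput, larger than CWND adds
    blocked-packet latency, and exactly CWND is the target that
    adaptive send-buffer tuning tracks as CWND changes."""
    if size_pkts < cwnd:
        return "limits throughput"
    if size_pkts > cwnd:
        return "adds latency"
    return "ok"
```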

MIN_BUF TCP
MIN_BUF TCP blocks the application from writing data to the socket when CWND packets are already in the send buffer. The application can write data only when at least one new packet can be admitted, that is, after an ACK arrives and frees room in the window.
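From the application's point of view, writing to a MIN_BUF-style socket might look like the following event-loop sketch (our own illustration, not Qstream or kernel code): the writer blocks in poll() until the socket can admit more data, and picks what to send as late as possible.

```python
import select
import socket

def send_when_admitted(sock: socket.socket, packets: list) -> None:
    """Write packets one at a time, waiting until the socket can admit
    more data. With a MIN_BUF send buffer, POLLOUT fires only after an
    ACK has freed room in the window, so each packet is chosen at the
    last possible moment."""
    sock.setblocking(False)
    poller = select.poll()
    poller.register(sock, select.POLLOUT)
    while packets:
        poller.poll()              # wake when at least one packet fits
        sock.send(packets.pop(0))  # app decides what to send *now*
```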

(Figure: latency distributions; the vertical lines mark the 160 ms and 500 ms latency thresholds.)

Observations
A packet-dropping event causes an additional round-trip delay (100 ms in these experiments) that MIN_BUF TCP with ECN does not suffer; the congestion-control spikes are small because they occur without packet dropping. Note that TCP with ECN but without MIN_BUF drops few packets but still suffers large delays because of blocked packets. MIN_BUF TCP removes sender-side buffering latency from the TCP stack, so low-latency applications can handle buffering themselves; this gives them more control and flexibility over what data is sent and when. For instance, if MIN_BUF TCP does not allow the application to send data for a long time, the sending side can drop low-priority data. Note: low-latency applications that are non-adaptive or already have low bandwidth requirements do not benefit from this approach.

Effect on Throughput
Another important issue for MIN_BUF TCP streams is their throughput compared with standard TCP. Standard TCP uses a large send buffer because it helps throughput: when an ACK arrives, standard TCP has a packet in the send buffer that it can transmit immediately, while MIN_BUF TCP has to wait for the application to write the next packet before it can send it. The MIN_BUF approach therefore loses throughput whenever standard TCP could have sent a packet but there are no new packets in MIN_BUF TCP's send buffer. Because the application sits in this feedback loop, system latencies such as preemption and scheduling delay can affect MIN_BUF TCP throughput.

Understanding Buffer Size
ACK arrival: When an ACK arrives for the first packet in the TCP window, the window moves forward and admits one new packet, which TCP then transmits. The buffer needs to admit only one additional packet.
Delayed ACKs: To save bandwidth in the reverse direction, most TCP implementations delay ACKs and send, by default, one ACK for every two data packets received. Each ACK arrival therefore opens TCP's window by two packets, so MIN_BUF TCP should buffer two additional packets.
CWND increase: During steady state, in its additive-increase phase, TCP probes for additional bandwidth by increasing its window by one packet every round-trip time. When CWND is incremented, an ACK arrival allows releasing two packets; with delayed ACKs, three packets can be released. MIN_BUF TCP should therefore buffer three additional packets to handle CWND increase together with delayed ACKs.
ACK compression: Due to a phenomenon known as ACK compression, which can occur at routers, ACKs can arrive at the sender in bursts. In the worst case, the ACKs for all CWND packets arrive together. To handle this case, MIN_BUF TCP should buffer 2 * CWND packets (CWND packets in addition to the first CWND packets).

Understanding Buffer Size
The send buffer is limited to A * CWND + B packets at any given time. Increasing the first parameter (A) handles any bandwidth burst that ACK compression can cause, as described above, while increasing the second parameter (B) takes ACK arrivals, delayed ACKs and CWND increase into account.
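The limit can be written down directly (a sketch; parameter names follow the slide's A and B):

```python
def send_buf_limit(cwnd: int, a: int = 1, b: int = 0) -> int:
    """Send-buffer limit of a MIN_BUF(A, B) flow, in packets.
    A >= 1 scales with CWND and absorbs ACK-compression bursts;
    B adds constant slack for ACK arrivals, delayed ACKs and
    CWND increase."""
    assert a >= 1 and b >= 0
    return a * cwnd + b

# With CWND = 10 packets:
# MIN_BUF(1, 0) -> 10, MIN_BUF(1, 3) -> 13, MIN_BUF(2, 0) -> 20
```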

Evaluation
We compare the protocol latency and throughput of TCP and three MIN_BUF configurations: MIN_BUF(1, 0), MIN_BUF(1, 3) and MIN_BUF(2, 0). We chose MIN_BUF(1, 3) to take ACK arrivals, delayed ACKs and CWND increase into account, and MIN_BUF(2, 0) because it takes ACK compression and dropped ACKs into account. The vertical lines show the 160 ms and 500 ms delay thresholds: the first represents the requirements of interactive applications such as video conferencing, while the second represents the requirements of media control operations.

Evaluation (Figures: forward path and reverse path)

Evaluation: normalized throughput of the different MIN_BUF TCP configurations

System Overhead
The MIN_BUF TCP approach reduces protocol latency but increases system overhead. Profiling an event-driven application with TCP and MIN_BUF TCP flows shows that a MIN_BUF TCP flow has slightly more overhead for write calls and significantly more overhead for poll calls. MIN_BUF(1, 0) TCP flows are approximately three times more expensive in CPU time than standard TCP flows.

System Overhead

Implementation
We have implemented the MIN_BUF TCP approach with a small modification to the TCP stack on the sender side: a new SO_TCP_MIN_BUF socket option limits the send buffer size to A ∗ CWND + MIN(B, CWND) segments (segments are packets of maximum segment size, or MSS) at any given time. The send buffer size is at least CWND because A must be an integer greater than zero and B must be zero or larger. The application is woken up when at least one packet can be admitted into the send buffer. By default A is one and B is zero, but these values can be made larger with the SO_TCP_MIN_BUF option.

SACK Correction
The previous implementation applies to a non-SACK TCP implementation. For TCP SACK, we make a SACK correction by adding an additional term, sacked_out, to A ∗ CWND + MIN(B, CWND). The sacked_out term is maintained by a TCP SACK sender and is the number of selectively acknowledged packets. The correction ensures that the send buffer limit includes these packets and is thus at least CWND + sacked_out. Without it, TCP SACK is unable to send new packets for a MIN_BUF flow and assumes that the flow is application limited.
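Putting the implementation formula and the SACK correction together (a sketch of ours; `sacked_out` mirrors the sender-side variable named on the slide):

```python
def sack_send_buf_limit(cwnd: int, sacked_out: int,
                        a: int = 1, b: int = 0) -> int:
    """Send-buffer limit with the SACK correction:
    A * CWND + MIN(B, CWND) + sacked_out segments.
    Selectively ACKed packets still occupy the send buffer until they
    are cumulatively ACKed, so without the extra term the flow would
    look application-limited and stall."""
    assert a >= 1 and b >= 0
    return a * cwnd + min(b, cwnd) + sacked_out
```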

Application Level Evaluation
The previous section evaluated the protocol latency due to TCP under heavy network load. This section evaluates the timing behavior of a real live-streaming application and shows how MIN_BUF TCP improves application-level end-to-end latency. We chose an open-source adaptive streaming application called Qstream. Qstream uses an adaptive media format called scalable MPEG, which supports layered encoding of video data and thus allows dynamic data dropping. In a layered-encoded stream, data is conceptually divided into layers: a base layer can be decoded into a presentation at the lowest level of quality, and additional layers refine it.

Application Level Evaluation
Qstream uses an adaptation mechanism called priority-progress streaming (PPS). The adaptation period determines how often the sender drops data. Within each adaptation period, the sender sends data packets in priority order, from highest to lowest; the highest-priority data carries the base quality, while lower-priority data encodes higher quality layers. The data within an adaptation period is called an adaptation window, and data dropping is performed at the end of the adaptation period.
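The send-in-priority-order, drop-at-the-end behavior of one adaptation window can be sketched as follows (our illustration, not Qstream's actual API; `budget` stands in for however many packets the network accepted during the period):

```python
def adaptation_window_send(packets, budget: int):
    """One PPS adaptation window: send packets in priority order until
    the window's transmission budget runs out, then drop the remainder,
    so the lowest-priority (highest-quality-layer) data is lost first.

    `packets` is a list of (priority, data) pairs, lower number =
    higher priority."""
    ordered = sorted(packets, key=lambda p: p[0])
    sent, dropped = ordered[:budget], ordered[budget:]
    return sent, dropped
```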

Application Level Evaluation

Evaluation Methodology
We use three metrics to evaluate Qstream running over MIN_BUF TCP versus standard TCP: latency distribution, sender throughput, and dropped windows. Latency distribution measures end-to-end latency; sender throughput is the amount of data transmitted by the sender; dropped windows is the number of entire adaptation windows that are dropped by the sender.

Results

Results

Conclusions
Low-latency streaming allows building adaptive streaming applications that respond quickly to changing bandwidth availability, and sufficiently low latency makes interactive applications feasible. Low-latency streaming over TCP is achievable by tuning TCP's send buffer so that it keeps just the packets that are currently in flight. Compared to packet dropping, TCP retransmissions, or TCP congestion control, sender-side buffering has the most significant effect on TCP-induced latency at the application level. Keeping a few extra packets in the send buffer recovers much of the lost network throughput without increasing protocol latency significantly.

THANK YOU