
1 SABRE: A client-based technique for mitigating the buffer bloat effect of adaptive video flows
Ahmed Mansy, Mostafa Ammar (Georgia Tech), Bill Ver Steeg (Cisco)

2 What is buffer bloat? Significantly high queuing delays caused by TCP combined with large buffers
[Diagram: server and client connected through a bottleneck link of capacity C bps, round-trip time RTT]
The TCP sender tries to fill the pipe by increasing its congestion window (cwnd)
Ideally, cwnd should grow to BDP = C x RTT
TCP uses packet loss to detect congestion, and then it reduces its rate
Large buffers increase queuing delays and also delay loss events
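
To make the BDP concrete, here is a quick calculation with the testbed figures used later in the talk (6 Mbps bottleneck, 100 ms RTT); the 1500-byte packet size is an assumption:

```python
# Bandwidth-delay product for the talk's testbed numbers (assumed:
# 6 Mbps bottleneck, 100 ms RTT, 1500-byte packets).
C_bps = 6_000_000          # bottleneck capacity in bits/s
rtt_s = 0.100              # round-trip time in seconds

bdp_bits = C_bps * rtt_s   # BDP = C x RTT
bdp_bytes = bdp_bits / 8
print(f"BDP = {bdp_bytes:.0f} bytes "
      f"(~{bdp_bytes / 1500:.0f} x 1500-byte packets)")
# -> BDP = 75000 bytes (~50 x 1500-byte packets)
# The testbed's 256-packet tail-drop buffer is ~5x the BDP, so the
# standing queue can add several hundred milliseconds of delay.
```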

3 DASH: Dynamic Adaptive Streaming over HTTP
[Diagram: DASH client fetches a manifest from an HTTP server, then requests segments encoded at 350, 600, 900, or 1200 kbps based on its download rate and playout-buffer level (0-100%)]
Video is split into short segments
Playback starts with an initial buffering phase, followed by a steady state with On/Off download behavior
S. Akhshabi et al., "An experimental evaluation of rate-adaptation algorithms in adaptive streaming over HTTP", MMSys '11
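
As a rough illustration of the steady-state On/Off behavior described above, here is a minimal sketch of a traditional DASH download loop; the function names, segment duration, and buffer target are hypothetical, not any real player's API:

```python
import time

SEGMENT_SECONDS = 2      # assumed segment duration
BUFFER_TARGET = 30.0     # assumed playout-buffer target, seconds of video

def onoff_loop(fetch_segment, pick_bitrate, buffer_seconds):
    """Steady-state On/Off loop of a traditional DASH client (sketch).

    fetch_segment(bitrate) -> bytes downloaded; pick_bitrate(rate) -> bps;
    buffer_seconds() -> current playout-buffer level. All hypothetical.
    """
    rate = None
    while True:
        if buffer_seconds() >= BUFFER_TARGET:
            time.sleep(SEGMENT_SECONDS)          # Off period: buffer full
            continue
        t0 = time.time()                         # On period: fetch at full speed
        nbytes = fetch_segment(pick_bitrate(rate))
        rate = nbytes * 8 / (time.time() - t0)   # throughput estimate
```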

4 Problem description
[Diagram: a DASH flow and a VoIP flow sharing the same bottleneck link]
Does DASH cause buffer bloat? Will the quality of VoIP calls be affected by DASH flows? And if so, how can we solve this problem?

5 Our approach
To answer the first two questions, we perform experiments on a lab testbed to measure the buffer bloat effect of DASH flows
We developed a scheme, SABRE (Smooth Adaptive BitRatE), to mitigate this problem
We use the testbed to evaluate our solution

6 Measuring the buffer bloat effect
Testbed: an iPerf client and server exchange OTT VoIP-like UDP traffic (80 kbps, 150-byte packets) while a DASH client streams from an HTTP video server
Links: 1 Gbps LAN on both sides of a bottleneck emulator (6 Mbps DSL downlink, 100 ms RTT, tail-drop queue of 256 packets)
Result: adaptive HTTP video flows have a significant effect on VoIP traffic
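
The talk uses iPerf for the VoIP-like stream; an equivalent constant-bit-rate sender could look like the following sketch (the destination address is a placeholder):

```python
import socket, time

DEST = ("192.0.2.10", 5001)          # placeholder address of the iPerf-style sink
PKT = b"\x00" * 150                  # 150-byte packets
RATE_BPS = 80_000                    # 80 kbps
INTERVAL = len(PKT) * 8 / RATE_BPS   # 0.015 s -> ~66.7 packets/s

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    sock.sendto(PKT, DEST)
    time.sleep(INTERVAL)             # paced, constant-bit-rate UDP stream
```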

7 Understanding the problem – Why do we get large bursts?
[Diagram: packets arrive on the 1 Gbps link and queue at the 6 Mbps bottleneck]
TCP is bursty: it can send up to a full window of segments back-to-back at line rate, and they all pile up in the bottleneck queue

8 Smooth download driven by the client
Possible solutions:
Middlebox techniques: Active Queue Management (AQM) such as RED, BLUE, CoDel, etc. RED is on every router but is hard to tune
Server techniques: rate limiting at the server to reduce burst size
Our solution: smooth download driven by the client

9 Some hidden details
[Diagram: the server OS pushes data into the client's OS socket buffer (channel 1); the DASH player recv()s from the socket buffer into its playout buffer (channel 2)]
There are two data channels. In traditional DASH players the download loop is simply while(true) recv, so channels 1 and 2 are coupled

10 Smooth download to eliminate bursts
Idea: TCP can send a burst of min(rwnd, cwnd). Since we cannot control cwnd, control rwnd instead
rwnd is a function of the empty space in the receiver's socket buffer
Two objectives: keep the socket buffer almost full all the time, and do not starve the playout buffer
[Diagram: same client/server path as slide 9, with rwnd determined by the free space in the client's socket buffer]
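
One way to see the rwnd/socket-buffer coupling is that capping the receive buffer caps the advertised window. The sketch below is not SABRE itself, just an illustration of the knob; the sizes and host are assumptions:

```python
import socket

# The kernel advertises rwnd based on free space in the socket receive
# buffer, so capping SO_RCVBUF caps the largest burst the server can
# send (min(rwnd, cwnd)). Set it before connect() so it takes effect
# for the window-scale negotiation. Sizes are illustrative.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
sock.connect(("video.example", 80))   # hypothetical video server
print("effective rcvbuf:",
      sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
# (Linux reports roughly double the requested size for bookkeeping.)
# SABRE goes further: instead of shrinking the buffer, the client keeps
# it nearly full by pacing recv(), so the advertised rwnd stays small.
```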

11 Keeping the socket buffer full - Controlling recv rate
[Two timelines: with while(1) recv, the player drains the socket buffer as fast as possible, so each HTTP GET (GET S1, GET S2, ...) produces a bursty On period followed by an Off period with an empty socket buffer; with while(timer) recv, reads are paced every T seconds and the socket buffer stays full]
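
A minimal sketch of the while(timer) recv idea, assuming reads are paced at roughly the video bitrate; the chunk size and rate are illustrative, and playout_buffer stands in for the player's buffer:

```python
import socket, time

CHUNK = 16 * 1024       # bytes per read (illustrative)
TARGET_BPS = 900_000    # drain no faster than the video bitrate (assumed)
PERIOD = CHUNK * 8 / TARGET_BPS   # timer period for one chunk

def paced_recv(sock: socket.socket, playout_buffer) -> None:
    """while(timer) recv: drain the socket buffer no faster than the
    video bitrate, so it stays nearly full and the advertised rwnd
    (hence the server's burst size) stays small."""
    while True:
        deadline = time.time() + PERIOD
        data = sock.recv(CHUNK)
        if not data:                      # connection closed
            break
        playout_buffer.write(data)        # hypothetical player buffer
        time.sleep(max(0.0, deadline - time.time()))
```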

12 Keeping the socket buffer full
The first thing we do is HTTP pipelining
#segments in flight = 1 + (socket buffer size / segment size)
[Two timelines: without pipelining, the connection goes Off between GET S1 and GET S2; with pipelined requests (GET S1, S2, then GET S3, GET S4 as segments complete), the server always has data to send]
The socket buffer is always full, so rwnd stays small
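
A sketch of how a client might issue the pipelined requests implied by the formula above; the buffer and segment sizes are assumptions, the hostname is hypothetical, and the ratio is rounded up:

```python
import math, socket

SOCKBUF = 256 * 1024    # socket receive buffer size (assumed)
SEGMENT = 450 * 1024    # average segment size in bytes (assumed)
# Slide formula: 1 + socket buffer size / segment size, rounded up here
n = 1 + math.ceil(SOCKBUF / SEGMENT)

def pipeline_gets(sock: socket.socket, urls: list[str]) -> None:
    """Send the next n GETs back-to-back on one persistent connection
    (HTTP/1.1 pipelining sketch), so the server always has a segment
    ready and the socket buffer never drains between segments."""
    for url in urls[:n]:
        req = (f"GET {url} HTTP/1.1\r\n"
               f"Host: video.example\r\n\r\n")   # hypothetical host
        sock.sendall(req.encode())
```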

13 Still one more problem
The socket buffer level drops temporarily when the available bandwidth drops
This results in larger values of rwnd, which can lead to large bursts and hence delay spikes
Continuous monitoring of the socket buffer level can help
[Plot: available bandwidth dips below the video bitrate and the socket buffer level drops with it]
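
Continuous monitoring could use FIONREAD, which reports the bytes currently queued in the kernel receive buffer (a Linux-specific sketch; the reaction policy is only outlined in a comment):

```python
import array, fcntl, socket, termios

def socket_buffer_level(sock: socket.socket) -> int:
    """Bytes currently queued in the kernel receive buffer (Linux).
    FIONREAD reports unread data, i.e. how full the socket buffer is."""
    buf = array.array("i", [0])
    fcntl.ioctl(sock.fileno(), termios.FIONREAD, buf)
    return buf[0]

# Sketch of the rule from this slide: if the level drops (available
# bandwidth fell below the video bitrate), slow the recv pacing so the
# buffer refills instead of advertising a large rwnd.
```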

14 Experimental results
We implemented SABRE in the VLC DASH player and evaluated it on the same testbed: iPerf client/server exchanging OTT VoIP-like UDP traffic (80 kbps, 150-byte packets), an HTTP video server, a DASH client, and a bottleneck emulator (6 Mbps DSL, 100 ms RTT, 1 Gbps LAN links, tail-drop queue of 256 packets)

15 Single DASH flow - constant available bandwidth
[CDF plots of one-way delay, On/Off vs SABRE]
On/Off: delay > 200 ms about 40% of the time; SABRE: delay < 50 ms 100% of the time

16 Video adaptation: how does SABRE react to variable bandwidth?
[Timeline: the player's recv rate, the available bandwidth, and the video bitrate over time]
When the socket buffer is full, the player cannot estimate the available bandwidth, so it probes by up-shifting to a higher bitrate
If it cannot sustain that bitrate, the socket buffer gets drained; the player reduces its recv rate and down-shifts to a lower bitrate
If the player can support the current bitrate, it shoots for a higher one
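
A sketch of this probe-up/down-shift behavior, driven purely by the socket-buffer level; the thresholds are assumptions, not the values used in SABRE:

```python
def adapt(bitrates: list[int], level_frac: float, current: int) -> int:
    """Socket-buffer-driven bitrate adaptation (sketch of this slide).

    bitrates: sorted available encodings in bps; level_frac: socket-buffer
    fill level in [0, 1]. A full buffer hides the available bandwidth, so
    the player probes upward; a draining buffer means the bitrate is too
    high. The 0.9 / 0.5 thresholds are illustrative assumptions.
    """
    i = bitrates.index(current)
    if level_frac > 0.9 and i + 1 < len(bitrates):
        return bitrates[i + 1]   # buffer full: probe a higher bitrate
    if level_frac < 0.5 and i > 0:
        return bitrates[i - 1]   # buffer draining: down-shift
    return current               # sustainable: stay at this bitrate
```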

17 Single DASH Flow – variable available bandwidth
[Plots: available bandwidth steps from 6 Mbps down to 3 Mbps at T=180 s and back up at T=380 s; delay and bitrate traces compared for On/Off and SABRE]

18 Two clients
[Diagram: two clients C1 and C2 sharing one server; results compare two On/Off clients against two SABRE clients]

19 Summary
The On/Off behavior of adaptive video players can have a significant buffer bloat effect: a single On/Off client significantly increases queuing delays
We designed and implemented a client-based technique to mitigate this problem
Future work: improve SABRE's adaptation logic for a mix of On/Off and SABRE clients, and investigate DASH-aware middlebox and server-based techniques

20 Thank you! Questions?

21 Backup slides

22 Random Early Detection: Can RED help?
[Plot: RED loss probability vs average queue size, P=0 below the min threshold and rising between the min and max thresholds]
Once the burst is already on the wire, not much can be done! How can we eliminate large bursts?
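
For reference, the plot corresponds to the standard RED marking probability from Floyd and Jacobson's 1993 paper (the formula is not spelled out on the slide):

```latex
% RED marking probability as a function of the average queue size
% \overline{q}; min/max are the thresholds on the plot's x-axis.
p(\overline{q}) =
\begin{cases}
0 & \overline{q} < \mathit{min} \\[4pt]
\dfrac{\overline{q} - \mathit{min}}{\mathit{max} - \mathit{min}}\, p_{\max}
  & \mathit{min} \le \overline{q} < \mathit{max} \\[8pt]
1 & \overline{q} \ge \mathit{max}
\end{cases}
```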

23 Single DASH Flow - constant available bandwidth
[Backup plot: SABRE, single DASH flow at constant available bandwidth]

24 Single DASH flow - constant available bandwidth
[CDF plots of one-way delay, On/Off vs SABRE]
On/Off: delay > 200 ms about 40% of the time; SABRE: delay < 50 ms 100% of the time

25 Single DASH Flow – variable available bandwidth
[Plots: available bandwidth steps from 6 Mbps to 3 Mbps at T=180 s and back at T=380 s; On/Off vs SABRE]

26 Single ABR Flow – variable available bandwidth
[Backup plots: On/Off vs SABRE]

27 Two clients
At least one On/Off DASH client significantly increases queuing delays

28 Two clients

