
1 Tango: Toward a More Reliable Mobile Streaming through Cooperation between Cellular Network and Mobile Devices
Nawanol Theera-Ampornpunt, Tarun Mangla, Saurabh Bagchi, Rajesh Panta, Kaustubh Joshi, Mostafa Ammar, and Ellen Zegura
Purdue University, Georgia Institute of Technology, AT&T Labs Research

2 Motivation Mobile devices treat the cellular network as a black box
If the device and the network communicate, we can improve user experience in some applications. In this work, we focus on improving audio streaming when there is congestion in the network. [Diagram: mobile device, cellular network, and application server]

3 Audio Streaming Pandora model – songs are chosen by the service
Online audio streaming where the next songs are known. We focus on audio streaming services such as Pandora, or other services where the next songs are known in advance. This gives us the opportunity to pre-cache a lot of content when the situation requires it.

4 Buffer Size Tradeoff Large buffer -> more resilient to congestion
Small buffer -> lower bandwidth waste in case the user abandons the stream. Ideal: small buffer when connectivity is good; large buffer when congestion is expected. An important design decision every streaming client needs to make is the buffer size. A larger buffer means the playback is more resilient to temporary connectivity degradation, for example due to congestion. On the other hand, when the user abandons the stream or ends their session, the content in the buffer is wasted, so we also want to keep the buffer small. Ideally, we want the buffer to be small when the connectivity is good, and large when we expect the connectivity to be bad. This is only possible when the device and the network communicate.

5 Data Pre-caching Service
Runs inside the cellular network. Monitors the user's movement trajectory. Sends an alert to the streaming application when the user is predicted to enter a congested area. The application then significantly increases its buffer size to mitigate the effect of congestion. Therefore, we came up with the data pre-caching service, which streaming applications can register for. The service runs inside (close to) the cellular network, which provides real-time data to the service.
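As a sketch of the device-side behavior, here is a hypothetical registered client that enlarges its buffer on an alert; the class, callback names, and buffer sizes are illustrative assumptions, not the service's actual API (the roughly 10-song alert buffer is mentioned on a later results slide).

```python
# Minimal sketch of a registered streaming client, assuming a callback-style
# interface; names and buffer sizes are illustrative, not the actual API.
class StreamingClient:
    DEFAULT_BUFFER_SONGS = 1     # small buffer while connectivity is good (assumed value)
    PRECACHE_BUFFER_SONGS = 10   # large buffer under an alert (~10 songs, per a later results slide)

    def __init__(self):
        self.target_buffer_songs = self.DEFAULT_BUFFER_SONGS

    def on_precache_alert(self, congested_cell_id: str) -> None:
        """Called by the pre-caching service when the user is predicted to enter a congested cell."""
        self.target_buffer_songs = self.PRECACHE_BUFFER_SONGS  # pre-fetch upcoming songs now

    def on_congestion_cleared(self) -> None:
        """Shrink the buffer back once congestion is no longer expected, to limit waste."""
        self.target_buffer_songs = self.DEFAULT_BUFFER_SONGS
```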

6 Overview Offline phase: mobility prediction model training. Online phase: user location prediction and network load monitoring.
Here is the overview of the data pre-caching service. During the offline phase, user location data is used to train the mobility prediction model. During the online phase, the service continuously predicts the user's location and monitors the network load at the predicted location. When a user is predicted to enter a congested area, a pre-caching alert is sent to the application on the mobile device. [NodeB = cell tower in 3G, eNodeB = cell tower in 4G LTE, RNC = Radio Network Controller]
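To make the online phase concrete, here is a minimal, self-contained sketch of one monitoring step; the prediction and load functions, thresholds, and return format are illustrative assumptions, not the service's actual interfaces.

```python
# Sketch of one online-phase step: predict likely next cells, check the load at
# each, and emit pre-caching alerts. Thresholds and callables are assumed.
def online_step(user_trajectory, predict, cell_load, registered_apps,
                congestion_threshold=0.9, alert_probability=0.5):
    """Return (app, cell) pairs that should receive a pre-caching alert."""
    alerts = []
    for cell, prob in predict(user_trajectory):          # mobility prediction
        if prob >= alert_probability and cell_load(cell) >= congestion_threshold:
            alerts.extend((app, cell) for app in registered_apps)  # alert registered clients
    return alerts

# Toy usage with stand-in prediction and load functions.
print(online_step(
    user_trajectory=("B", "C"),
    predict=lambda traj: [("D", 0.7), ("E", 0.2)],        # stand-in mobility model
    cell_load=lambda cell: {"D": 0.95, "E": 0.30}[cell],  # fraction of capacity in use
    registered_apps=["audio_app"],
))  # -> [('audio_app', 'D')]
```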

7 Mobility Prediction Model
Operates at the cell sector level. Estimates the probability of entering cell C in the next u minutes given the past trajectory. Based on a simple conditional probability: P(enter cell C | trajectory) = Freq(enter cell C, trajectory) / Freq(trajectory). Counts for each unique trajectory are obtained from trace data. Our model uses cell IDs as input instead of GPS coordinates in order to avoid additional energy overhead. This information is already available at the cellular network as part of normal operation.
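The count-based formula above can be implemented directly. Below is a minimal sketch that, for simplicity, predicts only the cell entered immediately next rather than within the next u minutes; the class and method names are illustrative.

```python
# Sketch of the count-based model: P(enter C | trajectory) =
# Freq(enter C, trajectory) / Freq(trajectory). This simplification predicts
# only the immediately next cell, not a u-minute window.
from collections import Counter, defaultdict

class MobilityModel:
    def __init__(self, history_len=1):
        self.history_len = history_len            # past cells kept in addition to the current one
        self.traj_counts = Counter()              # Freq(trajectory)
        self.next_counts = defaultdict(Counter)   # Freq(enter cell C, trajectory)

    def train(self, cell_sequence):
        """Accumulate counts from one user's cell-level trace."""
        k = self.history_len + 1                  # trajectory length: current cell + history
        for i in range(len(cell_sequence) - k):
            traj = tuple(cell_sequence[i:i + k])
            nxt = cell_sequence[i + k]
            self.traj_counts[traj] += 1
            self.next_counts[traj][nxt] += 1

    def prob_enter(self, trajectory, cell):
        """Estimate P(enter cell | trajectory); 0.0 for unseen trajectories (a real system needs a fallback)."""
        traj = tuple(trajectory)
        total = self.traj_counts[traj]
        return self.next_counts[traj][cell] / total if total else 0.0

model = MobilityModel(history_len=1)
model.train(["A", "B", "C", "B", "C", "D"])
print(model.prob_enter(["B", "C"], "D"))   # 0.5: ("B", "C") seen twice, followed by "D" once
```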

8 Mobility Prediction Accuracy
[Plot: precision and recall vs. history length (number of past cells in trajectory)] This plot shows the accuracy of mobility prediction in terms of precision and recall (higher is better) for varied history lengths. A history length of zero means the prediction is based only on the current cell. While a longer history gives better accuracy, going from 1 to 2 gives only a small increase, so we decided to go with the simpler model with a history length of 1. Although the accuracy is not very high, we will see that it is enough to provide significant benefits compared to the current state of the art.

9 Trace-driven Simulations
We rely on simulations to estimate the benefits of the pre-caching service for a large number of users. The audio streaming client emulator keeps track of the current song position, buffer level, user location, etc. Emulated cells have a fixed capacity, with background traffic taken from real traces.
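As a rough illustration of the state the emulator tracks, here is a hypothetical per-tick update; the field names, units, and bandwidth accounting are assumptions, not the actual simulator used in the paper.

```python
# Hypothetical per-tick update for an audio streaming client emulator.
from dataclasses import dataclass

@dataclass
class ClientState:
    song_position_s: float = 0.0   # playback position within the current song (seconds)
    buffer_s: float = 0.0          # seconds of audio buffered ahead of playback
    cell_id: str = "A"             # current cell, taken from the user's location trace
    bitrate_kbps: int = 128        # current stream bit-rate

def tick(state: ClientState, available_kbps: float, dt_s: float = 1.0) -> bool:
    """Advance by dt_s seconds; return True if playback paused (rebuffering).

    available_kbps is the cell capacity left over after background traffic;
    capping the buffer at the client's target size is omitted for brevity.
    """
    state.buffer_s += (available_kbps / state.bitrate_kbps) * dt_s  # download fills the buffer
    if state.buffer_s >= dt_s:                                      # enough audio to keep playing
        state.buffer_s -= dt_s
        state.song_position_s += dt_s
        return False
    return True                                                     # buffer ran dry: pause
```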

10 Simulated Congestion Simulated congested cells have a capacity of zero
Three simulations, each with one type of congestion: static congestion – congestion in 20% of cells for the whole duration; random congestion – congestion in 50% of cells lasting 0-20 minutes; flash crowds – 50 congestion events that move like a user. Because congestion is not present in the traces we use, we simulate congestion by setting the capacity of congested cells to zero.
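One possible way to generate the static and random congestion patterns in such a simulation is sketched below; the run length, sampling, and return format are assumptions, and flash crowds (which move like a user) are omitted because they would need a mobility trace.

```python
# Sketch of congestion generators: each returns a predicate telling whether a
# cell is congested (capacity zero) at a given minute. Parameters are assumed.
import random

def static_congestion(cells, fraction=0.2):
    """20% of cells are congested for the whole run."""
    congested = set(random.sample(cells, int(fraction * len(cells))))
    return lambda cell, minute: cell in congested

def random_congestion(cells, fraction=0.5, max_duration_min=20, run_min=180):
    """50% of cells get one congestion episode lasting 0-20 minutes."""
    episodes = {}
    for cell in random.sample(cells, int(fraction * len(cells))):
        start = random.uniform(0, run_min)
        episodes[cell] = (start, start + random.uniform(0, max_duration_min))
    return lambda cell, minute: (cell in episodes
                                 and episodes[cell][0] <= minute <= episodes[cell][1])

# Usage: is_congested = static_congestion(list_of_cell_ids); is_congested("C17", 42)
```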

11 Approaches Compared Baseline: fixed buffer size and no bit-rate adaptation. MPEG-DASH: fixed buffer size with bit-rate adaptation. Tango: dynamic buffer size, evaluated with and without bit-rate adaptation, and with and without a perfect location predictor.

12 Results – Pause Time (1) First, we look at the pause time due to rebuffering for various numbers of audio streaming users under static congestion. The x-axis is the number of audio streaming users. The y-axis is the pause time as a percentage of total playback time (lower is better). BA refers to bit-rate adaptation, and PLP refers to the perfect location predictor. We can see that the three approaches that employ bit-rate adaptation keep the pause time almost constant as the number of users increases. However, with or without bit-rate adaptation, Tango gives a significant reduction compared to the corresponding baseline. Tango with the perfect location predictor performs slightly better than Tango alone.

13 Results – Pause Time (2) With random congestion, the general trend is similar to static congestion. However, one key difference is that Tango with the perfect location predictor barely performs better than the variants with the actual location predictor. This is because random congestion is less predictable than static congestion. The location predictor only predicts the future location, not whether a cell will become congested in the future. Therefore, even with a perfect location predictor, incorrect decisions are still made. With an imperfect location predictor, false positives can sometimes be beneficial when congestion springs up unexpectedly.

14 Results – Pause Time (3) With flash crowds, congestion is even less predictable than with random congestion. This results in a smaller benefit from Tango relative to the baseline and DASH. Still, Tango performs better than the baseline and DASH regardless of the number of audio streaming users.

15 Results – Average Stream Bit-rate
Next, we look at the quality of the audio stream, measured as the average stream bit-rate. For approaches without bit-rate adaptation, the bit-rate is fixed at 128 kbps. Tango with bit-rate adaptation has slightly lower quality, while DASH lowers the quality significantly more than Tango. So, compared to DASH, Tango gives a lower pause time while keeping the stream quality higher.

16 Results – Buffer size vs. Pause Time
This plot shows the effect of varying the buffer size on the pause time. Note that for Tango, this buffer size is the default buffer size used when there is no congestion. When the user is predicted to enter a congested area, the buffer size becomes roughly 10 songs. We can see that in order to match Tango's performance, the buffer size needs to be increased to 4 songs for DASH and 7 songs for the baseline. This leads to significantly more bandwidth waste when the user abandons the stream or ends their session.

17 Conclusion We propose Tango, a framework that enables cooperation between mobile devices and the cellular network. We introduce a data pre-caching service that notifies the streaming application of impending congestion. Trace-based simulations show the service reduces pause time by 13-72%, depending on the congestion type and the number of users.

18 Questions?

