RECEIVER-DRIVEN LAYERED MULTICAST


1 RECEIVER-DRIVEN LAYERED MULTICAST
CS598 Advanced Multimedia Systems, Fall 2017 – Professor Klara Nahrstedt. October 10, 2017.

2 Acknowledgments
This deck was created for educational purposes; some slides are borrowed, in part or in full, from the sources below:
Course slides from CMSC691M, Fall 2004, University of Maryland, Baltimore County, by Professor Padma Mundur.
Course slides from EE689, Worcester Polytechnic Institute.
Conference slides from SIGCOMM 1996 by the authors.
“Multicasting on the Internet and its Applications” by Sanjoy Paul (ACM Digital Library book), Chapter 26.

3 Today’s Paper Receiver-driven Layered Multicast (RLM)
Keywords: Receiver, Layered, Multicast.

4 Authors Steven McCanne, Van Jacobson, Martin Vetterli
University of California, Berkeley. SIGCOMM 1996, Stanford, CA.

5 Agenda
Overview: Problem Statement / Need for RLM
What is RLM: RLM Design Details
Rate adaptation: Server Side or Client Side
Spare Capacity Determination
Probing Period
Detection Period
Which receiver does the join experiment?
Adaptation of timers in RLM
Evaluation
Discussion of Results in the Paper
Review/Critique of Paper

6 Problem: Network Heterogeneity
All networks are not alike.
Static heterogeneity: receivers and link capacities.
Dynamic heterogeneity: response to congestion.
Question: how can we make effective use of the network while running a real-time, rate-adaptive multimedia application?

7 Problem: Network Heterogeneity
Issues to consider while solving:
We want to send to many recipients: multicasting.
One bandwidth for all is sub-optimal:
Too low: underutilization.
Too high: congestion.

8 Approaches
Adjust the sender rate to the network capacity:
Not well defined for a multicast network.
Does not scale well if receivers send feedback.
Layer the delivered content so that each receiver can adjust its received quality accordingly.

9 The Layered Approach
The router drops packets upon congestion.
A receiver receives only the channels it requested.
No explicit signal to the sender is needed.
This work's contribution: explicit exploration of the second approach, Receiver-driven Layered Multicast (RLM).

10 Network Model for RLM
Works with IP multicast.
Best effort: packets may be reordered, lost, or arbitrarily delayed.
Traffic flows only along links with downstream recipients.
Group-oriented communication: senders do not know the receivers, and receivers can come and go.
Receivers may specify different senders.
Together, this ensemble is known as a session.

11 RLM Sessions
Each session is composed of layers, with one layer per multicast group.
Layered encoding introduces inter-layer dependency but gives higher efficiency.
Priority drop at the router can improve quality and rewards higher-bandwidth users.

12 Layered Video Encoding
One channel per layer.
Layers are additive: adding more channels improves quality but requires more bandwidth.

13 The RLM Protocol
Main idea: finding the optimal bandwidth/quality operating point.
On congestion, a receiver drops a layer.
On spare capacity, a receiver adds a layer.
Similar to bandwidth probing in TCP. Is there any difference from TCP congestion control? (TBD)

14 Adding and Dropping Layers
Drop a layer on packet loss.
There is no explicit signal for adding a layer. Why?
Add a new layer at well-chosen times; this is called a join experiment.
If the join experiment fails, drop the layer, since it is causing congestion.
If the join experiment succeeds, we are one step closer to the proper operating level.
But join experiments can cause congestion, so we only want to try them when they are likely to succeed. How do we know? (A sketch of the loop follows.)
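A minimal sketch of this add/drop loop, in Python. The helpers subscribe(), unsubscribe(), and loss_detected() are hypothetical placeholders standing in for multicast group joins/leaves and loss monitoring; this is not the authors' implementation.

    # Sketch of the RLM receiver loop: drop a layer on loss, add one when
    # the join timer fires. Helper names are hypothetical placeholders.
    import time

    MAX_LAYER = 6

    def subscribe(layer):   print(f"join group for layer {layer}")   # stub
    def unsubscribe(layer): print(f"leave group for layer {layer}")  # stub
    def loss_detected():    return False                             # stub

    def rlm_loop(join_timer=5.0):
        level = 1                               # layers currently subscribed
        subscribe(level)
        next_join = time.time() + join_timer
        while True:
            if loss_detected():                 # congestion: shed top layer
                if level > 1:
                    unsubscribe(level)
                    level -= 1
                next_join = time.time() + join_timer
            elif time.time() >= next_join and level < MAX_LAYER:
                level += 1                      # join experiment: add a layer
                subscribe(level)
                next_join = time.time() + join_timer
            time.sleep(0.1)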

15 Join Experiments
Use a join timer per layer; a short timer is fine while adding layers is not problematic.
Exponentially back off the length of the timer whenever adding the layer causes congestion (see the sketch below).
How long should we wait to detect congestion before declaring a join experiment successful? That is the detection time.
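A sketch of the per-layer timer bookkeeping; the constants are illustrative assumptions, not the paper's values.

    # Per-layer join timers with exponential backoff on failure.
    # INITIAL_TIMER, BACKOFF, and MAX_TIMER are assumed values.
    INITIAL_TIMER = 5.0    # seconds before the first experiment at a layer
    BACKOFF = 2.0          # multiplier applied on each failed experiment
    MAX_TIMER = 600.0      # cap so a layer is still retried occasionally

    join_timer = {layer: INITIAL_TIMER for layer in range(1, 7)}

    def on_failed_join(layer):
        # Adding this layer caused congestion: back off exponentially.
        join_timer[layer] = min(join_timer[layer] * BACKOFF, MAX_TIMER)

    def on_quiet_period(layer):
        # After sustained success the timer can relax back toward its floor.
        join_timer[layer] = max(join_timer[layer] / BACKOFF, INITIAL_TIMER)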

16 Join Experiments

17 Detection Time
Hard to estimate. Why?
Start conservatively with a large value and adjust it using failed joins: when congestion is detected after a join, update the detection time with the interval from the start of the join experiment to the detection.
Why does it matter? Convergence time. (A sketch of the estimator follows.)
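One plausible shape for this estimator, sketched as an exponentially weighted average; the smoothing gain and initial value are assumptions.

    # Detection-time estimator: average the interval from the start of a
    # failed join experiment to the moment congestion was detected.
    ALPHA = 0.25                   # smoothing gain (assumed value)
    detection_time = 2.0           # start conservatively large (seconds)

    def update_detection_time(join_start, detected_at):
        global detection_time
        sample = detected_at - join_start
        detection_time += ALPHA * (sample - detection_time)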

18 Scaling RLM
As the number of receivers increases, the cost of join experiments increases. Why? It does not scale well.
Join experiments of different receivers can interfere: for example, R1 tries to join layer 2 while R2 tries to join layer 4, and both might decide their experiment failed.

19 Scaling RLM
Partial solution: reduce the frequency of join experiments as the group size grows.
But then it can take too long to converge to the operating level.
Solution: shared learning.

20 Shared Learning
A receiver multicasts its intent before running a join experiment.
If the experiment fails, all receivers can adjust their timers.
An experiment in progress suppresses join experiments at higher layers; experiments at the same or a lower layer can all proceed. (See the sketch below.)
Note: priority drop would interfere with this. Why?
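A sketch of the suppression and shared-backoff rules; the message handling and function names are assumptions for illustration.

    # Shared learning, sketched. A receiver announces the layer it is
    # about to try; on hearing an announcement, others react as follows.
    def should_suppress(my_planned_layer, announced_layer):
        # Same-or-lower-layer experiments may run in parallel and share
        # their outcome; a higher-layer attempt is suppressed, since loss
        # caused by the announced experiment would confound its result.
        return my_planned_layer > announced_layer

    def on_announced_failure(announced_layer, join_timer, backoff=2.0):
        # Everyone learns from a failed experiment: back off the timer
        # for that layer without having to try it themselves.
        join_timer[announced_layer] *= backoff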

21 RLM State Machine
Td: drop timer. Tj: join timer.
S: steady state. H: hysteresis state. M: measurement state. D: drop state.
The S state always has a pending join timer.
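An illustrative transition table for the four states. This paraphrases and simplifies; the paper's figure is the authoritative machine.

    # Simplified, illustrative RLM state table: (state, event) -> (next, action).
    # Several transitions are omitted; see the paper's figure for the full machine.
    TRANSITIONS = {
        ("S", "Tj expires"): ("M", "run a join experiment (add a layer)"),
        ("M", "no loss within detection time"): ("S", "experiment succeeded"),
        ("M", "loss"):       ("D", "experiment failed: drop layer, back off Tj"),
        ("S", "loss"):       ("D", "congestion: drop the top layer, start Td"),
        ("D", "Td expires"): ("H", "wait out loss caused by our own leave"),
        ("H", "Td expires"): ("S", "resume steady state"),
    }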

22 Evaluation
Simulated in ns, an event-driven packet simulator.
Goal: evaluate scalability.
Video is modeled as a CBR source at each layer, with extra variance from some 'think' time of less than one frame delay.
(But video is often bursty! Left as future work.)

23 Parameters
Bandwidth: 1.5 Mbps.
Layers: 6, with layer m running at 32 x 2^m kbps (m = 0 ... 5). (Worked out below.)
Start time: random, 30-120 seconds.
Queue management: drop-tail, queue size 20 packets.
Packet size: 1 KByte.
Latency: varies.
Topology: next slide.
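A quick check of what those layer rates imply, under the slide's parameters:

    # Layer m runs at 32 * 2**m kbps; six layers, m = 0..5.
    rates = [32 * 2**m for m in range(6)]
    cumulative = [sum(rates[:k + 1]) for k in range(6)]
    print(rates)        # [32, 64, 128, 256, 512, 1024] kbps
    print(cumulative)   # [32, 96, 224, 480, 992, 2016] kbps
    # On a 1.5 Mbps bottleneck, five layers (992 kbps) fit but all six
    # (2016 kbps) do not, so the optimal subscription level is five layers.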

24 Topologies
1 – Delay scalability
2 – Scalability with session size
3 – Bandwidth heterogeneity
4 – Superposition of independent sessions

25 Performance Metrics
Worst-case loss rate over varying time intervals:
Short-term: how bad transient congestion is.
Long-term: how often congestion occurs.
Throughput as a percentage of the available bandwidth.
But throughput always reaches 100% eventually, since there is no random, bursty background traffic; so also look at the time to reach the optimal level.
Both loss and throughput matter, so we need to look at both. (A sketch of the loss metric follows.)
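A sketch of the worst-case loss metric; the per-second count layout is an assumption, and the paper's exact computation may differ.

    # Worst-case loss rate over a sliding window of `window` seconds,
    # given per-second lost/sent packet counts (assumed data layout).
    def worst_case_loss(lost, sent, window):
        worst = 0.0
        for i in range(len(sent) - window + 1):
            s = sum(sent[i:i + window])
            if s > 0:
                worst = max(worst, sum(lost[i:i + window]) / s)
        return worst

    # Short windows (1 s) expose transients; long windows (100 s) show how
    # often congestion occurs.
    print(worst_case_loss([0, 5, 0, 1], [10, 10, 10, 10], 2))  # 0.25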

26 Latency Scalability Results
Topology 1, delay 10 ms.
Converges to the optimal level in about 30 seconds.
Join experiments last less than 1 second, growing longer as the queue builds up at higher levels.
Next: vary the delay from 1 ms to 20 s and measure loss.

27 Latency Scalability Results
Loss is averaged over windows of 1, 10, and 100 seconds.
Two observations:
As the latency increases, we expect performance to decrease, since it takes longer to learn that loss is happening (matches intuition).
The curve has a knee at twice the link latency plus the queue-buildup time (matches intuition; a rough calculation follows).
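A rough reading of where that knee sits under the parameter slide's numbers; this is illustrative arithmetic, not the paper's exact model.

    # Feedback that a join caused congestion takes roughly two link
    # latencies plus the time for the 20-packet drop-tail queue to fill
    # at the bottleneck. Numbers from the parameter slide; illustrative only.
    link_latency = 0.010                  # 10 ms
    queue_fill = 20 * 1024 * 8 / 1.5e6    # 20 packets of 1 KB at 1.5 Mbps
    print(2 * link_latency + queue_fill)  # ~0.13 s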

28 Session Scalability Results: Loss
Topology 2, 10 ms latencies, 10-minute run.
Two observations:
Worst-case loss rates are independent of the session size.
Even for the largest sessions, the long-term loss rate is only around 1%.

29 Session Scalability Results: Convergence
Intuitively, as the number of receivers grows, we expect longer convergence times, since other receivers will suppress a given receiver's join experiment for longer.
Experimentally, the linear trend (against the logarithm of session size) indicates logarithmic convergence, which means shared learning is helping.

30 Bandwidth Heterogeneity Results
Topology 3.
Observations:
Worst-case loss rates are somewhat higher in the heterogeneous case than in the homogeneous case (matches intuition).
The slope is steeper at smaller window sizes, which the authors attribute to larger session sizes causing more collisions among join experiments.
At larger session sizes, short-term congestion is managed by the detection-time estimator, so it does not grow without bound.

31 Many Sessions Results
Topology 4, with the bottleneck bandwidth and queue size scaled.
When independent sessions were superposed, link utilization converged to the optimum, but the authors observed that the bandwidth allotted to each session pair was unfair, especially early on.

32 Network Dependencies
Requires receiver cooperation: if the receiver application crashes, the host stays subscribed.
Group maintenance is critical: routers must handle joins and leaves quickly.
Network allocation may be unfair; there should be a 'good' level for all receivers sharing a link. TCP has the same problem.
Active queue management (e.g., RED) may help decrease the time to detect a failed join experiment.

33 The Application
Build the compression format knowing the network constraints, not vice versa.
They have a real working application: RLM is integrated into vic.
The RLM component is not on the 'fast path', since it changes more slowly than the media path; it is written in Tcl.

34 Summary
Multicast, receiver-based adaptation, and layered video have all been done before, but this is the first complete system with a performance evaluation.

35 “Future” Work
A compression scheme that can partition the signal into layers more finely.
Adapt compression to the receivers: for example, if all receivers are high-bandwidth except one, compress into just two levels.
RLM with other traffic (e.g., TCP).
Combining RLM with SRM to optimize latency.
Improved modeling of layered signal sources.
Followed up by the 1997 paper “Low-complexity video coding for receiver-driven layered multicast”.

36 Review/Critique
A thought-provoking paper, in the context of the days before the Internet revolution.
The evaluation covers the limitations of the protocol and does not make far-fetched claims.
The reasoning behind the topologies is explicitly stated, which improves the readability of the paper.
The design reuses the existing IP multicast infrastructure and adds intelligence at the receivers.
Today, many video sources are bursty; the paper leaves that for future evaluation.
Evaluation is done on a network simulator; how the protocol behaves in a real network, with cross-traffic and many transients, is left out.
The analysis of protocol constants and the heuristics behind selecting their values could have received more attention.
The performance metrics could have included quality of experience.

37 Questions/Discussion

