Multicast Congestion Control in the Internet: Fairness and Scalability


1 Multicast Congestion Control in the Internet: Fairness and Scalability
Sponsored by Tektronix and the Schlumberger Foundation technical merit award Chin-Ying Wang Advisor: Sonia Fahmy Department of Computer Sciences Purdue University

2 Overview
What is Multicasting?
PGM
PGMCC
Feedback Aggregation
Fairness
Conclusions and Ongoing Work

3 What is Multicasting?
Multicasting allows information exchange among multiple senders and multiple receivers
Popular applications include: audio/video conferencing, distributed games, distance learning, searching, server and database synchronization, and many more

4 How does Multicasting Work?
A single datagram is transmitted from the sending host
This datagram is replicated at network routers and forwarded to interested receivers via multiple outgoing links
With multicast connections, traffic and management overhead do not grow in proportion to the number of participants
If reliability is required, receivers provide feedback to notify the sender whether the data was received
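The replication step above can be sketched in a few lines. This is an illustrative model only (function and link names are invented for the example, not part of any multicast API): one incoming datagram fans out onto every outgoing link that leads to interested receivers, so the sender transmits each datagram exactly once.

```python
# Illustrative sketch of datagram replication at a multicast router.
# A single incoming datagram is copied onto each outgoing link with
# interested receivers downstream; the sender sends it only once.
def replicate(datagram, interested_links):
    """Return one (link, datagram) pair per interested outgoing link."""
    return [(link, datagram) for link in interested_links]

out = replicate("pkt-1", ["link-to-R1", "link-to-R2"])
print(len(out))  # 2: one datagram in, one copy per interested link out
```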

5 The Feedback Implosion Problem
[Diagram: each receiver R returns ACK/NAK feedback toward the sender S through the routers; with many receivers, this feedback converges on the sender, causing feedback implosion.]

6 The Congestion Control Problem
How should the sender determine the sending rate when different receivers can sustain different rates (e.g., 300 Kb/s, 500 Kb/s, 750 Kb/s, and 1000 Kb/s)?

7 Our Goals To study the impact of feedback aggregation on a promising protocol, the PGMCC multicast congestion control protocol To evaluate PGMCC performance when competing with bursty traffic in a realistic Internet-like scenario Ultimately, to design more scalable and more fair multicast congestion control techniques

8 Multicast Congestion Control
Single-rate schemes: the sender adapts to the slowest receiver (300 Kb/s in the example, even though other receivers can sustain 500, 750, or 1000 Kb/s)
TCP-like service: one window/rate for all the receivers
Limitations:
Underutilization on some links
Selects the slowest receiver in the group ("crying baby syndrome")
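A single-rate scheme reduces to taking the minimum over the rates the receivers can sustain. A minimal sketch (receiver names and rates are illustrative, taken from the example figures):

```python
# Minimal sketch of a single-rate multicast scheme: the sender adapts
# its rate to the slowest receiver in the group.
def single_rate(receiver_rates_kbps):
    """Return the sending rate: the minimum sustainable receiver rate."""
    return min(receiver_rates_kbps.values())

rates = {"R1": 1000, "R2": 500, "R3": 300, "R4": 750}
print(single_rate(rates))  # 300 -- the "crying baby" dictates the rate
```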

9 The PGM Multicast Protocol
PGM: Pragmatic General Multicast
A single-sender, multiple-receiver multicast protocol
Reliability: NAK-based retransmission requests
Scalability: feedback aggregation and selective repair forwarding
Replicated NAKs from the same sub-tree are suppressed at each router
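The per-router NAK suppression can be sketched as follows. This is a simplified illustration, not the PGM state machine from the specification (class and method names are invented): the first NAK for a sequence number is forwarded upstream and confirmed with an NCF; subsequent NAKs for the same sequence number from the same sub-tree are suppressed and only re-confirmed.

```python
# Hedged sketch of PGM-style NAK suppression at a router: only the
# first NAK per missing sequence number travels upstream; duplicates
# from the same sub-tree are answered with an NCF and suppressed.
class PgmRouterState:
    def __init__(self):
        self.pending = set()  # sequence numbers with an outstanding NAK

    def on_nak(self, seq):
        """Return the actions taken for an incoming NAK."""
        if seq in self.pending:
            return ["send NCF downstream"]  # duplicate: suppress it
        self.pending.add(seq)
        return ["forward NAK upstream", "send NCF downstream"]

    def on_rdata(self, seq):
        """Repair data arrived; clear the suppression state."""
        self.pending.discard(seq)

r = PgmRouterState()
print(r.on_nak(7))  # first NAK: forwarded upstream and confirmed
print(r.on_nak(7))  # replicated NAK from the sub-tree: suppressed
```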

10 PGM NAK/NCF Dialog Router Router Router
[Diagram: a receiver's NAK is forwarded hop-by-hop toward the PGM sender; each router confirms it with an NCF sent downstream, and the repair data (RDATA) follows the reverse path of the original data (ODATA).]
See [Miller1999] and the PGM RFC for more details.

11 PGMCC [Rizzo2000]
Uses a TCP throughput approximation to select the group representative, called the "acker"
The acker is switched from the current acker J to receiver I when T(I) < c·T(J)
[Diagram: a newly joined receiver whose throughput T(I) < c × the current acker's throughput T(J) becomes the new acker.]

12 PGMCC (cont'd)
Attempts to be TCP-friendly, i.e., on average no more aggressive than TCP
ACKs are exchanged between the sender and the acker
TCP-like window increase and decrease
The throughput of each receiver is computed as a function of fields in NAK packets:
Round Trip Time (RTT)
Packet loss
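The acker election described above can be sketched with the simplified TCP throughput model T ∝ 1/(RTT·√p). This is illustrative only: the constant c and the simplified throughput formula are assumptions for the sketch, not the exact expressions used by the PGMCC implementation.

```python
import math

# Sketch of PGMCC-style acker election. Each receiver reports RTT and
# loss rate p in its NAKs; the sender estimates a TCP-equivalent
# throughput T ~ 1 / (RTT * sqrt(p)) and switches the acker to a
# receiver I when T(I) < c * T(J) for the current acker J.
# The value c=0.75 is an illustrative choice, not from the paper.
def tcp_throughput(rtt_s, loss_rate):
    """Simplified TCP throughput approximation (arbitrary units)."""
    return 1.0 / (rtt_s * math.sqrt(loss_rate))

def should_switch_acker(candidate, acker, c=0.75):
    """candidate/acker are (RTT seconds, loss rate) tuples."""
    return tcp_throughput(*candidate) < c * tcp_throughput(*acker)

# A receiver with longer RTT and higher loss displaces the current acker:
print(should_switch_acker((0.200, 0.05), (0.100, 0.01)))  # True
```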

13 Feedback Aggregation Experimental Topology
Goal: to determine whether unnecessary or missing acker switches occur due to feedback aggregation
The ns-2 simulator is used. All links are 10 Mb/s with 50 ms delay; the links toward two of the receivers experience 20% and 25% loss.
[Topology diagram: PGM sender PS connected through two routers to receivers PR1–PR4.]

14 Feedback Aggregation Experimental Result

15 PGMCC Fairness
Simulate PGMCC in a realistic scenario similar to the current Internet
The objective is to determine whether PGMCC remains TCP-friendly in this scenario
Different bottleneck link bandwidths are used in the simulation:
Highly congested network
Medium congestion
Non-congested network

16 General Fairness (GFC-2) Experimental Topology
[Topology diagram: 22 source nodes (S0–S21) and 22 destination nodes (D0–D21) attached to seven routers (router0–router6) connected by Link0–Link5; PGM sender PS and PGM receivers PR1–PR5 are attached to the same topology.]

17 Topology (cont'd)
22 source nodes (S*) and 22 destination nodes (D*)
A NewReno TCP connection runs between each pair of source and destination nodes
One UDP flow sending Pareto traffic with a 500 ms on/off interval runs across Link4
All simulations were run for 900 seconds
The traced TCP connection runs from S4 to D4

18 Topology (cont’d) Link bandwidth between each node and router is 150 kbps with 1 ms delay Link bandwidths and delays between routers are: Link0 Link1 Link2 Link3 Link4 Link5 Bandwidth (kbps) 50 100 150 Delay (ms) 20 10 5

19 Highly Congested Network
PGM has a higher throughput in the first 50 seconds Afterwards, PGM has very low throughput due to time-outs

20 Medium Congestion
All simulation parameters are kept unchanged, except that the link bandwidth between routers is increased to 2.5 and 3.5 times the bandwidth of the "highly congested" network
The PGM flow outperforms TCP during initial acker switching
TCP achieves higher throughput when the timeout interval at the PGM sender does not adapt to the increased acker RTT

21 Medium Congestion (cont’d)
Bandwidth = 2.5×”Congested”

22 Medium Congestion (cont’d)
Bandwidth = 3.5×”Congested”

23 Non-congested Network
All simulation parameters are kept unchanged, except that the link bandwidth between routers is increased to 10 and 80 times the bandwidth of the highly congested network
The PGM flow outperforms the TCP flow as the bandwidth increases:
Frequent acker switches increase the PGMCC sender's window
The RTT of the PGMCC acker is shorter than the TCP flow's RTT at many instants

24 Non-congested Network (cont’d)
Bandwidth = 10×”Congested”

25 Non-congested Network (cont’d)
Bandwidth = 80×”Congested”

26 Main Results
Feedback aggregation:
Results in incorrect acker selection with PGMCC
The problem is difficult to remedy without router assistance
PGMCC fairness in realistic scenarios:
Initial acker switches cause the PGM flow to outperform the TCP flow, due to the steep increase of the PGM sending window
A TCP-like retransmission timeout is needed to avoid the PGM performance degradation caused by using a fixed timeout interval

27 Ongoing Work Conduct Internet experiments with various reliability semantics (e.g., unreliable and semi-reliable transmission) and examine their effect on PGMCC, especially on acker selection with insufficient NAKs Exploit Internet tomography in multicast and geo-cast application-layer overlays [NOSSDAV2002, ICNP2002]

