Router Buffer Sizing and Reliability Challenges in Multicast
Aditya Akella, 02/28

TCP Performance
Can TCP saturate a link? Congestion control works as follows:
– Increase utilization until the link becomes congested
– React by decreasing the window by 50%
– Window is proportional to rate * RTT
Doesn't this mean the network oscillates between 50% and 100% utilization, for an average of 75%?
– No, this is *not* right!

Single TCP Flow: router without buffers

Summary: Unbuffered Link
[Figure: window W vs. time t, marking the minimum window needed for full utilization]
The router can't fully utilize the link:
– If the window is too small, the link is not full
– If the link is full, the next window increase causes a drop
– With no buffer it still achieves 75% utilization
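
A minimal numeric sketch of the 75% claim above (illustrative only, assuming the largest usable window on an unbuffered link equals the bandwidth-delay product, so the window saw-tooths between BDP/2 and BDP):

```python
# Average utilization over one AIMD sawtooth on an unbuffered link.
# The BDP value is arbitrary/illustrative.
bdp = 100                 # bandwidth-delay product, in packets
w = bdp // 2              # window right after a loss halves it
utilizations = []
while True:
    utilizations.append(min(w, bdp) / bdp)   # per-RTT utilization, no buffer to absorb bursts
    if w >= bdp:                             # the next increase would overflow and drop
        break
    w += 1                                   # additive increase: +1 packet per RTT

print(f"average utilization over one sawtooth: {sum(utilizations)/len(utilizations):.1%}")
# -> about 75%
```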

TCP Performance
In the real world, router queues play an important role:
– Window is proportional to rate * RTT, but the RTT changes along with the window
– Window needed to fill the link = propagation RTT * bottleneck bandwidth
– If the window is larger than that, packets sit in the queue on the bottleneck link

TCP Performance
With a large enough router queue we can get 100% utilization, but router queues can cause large delays. How big does the queue need to be?
– The window varies from W down to W/2, and we must make sure the link is always full
– W/2 >= RTT * BW, and W = RTT * BW + Qsize, therefore Qsize >= RTT * BW
– This ensures 100% utilization
– Delay? It varies between RTT and 2 * RTT
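
For reference, a compact restatement of the derivation on this slide (same notation as above, nothing new assumed):

```latex
% C = bottleneck bandwidth, RTT = propagation round-trip time, Q = buffer size.
\begin{align*}
  W &= \mathrm{RTT}\cdot C + Q_{\text{size}}
      && \text{window just before a loss fills the pipe plus the buffer}\\
  \frac{W}{2} &\ge \mathrm{RTT}\cdot C
      && \text{after halving, the window must still fill the pipe}\\
  \Rightarrow\quad Q_{\text{size}} &\ge \mathrm{RTT}\cdot C
      && \text{the buffer-sizing rule of thumb}
\end{align*}
```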

Single TCP Flow: router with large enough buffers for full link utilization

Example
A 10 Gb/s linecard:
– Requires ~300 MB of buffering under the rule of thumb
– Must read and write a 40-byte packet every 32 ns
Memory technologies:
– DRAM: would require 4 devices, but is too slow
– SRAM: would require 80 devices, ~1 kW, ~$2000
– The problem gets harder at 40 Gb/s, hence RLDRAM, FCRAM, etc.
The rule of thumb makes sense for one flow, but a typical backbone link carries > 20,000 flows. Does the rule of thumb still hold?
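
A back-of-the-envelope check of the numbers above. The slide does not state an RTT; 250 ms is an assumed "typical" wide-area value:

```python
C   = 10e9      # link rate in bits/s (10 Gb/s linecard)
RTT = 0.25      # assumed round-trip time in seconds

buffer_bits = RTT * C
print(f"rule-of-thumb buffer: {buffer_bits / 8 / 1e6:.0f} MB")      # ~312 MB, i.e. roughly 300 MB

min_pkt_bits = 40 * 8                                               # minimum-size 40-byte packet
print(f"time per 40-byte packet: {min_pkt_bits / C * 1e9:.0f} ns")  # ~32 ns
```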

If flows are synchronized
– The aggregate window has the same sawtooth dynamics as a single flow
– Therefore the buffer occupancy has the same dynamics
– The rule of thumb still holds

If flows are not synchronized
[Figure: probability distribution of buffer occupancy, over buffer sizes from 0 to B]

Central Limit Theorem
The CLT tells us that the more variables (congestion windows of individual flows) we sum, the narrower the Gaussian describing the fluctuation of that sum:
– The width of the Gaussian decreases with 1/sqrt(n)
– So the buffer size should also decrease with 1/sqrt(n): B ~ RTT * C / sqrt(n)
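
A sketch of the small-buffer sizing implied above: with n desynchronized flows the aggregate-window fluctuation shrinks like 1/sqrt(n), so the buffer can shrink the same way. Link rate and RTT reuse the earlier (assumed) values; n = 20,000 comes from the linecard example:

```python
import math

C, RTT, n = 10e9, 0.25, 20_000

B_rule_of_thumb = RTT * C
B_small         = B_rule_of_thumb / math.sqrt(n)

print(f"rule of thumb     : {B_rule_of_thumb / 8 / 1e6:7.1f} MB")  # ~312.5 MB
print(f"with 20,000 flows : {B_small / 8 / 1e6:7.1f} MB")          # ~2.2 MB
```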

Loss Recovery in Multicast
Sender-reliable:
– Wait for ACKs from all receivers; re-send on timeout or selective ACK
– Per-receiver state at the sender is not scalable
– ACK implosion
Receiver-reliable:
– Receivers NACK (request a resend of) lost packets
– Does not by itself provide 100% reliability
– NACK implosion
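
A minimal sketch of the receiver-reliable idea above: the receiver detects a sequence-number gap and sends a NACK, so the sender keeps no per-receiver state. Names (NackReceiver, send_nack) are illustrative, not from any particular protocol implementation:

```python
class NackReceiver:
    def __init__(self, send_nack):
        self.expected_seq = 0
        self.send_nack = send_nack          # callback that transmits a resend request

    def on_data(self, seq, payload):
        if seq > self.expected_seq:
            # Gap detected: request only the missing packets.
            for missing in range(self.expected_seq, seq):
                self.send_nack(missing)
        self.expected_seq = max(self.expected_seq, seq + 1)
        # Note: if the *last* packet of a transfer is lost, no gap ever appears,
        # which is one reason NACKs alone do not give 100% reliability.
```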

Implosion
[Figure: source S multicasting packets 1 and 2 to receivers R1-R4. Packet 1 is lost, and all four receivers send a resend request back to S]

Retransmission
Who re-transmits?
– Options: the sender, or other receivers
How to retransmit?
– Unicast, multicast, scoped multicast, a retransmission group, ...
Problem: exposure

Exposure
[Figure: packet 1 does not reach R1, so R1 requests a resend; packet 1 is then resent to all four receivers, exposing R2-R4 to an unnecessary duplicate]

Ideal Recovery Model
[Figure: packet 1 reaches R1 but is lost before reaching the other receivers; only one receiver sends a NACK to the nearest node (S or a receiver) that has the packet, and the repair is sent only to those that need it]

Scalable Reliable Multicast (SRM)
– Originally designed for wb (a shared whiteboard application)
– Receiver-reliable, NACK-based
– Every member may multicast a NACK or a retransmission

SRM Request Suppression
[Figure: packet 1 is lost; R1's resend request reaches the source and the other receivers first (request delay varies with distance), so R2 and R3 suppress their own requests; packet 1 is then resent]

Deterministic Suppression
[Figure: timeline of data, NACK, and repair on a chain topology with per-hop delay d; requestors farther from the sender wait longer (2d, 3d, 4d), so the nearest requestor's NACK suppresses the others. Legend: sender, repairer, requestor]
Delay = C1 * d(S,R), where d(S,R) is the requester's estimated distance to the source

SRM Star Topology
[Figure: star topology where every receiver is the same distance from the source. Packet 1 is lost; because the deterministic delay is the same length for all receivers, every one of them requests a resend, and packet 1 is resent to all receivers]

SRM: Stochastic Suppression
[Figure: timeline of data, NACK, repair, and session messages on the star topology with per-hop delay d. Legend: sender, repairer, requestor]
Delay = U[0, D2] * d(S,R), i.e. a uniformly random component scaled by the requester's distance to the source
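
A sketch of the SRM back-off timers from the two suppression slides above: each timer has a deterministic component proportional to the member's estimated distance (to the source for requests, to the requester for repairs) plus a uniformly random component, following the SRM paper's C1/C2 and D1/D2 notation. The constant values here are illustrative:

```python
import random

def request_timer(dist_to_source, C1=2.0, C2=2.0):
    """Delay before multicasting a NACK; cancelled if the same NACK is heard first."""
    return random.uniform(C1, C1 + C2) * dist_to_source

def repair_timer(dist_to_requester, D1=1.0, D2=1.0):
    """Delay before multicasting a repair; cancelled if the repair is heard first."""
    return random.uniform(D1, D1 + D2) * dist_to_requester
```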

SRM (Summary)
NACK/retransmission suppression:
– Delay before sending a NACK or repair
– Delay is based on RTT estimation
– Deterministic + stochastic components
Periodic session messages:
– Provide full reliability
– Allow estimation of the distance matrix among members