Autotuning in Web100 John W. Heffner August 1, 2002 Boulder, CO.

2 A Senior Thesis
Wrote my senior honors thesis on autotuning; have since integrated the code into the Web100 kernel.
Currently in CVS, not yet released.
This talk is mainly based on that work.
My thesis can be found (in PS or PDF) at

3 The Problem (in a nutshell)
Network bandwidth is increasing exponentially, but TCP is not keeping up.
One reason: TCP queues require space proportional to bandwidth, yet buffers are statically sized.
–Large static sizes can be used, but that is not a general solution.

4 Goal
Understand TCP queuing.
Implement network-based queue sizes.
No arbitrary limits.

5 TCP Window
A window is the amount of data TCP keeps in flight on a connection.
TCP sends one window of data per round-trip time.
Assuming the RTT is constant, throughput is proportional to window size.
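The "one window per RTT" relationship can be illustrated with a short sketch. The function name and the example figures are my own, not from the talk:

```python
def throughput_bps(window_bytes, rtt_seconds):
    """TCP sends at most one window of data per round trip, so
    achievable throughput (bits/s) is roughly window * 8 / RTT."""
    return window_bytes * 8 / rtt_seconds

# A 64 KB window (the classic unscaled maximum) over a 100 ms path
# yields only about 5.24 Mbit/s, regardless of the link speed:
print(throughput_bps(64 * 1024, 0.100))
```

This is why a fixed window that was generous on a LAN becomes the bottleneck on a long fat network.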

6 The BDP
If the full path bandwidth is used, window = bandwidth * RTT. This is known as the Bandwidth-Delay Product (BDP).
Path bandwidths follow a variant of Moore's Law, increasing exponentially with time.
Round-trip times are bounded below by the speed of light.
Therefore, BDPs and window sizes increase exponentially with time.
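A quick worked example of the BDP formula (the numbers are illustrative, not from the talk):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-Delay Product: the window size needed to keep the
    path full, converted from bits to bytes."""
    return bandwidth_bps * rtt_seconds / 8

# A 1 Gbit/s path with a 70 ms RTT needs roughly an 8.75 MB window,
# far beyond typical static socket-buffer defaults:
print(bdp_bytes(1e9, 0.070))
```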

7 TCP Window Limits
Window size is bounded by:
–The sending application
–The congestion window
–The announced receive window (the receiver problem)
–The retransmit queue buffer size (the sender problem)
Improper resource allocation may cause the last two to limit the window unnecessarily.

8 The Receiver Problem
The receive window is primarily a flow control device.
Standard sockets-based TCPs have a fixed-size per-connection receive buffer, and use the receive window to ensure this buffer does not overflow.
When the receive buffer is too small, throughput is limited.

9 Example: rwin too small

10 Large Receive Buffer Why not just set the receive buffer very large? Flow control will break!

11 Re-think the Receive Buffer
Do we need strict per-connection memory limits? Not really.
–Note: the implications for possible DoS attacks are not fully explored.
Announce a window based on observed network properties, not memory limits.
No need to worry about space for protocol and queuing overhead.

12 Measuring the Network from the Receiver
Use the DRS algorithm [Wu Feng, Mike Fisk]:
–Measure the window as the bytes received in one RTT; use twice this value in the announcement calculation.
–Bound the RTT by measuring the time taken to receive one window.
Additionally, measure the RTT using timestamps.
–This may track RTT variations better.
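A toy sketch of the receiver-side DRS idea described above, counting the bytes that arrive within one RTT and doubling that as the window estimate. Class and method names are hypothetical, and timestamps are passed in explicitly rather than read from a kernel clock; this is not the Web100 implementation:

```python
class DRSEstimator:
    """Simplified Dynamic Right-Sizing sketch: bytes received in one
    RTT approximate the sender's window; announce twice that so a
    still-growing congestion window is never capped by the receiver."""

    def __init__(self, rtt_estimate):
        self.rtt = rtt_estimate        # current RTT estimate (seconds)
        self.interval_start = None     # start of the current measurement interval
        self.bytes_this_interval = 0
        self.rcv_space = 0             # current window estimate (bytes)

    def on_segment(self, now, nbytes):
        """Account for an arriving segment; returns the window estimate.
        (The real algorithm also bounds the RTT by the time taken to
        receive one full window, which is omitted here.)"""
        if self.interval_start is None:
            self.interval_start = now
        self.bytes_this_interval += nbytes
        if now - self.interval_start >= self.rtt:
            # One RTT's worth of arrivals is one window; double it.
            self.rcv_space = max(self.rcv_space, 2 * self.bytes_this_interval)
            self.interval_start = now
            self.bytes_this_interval = 0
        return self.rcv_space
```

For example, three 4000-byte segments arriving over one 100 ms RTT produce a 24000-byte estimate (2 × 12000).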

13 Announcing the Receive Window
New variables:
–rcv_space: twice the most recently measured window (DRS)
–rcv_alloc: the amount of unread in-order data queued
–ofo_alloc: the amount of out-of-order data queued
rwin = rcv_space – rcv_alloc + ofo_alloc
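The announcement formula above, as a one-line sketch with illustrative numbers (the function name is my own):

```python
def announced_rwin(rcv_space, rcv_alloc, ofo_alloc):
    """Announce the measured window estimate, minus in-order data the
    application has not yet read, plus out-of-order data already queued
    (it occupies sequence space that has already been received)."""
    return rcv_space - rcv_alloc + ofo_alloc

# e.g. a 200 KB estimate with 16 KB unread and 8 KB out-of-order queued:
print(announced_rwin(200_000, 16_000, 8_000))  # 192000
```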

14 Behavior Trace

15 Receiver Test Results

16 The Sender Problem
TCP must keep at least one window of data in the retransmit queue.
Why not just use a large send buffer?
Bulk-transfer applications will fill any send buffer, potentially wasting a large amount of memory per connection.

17 Autotuning '98
Semke et al. at PSC did the original autotuning work in NetBSD, published in SIGCOMM '98.
It automatically tunes sndbuf to at least 2 * cwnd.
Implemented in Linux 2.4, though Linux still imposes a (small) per-connection limit.
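The core of the '98 rule can be sketched in a few lines. This is a grow-only illustration of "sndbuf at least 2 × cwnd" with made-up numbers, not the NetBSD or Linux code:

```python
def autotune_sndbuf(sndbuf, cwnd):
    """Keep the send buffer at least twice the congestion window, so
    buffer space never prevents cwnd from growing."""
    return max(sndbuf, 2 * cwnd)

# Starting from a 16 KB default, the buffer tracks the largest cwnd
# seen and never shrinks:
buf = 16_384
for cwnd in (10_000, 40_000, 30_000):
    buf = autotune_sndbuf(buf, cwnd)
print(buf)  # 80000
```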

18 A Different Way
Do we need a per-connection send buffer limit? Yes.
Does the retransmit queue have to be in the send buffer? No.
So: separate the retransmit queue, and do not limit it.
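A toy accounting model of that separation: only data the application has written but TCP has not yet sent counts against the (small, fixed) send buffer; once transmitted, data moves to an unlimited retransmit queue until acknowledged, so the queue grows naturally to the congestion window. All names and the structure are illustrative, not the Web100 kernel code:

```python
class SeparatedSender:
    """Sketch: a limited send buffer for unsent data, plus an
    unlimited retransmit queue for sent-but-unacked data."""

    def __init__(self, sndbuf_limit):
        self.sndbuf_limit = sndbuf_limit
        self.send_buffer = 0   # bytes written but not yet transmitted
        self.rtx_queue = 0     # bytes transmitted but not yet acked (no limit)

    def app_write(self, nbytes):
        """Accept application data up to the buffer limit; the
        application would block on the remainder."""
        accepted = min(nbytes, self.sndbuf_limit - self.send_buffer)
        self.send_buffer += accepted
        return accepted

    def transmit(self, nbytes):
        """Move data from the send buffer onto the wire; it is retained
        in the retransmit queue until acknowledged."""
        sent = min(nbytes, self.send_buffer)
        self.send_buffer -= sent
        self.rtx_queue += sent
        return sent

    def ack(self, nbytes):
        """Free retransmit-queue data covered by an acknowledgment."""
        self.rtx_queue -= min(nbytes, self.rtx_queue)
```

With a 10 KB limit, a 15 KB write accepts only 10 KB, yet after transmission those 10 KB sit in the retransmit queue and the send buffer is empty again.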

19 Benefits
–No need to worry about queuing overhead space.
–Saves some memory.
–A natural, simple implementation.

20 Sender Test Results

21 Future Work
Understand and defend against possible attacks.
Demonstrate or improve robustness.
–DRS fails when router queue sizes vary due to other flows.