Maximizing End-to-End Network Performance Thomas Hacker University of Michigan October 26, 2001

Introduction
Applications experience network performance from an end-customer perspective
Providing end-to-end performance has two aspects
–Bandwidth Reservation
–Performance Tuning
We have been working to improve actual end-to-end throughput using Performance Tuning
This work allows applications to fully exploit reserved bandwidth

Improve Network Performance
Poor network performance arises from a subtle interaction between many different components at each layer of the OSI network stack:
–Physical
–Data Link
–Network
–Transport
–Application

TCP Bandwidth Limits – Mathis Equation
Based on characteristics from the physical layer up to the transport layer
Gives hard limits on TCP bandwidth and on the maximum tolerable packet loss
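The equation itself appeared as an image in the original slides; in its standard published form (Mathis, Semke, Mahdavi, and Ott, 1997), with segment size MSS, round-trip time RTT, packet loss probability p, and a constant C close to 1, the bound is:

\[ BW \;\le\; \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}} \]

Solving for p gives the largest loss rate that can still sustain a target bandwidth, which is the calculation on the next slide.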

Packet Loss and MSS
If the minimum link bandwidth between two hosts is OC-12 (622 Mbps) and the average round trip time is 20 msec, the maximum packet loss rate that still permits 66% of the link speed (411 Mbps) is approximately 0.002%, which represents only 2 packets lost out of every 100,000 packets.
If the MSS is increased from 1500 bytes to 9000 bytes (jumbo frames), the limit on TCP bandwidth rises by a factor of 6.
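A quick sanity check of these numbers (a sketch, not the author's original calculation: it assumes the simple form of the Mathis bound with C = 1, and the 411 Mb/s figure appears to assume an MSS near the 4418 bytes used in the later example rather than 1500 bytes):

```python
# Sanity-check the Mathis bound numbers from the slide above.
# Assumes C = 1; the slide does not state which constant it used.
from math import sqrt

def mathis_bw_bps(mss_bytes, rtt_s, p, c=1.0):
    """Upper bound on single-stream TCP bandwidth, in bits per second."""
    return (mss_bytes * 8 / rtt_s) * (c / sqrt(p))

rtt = 0.020       # 20 msec round trip time
p = 2 / 100_000   # 2 losses per 100,000 packets = 0.002%

print(mathis_bw_bps(4418, rtt, p) / 1e6)   # ~395 Mb/s, near the slide's 411 Mb/s
print(mathis_bw_bps(9000, rtt, p) /
      mathis_bw_bps(1500, rtt, p))         # 6.0: the bound scales linearly with MSS
```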

The Results

Web100 Collaboration

Parallel TCP Connections…a clue
SOURCE: Harimath Sivakumar, Stuart Bailey, Robert L. Grossman. “PSockets: The Case for Application-level Network Striping for Data Intensive Applications using High Speed Wide Area Networks,” SC2000: High-Performance Network and Computing Conference, Dallas, TX, 11/00

Why Does This Work?
Assumption is that the network gives best-effort throughput for each connection
But end-to-end performance is still poor, even after tuning the host, network, and application
Parallel sockets are being used in GridFTP, Netscape, Gnutella, Atlas, Storage Resource Broker, etc.

Packet Loss
Bolot* found that random losses are not always due to congestion:
–local system configuration (txqueuelen in Linux)
–bad (noisy) cables
Packet losses occur in bursts
TCP throttles the transmission rate on ALL packet losses, regardless of the root cause
Selective Acknowledgement (SACK) helps, but only so much
* Jean-Chrysostome Bolot. “Characterizing End-to-End Packet Delay and Loss in the Internet.” Journal of High Speed Networks, 2(3), 1993.

Expression for Parallel Socket Bandwidth
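The expression was shown as an image in the original slides; judging from the Mathis bound above and the linear scaling in the example that follows, it presumably has the form (a reconstruction, not necessarily the author's exact notation):

\[ BW_{agg} \;\le\; \frac{MSS \cdot C}{RTT} \sum_{i=1}^{n} \frac{1}{\sqrt{p_i}} \]

When all n connections see the same loss rate p, this is simply n times the single-stream bound.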

Example
MSS = 4418, RTT = 70 msec, p = 1/10000 for all connections

Number of Connections    Aggregate Bandwidth
1                        50 Mb/sec
2                        100 Mb/sec
3                        150 Mb/sec
4                        200 Mb/sec
5                        250 Mb/sec
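The table follows directly from the reconstructed expression above; a minimal sketch, again assuming C = 1:

```python
# Reproduce the example table: MSS = 4418 bytes, RTT = 70 msec, p = 1/10000.
from math import sqrt

mss, rtt, p = 4418, 0.070, 1 / 10_000
single_bps = (mss * 8 / rtt) / sqrt(p)   # ~50.5 Mb/s per connection

for n in range(1, 6):
    print(n, "connection(s):", round(n * single_bps / 1e6), "Mb/sec")
# Prints 50, 101, 151, 202, 252 Mb/sec, matching the table to within rounding.
```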

Measurements
To validate the theoretical model, minute transmissions were performed from U-M to NASA Ames in San Jose, CA
–Bottleneck was OC-12, MTU=
–2 runs with MSS=4366, 1 to 20 sockets
–2 runs with MSS=2948, 1 to 20 sockets
–2 runs with MSS=1448, 1 to 20 sockets
Iperf used for the transfers, Web100 used to collect TCP observations on the sender side

Actual: MSS 1448 Bytes

Actual: MSS 2948 Bytes

Actual: MSS 4366 Bytes

Observations
The knee in the curve is the point at which aggregate throughput fills the delay × bandwidth product of the pipe between sender and receiver
On a long link (trans-Atlantic), the pipe can hold a lot of data, since the delay (RTT) is so large
Parallel sockets should only work if there are no router drops; TCP will try to fill the pipe on a single stream
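A rough estimate of the pipe size for the measured path (an assumption-laden sketch: the OC-12 bottleneck is from the Measurements slide, and the RTT is taken as the 70 msec used in the earlier example, since the measured value is not stated here):

```python
# Estimate the delay*bandwidth product (the "pipe") for the measured path.
link_bps = 622e6   # OC-12 bottleneck bandwidth
rtt_s = 0.070      # assumed round trip time of 70 msec

bdp_megabytes = link_bps * rtt_s / 8 / 1e6
print(bdp_megabytes)   # ~5.4 MB in flight fills the pipe
# The knee appears once the aggregate of all streams holds this much data in flight.
```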

Sunnyvale – Denver Abilene Link: Initial Tests; Yearly Statistics

Abilene Weather Map

Other stuff
Internet2 measurements on Cleveland Abilene node
Interesting results from ns-2 simulation – is it just a simulation artifact?
Working on a loss model for Abilene that differentiates between router drops and random drops

Conclusion
High-performance network throughput is possible with a combination of host, network, and application tuning, along with parallel TCP connections
Parallel TCP sockets mitigate the negative effects of packet loss in the random-loss regime
The effect of parallel TCP sockets is similar to using a larger MSS
Using parallel sockets is aggressive, but as fair as using a large MSS