
1 TCP performance
John Hicks, TransPAC HPCC Engineer, Indiana University
APAN 19 Meeting – Bangkok, Thailand, 27 January 2005

2 Standard TCP
Low performance on fast, long-distance paths.
AIMD: add a = 1 packet to cwnd per RTT; decrease cwnd by factor b = 0.5 on congestion.
[Figure: Reno throughput (Mbps) versus time (s) on a path with RTT ≈ 70 ms]
Information courtesy of Les Cottrell from the SLAC group at Stanford
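A minimal sketch of the Reno-style AIMD behaviour described above; the function names and structure are illustrative, not taken from any particular TCP implementation:

```python
# Minimal sketch of Reno-style AIMD; cwnd is measured in packets.
# These handlers are illustrative, not part of a real TCP stack.

A = 1.0    # additive increase: packets added to cwnd per RTT
B = 0.5    # multiplicative decrease factor on congestion

def on_rtt_without_loss(cwnd):
    # Congestion avoidance: grow linearly, one packet per round trip.
    return cwnd + A

def on_congestion(cwnd):
    # Loss event: halve the window.
    return cwnd * B
```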

3 Standard TCP problems
It has long been recognized that TCP does not provide good performance for applications on networks with a high bandwidth-delay product. Contributing factors include:
Slow linear increase of one packet per RTT.
Overly drastic multiplicative decrease.
Packet-level throughput oscillation caused by packet loss.
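To make the slow-recovery problem concrete, here is a back-of-the-envelope sketch. The path figures (10 Gbit/s, 100 ms RTT, 1500-byte packets) are assumed example numbers, not taken from the slides:

```python
# Rough illustration of how long Reno needs to refill the pipe after
# a single loss on a high bandwidth-delay-product path (assumed numbers).

link_bps = 10e9          # assumed link rate: 10 Gbit/s
rtt = 0.100              # assumed round-trip time: 100 ms
pkt_bytes = 1500         # assumed packet size

# Bandwidth-delay product in packets: the cwnd needed to keep the pipe full.
bdp_packets = link_bps * rtt / (pkt_bytes * 8)

# After one loss, Reno halves cwnd and then adds one packet per RTT,
# so it needs roughly bdp_packets / 2 round trips to recover.
recovery_rtts = bdp_packets / 2
recovery_seconds = recovery_rtts * rtt

print(f"BDP ≈ {bdp_packets:.0f} packets")
print(f"Recovery after one loss ≈ {recovery_rtts:.0f} RTTs ≈ {recovery_seconds/60:.0f} minutes")
```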

4 Standard TCP improvements
One approach to improving TCP performance is to set the TCP window size to the bandwidth-delay product of the network. This usually requires a network expert and is hard to achieve in practice. Another approach is to stripe a transfer over several standard TCP connections; this approach plateaus as the number of sockets increases.
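A minimal sketch of the manual-tuning approach: compute the bandwidth-delay product and request socket buffers of that size. The path capacity and RTT below are assumptions for illustration, and the operating system may clamp the requested buffer sizes to its configured maximums:

```python
import socket

# Size the socket buffers to the path's bandwidth-delay product (assumed path figures).
link_bps = 1e9           # assumed path capacity: 1 Gbit/s
rtt = 0.070              # assumed round-trip time: 70 ms

bdp_bytes = int(link_bps * rtt / 8)   # bytes in flight needed to fill the pipe

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request send/receive buffers at least as large as the BDP.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)

print(f"Requested ≈ {bdp_bytes / 1e6:.1f} MB socket buffers")
```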

5 Improving data rates
There are basically three categories of projects to improve data transfer rates:
Rate limited. Examples include SABUL and RBUDP (Robert Grossman) and Tsunami (Steve Wallace).
Congestion window limited. Examples include FAST (Steven Low), Scalable (Tom Kelly), Highspeed (Sally Floyd), and BIC TCP (Injong Rhee).
Hardware changes. Examples include XCP.
One proving ground for these activities is the Bandwidth Challenge at the Supercomputing Conference.

6 One of the SC04 BWC winners
All Roads Lead Through Chicago to Pittsburgh
Performance Award: High Speed TeraByte Transfers for Physics – California Institute of Technology, Stanford Linear Accelerator Center, and Fermi National Laboratory.
Over 100 gigabits per second of aggregate memory-to-memory bandwidth, utilizing the greatest number of networks.
"Caltech, SLAC, Fermilab, CERN, Florida and Partners in the UK, Brazil and Korea Set 101 Gigabit Per Second Mark During the SuperComputing 2004 Bandwidth Challenge"
This group used FAST TCP as the transfer mechanism.

7 FAST TCP
Based on TCP Vegas.
Uses both queuing delay and packet loss as congestion measures.
Developed at Caltech by Steven Low and collaborators.
Code available at:
Information courtesy of Les Cottrell from the SLAC group at Stanford
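A sketch of the delay-based idea behind FAST, following the published window-update rule; the parameter values below (alpha, gamma) are illustrative assumptions, not the tuned values used at SC04:

```python
# Sketch of a FAST-style window update driven by queuing delay.
# alpha and gamma values are assumptions chosen for illustration.

def fast_update(cwnd, base_rtt, current_rtt, alpha=200, gamma=0.5):
    """Return the new congestion window (packets).

    base_rtt    -- smallest RTT observed (propagation delay estimate), seconds
    current_rtt -- most recent RTT sample, seconds
    alpha       -- target number of packets queued in the network
    gamma       -- smoothing factor in (0, 1]
    """
    # Queuing delay (current_rtt - base_rtt) pulls the window toward the
    # point where roughly `alpha` packets are buffered along the path.
    target = (base_rtt / current_rtt) * cwnd + alpha
    return min(2 * cwnd, (1 - gamma) * cwnd + gamma * target)

# Example: the window grows while queuing delay is small and stabilizes as it builds.
print(fast_update(cwnd=1000, base_rtt=0.070, current_rtt=0.072))
```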

8 FAST TCP at SC04

9 Scalable TCP
Uses exponential increase everywhere (in both slow start and congestion avoidance).
Multiplicative decrease factor b = 0.125.
Introduced by Tom Kelly of Cambridge.
Information courtesy of Les Cottrell from the SLAC group at Stanford
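A sketch of the Scalable TCP update rules; the decrease factor b = 0.125 is from the slide, while the per-ACK increase constant a = 0.01 is the commonly cited value and should be treated as an assumption here:

```python
# Sketch of Scalable TCP: multiplicative (exponential) increase, gentle decrease.

A = 0.01    # assumed per-ACK increase fraction (~1% window growth per RTT)
B = 0.125   # multiplicative decrease factor, as stated on the slide

def on_ack(cwnd):
    # Each ACK grows the window by a fixed fraction of a packet, so over one
    # RTT the window grows multiplicatively rather than by one packet.
    return cwnd + A

def on_loss(cwnd):
    # Back off by only 12.5% instead of Reno's 50%.
    return cwnd * (1 - B)
```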

10 Highspeed TCP
Behaves like Reno for small values of cwnd.
Above a chosen value of cwnd (default 38) a more aggressive response function is used.
Uses a table to indicate by how much to increase cwnd when an ACK is received.
Available with Web100.
Introduced by Sally Floyd.
Information courtesy of Les Cottrell from the SLAC group at Stanford
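A sketch of the table-driven idea: look up increase and decrease parameters by current window size. The table entries below are illustrative placeholders in the spirit of the RFC 3649 tables, not the exact published values; only the 38-packet Reno threshold comes from the slide:

```python
import bisect

# Sketch of a HighSpeed-TCP-style table lookup. Rows are (cwnd threshold,
# a = packets added per RTT, b = decrease factor). Values beyond the first
# row are illustrative placeholders, not the exact RFC 3649 numbers.
HS_TABLE = [
    (38,     1, 0.50),   # at or below 38 packets, behave exactly like Reno
    (118,    2, 0.44),
    (1058,   5, 0.33),
    (10661, 16, 0.21),
    (83333, 70, 0.10),
]

def lookup(cwnd):
    """Return (a, b) for the current window size."""
    thresholds = [row[0] for row in HS_TABLE]
    i = max(0, bisect.bisect_right(thresholds, cwnd) - 1)
    return HS_TABLE[i][1], HS_TABLE[i][2]

def on_rtt(cwnd):          # additive increase of a(cwnd) packets per RTT
    a, _ = lookup(cwnd)
    return cwnd + a

def on_loss(cwnd):         # multiplicative decrease by b(cwnd)
    _, b = lookup(cwnd)
    return cwnd * (1 - b)
```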

11 Binary Increase Control TCP (BIC TCP)
Combines:
An additive increase used for large cwnd.
A binary increase used for small cwnd.
Developed by Injong Rhee at NC State University.
Information courtesy of Les Cottrell from the SLAC group at Stanford
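A simplified sketch of the BIC growth idea: binary-search toward the window size at which the last loss occurred, capped by a fixed step so growth is additive when the window is far from that point. The parameter values and helper names are illustrative assumptions, and the probing phase above the old loss point is simplified:

```python
# Simplified sketch of BIC TCP window growth (illustrative parameters).

S_MAX = 32     # assumed cap on growth per RTT (additive-increase regime)
BETA = 0.125   # assumed multiplicative decrease on loss

w_max = 0.0    # window size at which the last loss occurred

def on_loss(cwnd):
    global w_max
    w_max = cwnd
    return cwnd * (1 - BETA)

def on_rtt(cwnd):
    if cwnd < w_max:
        # Binary search: jump halfway toward w_max, but never more than
        # S_MAX per RTT (that cap is the additive-increase part of BIC).
        step = min((w_max - cwnd) / 2, S_MAX)
    else:
        # Past the old loss point: probe upward for new capacity (simplified).
        step = min(cwnd - w_max + 1, S_MAX)
    return cwnd + step
```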

12 For More Information
Supercomputing 2004, Bandwidth Challenge
FAST TCP
Scalable TCP
Highspeed TCP
Binary Increase Control (BIC) TCP

13 Thank you John Hicks Indiana University

