Wide Area Networking Performance Challenges. Olivier Martin, CERN. UK DTI visit, 30 June 2004.


1 Wide Area Networking Performance Challenges. Olivier Martin, CERN. UK DTI visit, 30 June 2004.

2 Presentation Outline
- CERN's connectivity to the Internet
- DataTAG project overview
- Wide Area Networking challenges
- Where do we want to be by the start of the LHC in 2007?
- Where are we now?

3 CERN External Networking: Main Internet Connections
[Network diagram: CERN's external links via the CIXP (CERN Internet Exchange Point), including SWITCH (the Swiss national research network), GEANT (general-purpose European A&R connectivity), USLIC (general-purpose North American A&R connectivity, combined with DataTAG), and the DataTAG, NetherLight and ATRIUM/VTHD (FR) network-research links, at speeds between 1 Gb/s and 10 Gb/s.]

4 Project Partners (http://www.datatag.org). Final DataTAG Review, 24 March 2004.

5 DataTAG (TransAtlantic Grid) Mission
- EU-US Grid network research:
  - high-performance transport protocols
  - inter-domain QoS
  - advance bandwidth reservation
- EU-US Grid interoperability
- Sister project to EU DataGRID

6 LHC Data Grid Hierarchy
[Diagram: the tiered LHC computing model. The experiment's online system handles ~PByte/s of raw data and sends ~100-400 MBytes/s to the Tier 0+1 centre at CERN (700k SI95, ~1 PB disk, tape robot). Tier 1 centres (FNAL: 200k SI95, 600 TB; IN2P3; INFN; RAL) connect at 2.5/10 Gbps, Tier 2 centres at ~2.5 Gbps, and institutes (~0.25 TIPS) at 0.1-1 Gbps, down to Tier 3/4 workstations with a physics data cache. Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels. CERN/outside resource ratio ~1:2; Tier0 : (sum of Tier1) : (sum of Tier2) ~1:1:1.]

7 Deploying the LHC Grid (les.robertson@cern.ch)
[Diagram: the LHC Computing Centre at CERN (Tier 0) linked to national Tier 1 centres (Germany, USA, UK, France, Italy, Japan, Taipei?), Tier 2 centres (labs and universities), and Tier 3 physics-department desktops, forming grids for a physics study group and for a regional group.]

8 Main Networking Challenges
- Fulfil the as-yet-unproven assertion that the network can be "nearly" transparent to the Grid
- Deploy suitable wide area network infrastructure (50-100 Gb/s)
- Deploy suitable local area network infrastructure (matching or exceeding that of the WAN)
- Seamless interconnection of LAN and WAN infrastructures (firewall?)
- End-to-end issues: transport protocols, PCs (Itanium, Xeon), 10GigE NICs (Intel, S2io)
- Where are we today?
  - memory to memory: 6.5 Gb/s
  - memory to disk: 1.2 MB (Windows 2003 Server/NewiSys)
  - disk to disk: 400 MB (Linux), 600 MB (Windows)

9 Main TCP Issues
- Does not scale to some environments:
  - high speed, high latency
  - noisy links
- Unfair behaviour with respect to:
  - round-trip time (RTT)
  - frame size (MSS)
  - access bandwidth
- Widespread use of multiple streams to compensate for inherent TCP/IP limitations (e.g. GridFTP, bbFTP): a bandage rather than a cure
- New TCP/IP proposals aim to restore performance in single-stream environments:
  - not clear if or when they will have a real impact
  - in the meantime there is an absolute requirement for backbones with zero packet loss and no packet re-ordering
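The RTT unfairness and the multiple-streams workaround above can be illustrated numerically. The sketch below uses the standard Mathis et al. approximation for steady-state TCP throughput, throughput ~ (MSS/RTT)*(C/sqrt(p)) with C = sqrt(3/2), which is not stated on the slide; the loss rate, RTT values and stream count are illustrative assumptions.

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis approximation of steady-state TCP throughput:
    throughput ~ (MSS / RTT) * (C / sqrt(p)), with C = sqrt(3/2)."""
    c = math.sqrt(3.0 / 2.0)
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

# Same loss rate, different RTTs: the short-RTT flow gets 10x the bandwidth,
# since throughput is inversely proportional to RTT.
p = 1e-6                                         # assumed loss rate
short = tcp_throughput_bps(1500, 0.010, p)       # 10 ms RTT (continental)
long_ = tcp_throughput_bps(1500, 0.100, p)       # 100 ms RTT (transatlantic)

# Parallel streams (as in GridFTP / bbFTP) multiply the aggregate,
# compensating for the long RTT -- a bandage rather than a cure.
streams = 10
aggregate = streams * long_                      # comparable to the short-RTT flow

print(round(short / long_, 3))
```

The model explains the slide's "unfair with respect to RTT and MSS" point directly: both enter the formula linearly, so doubling the RTT halves the achievable rate at a given loss probability.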

10 TCP Dynamics (10 Gbps, 100 ms RTT, 1500-byte packets)
- Window size (W) = bandwidth * round-trip time:
  - Wbits = 10 Gbps * 100 ms = 1 Gb
  - Wpackets = 1 Gb / (8 * 1500) = 83,333 packets
- Standard Additive Increase Multiplicative Decrease (AIMD) mechanism:
  - W = W/2 (halve the congestion window on a loss event)
  - W = W + 1 (increase the congestion window by one packet every RTT)
- Time to recover from W/2 to W (congestion avoidance) at 1 packet per RTT:
  - RTT * Wpackets/2 = 1.157 hours
  - in practice, 1 packet per 2 RTTs because of delayed ACKs, i.e. 2.31 hours
- Packets per second: Wpackets / RTT = 833,333 packets/s
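The slide's arithmetic can be reproduced in a few lines. A minimal sketch (the numbers come from the slide; this is back-of-the-envelope arithmetic, not a TCP implementation):

```python
bandwidth_bps = 10e9        # 10 Gb/s link
rtt_s = 0.100               # 100 ms round-trip time
packet_bytes = 1500

# Window needed to fill the pipe: W = bandwidth * RTT.
window_bits = bandwidth_bps * rtt_s                 # 1e9 bits = 1 Gb
window_packets = window_bits / (8 * packet_bytes)   # ~83,333 packets

# AIMD: after a loss the window halves; congestion avoidance then adds
# one packet per RTT, so regrowing W/2 packets takes W/2 RTTs.
recovery_h = (rtt_s * window_packets / 2) / 3600    # ~1.157 hours

# With delayed ACKs, growth is ~1 packet per 2 RTTs: twice as long.
recovery_delayed_h = 2 * recovery_h                 # ~2.31 hours

# Packet rate needed to sustain the full window: W / RTT.
packets_per_s = window_packets / rtt_s              # ~833,333 packets/s

print(round(window_packets), round(recovery_h, 3), round(packets_per_s))
# -> 83333 1.157 833333
```

The hour-plus recovery time after a single loss is exactly why standard AIMD does not scale to high-speed, high-latency paths, motivating the new TCP proposals mentioned on the previous slide.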

11 10G DataTAG Testbed Extension to Telecom World 2003 and Abilene/CENIC
On September 15, 2003, the DataTAG project became the first transatlantic testbed to offer direct 10GigE access, using Juniper's layer-2 VPN / 10GigE emulation.

12 Internet2 Land Speed Record History (IPv4 & IPv6), 2000-2003
[Figure: impact of a single multi-Gb/s flow on the Abilene backbone.]

13 Internet2 Land Speed Record History (IPv4 & IPv6), 2000-2004

