Presentation on theme: "Investigating the Network Performance of Remote Real-Time Computing Farms for ATLAS Trigger DAQ"— Presentation transcript:

Slide 1: Investigating the Network Performance of Remote Real-Time Computing Farms for ATLAS Trigger DAQ
Richard Hughes-Jones, University of Manchester
IEEE Real Time 2005, Stockholm, 4-10 June
In collaboration with: Bryan Caron (University of Alberta), Krzysztof Korcyl (IFJ PAN Krakow), Catalin Meirosu (Politehnica University of Bucuresti & CERN), Jakob Langgard Nielsen (Niels Bohr Institute)

Slide 2: Introduction
Poster: "On the potential use of Remote Computing Farms in the ATLAS TDAQ System"

Slide 3: ATLAS Computing Model
[Diagram: the tiered ATLAS computing model]
- CERN centre (Tier 0): Trigger & Event Builder; Event Filter ~7.5 MSI2k; ~PByte/sec from the detector, 10 GByte/sec through the event builder, 320 MByte/sec to the physics data cache (PBytes of disk, tape robot); ~5 PByte/year, no simulation
- Tier 1 regional centres (UK at RAL, plus US, French and Dutch centres): 622 Mbit/s - 1 Gbit/s links; ~75 MB/s per T1 for ATLAS; ~2 PByte/year per T1
- Tier 2 centres, ~200 kSI2k each (e.g. a Northern Tier of Sheffield, Manchester, Liverpool and Lancaster, ~0.25 TIPS): 100 - 1000 MB/s links; ~200 TByte/year per T2; remote institutes do filtering, calibration and monitoring
- Desktop PC (2004) = ~1 kSpecInt2k
- Requirements: high-bandwidth network, many processors, experts at the remote sites

Slide 4: Remote Computing Concepts
[Diagram: the ATLAS detectors and Level 1 Trigger feed ROBs in the experimental area; the Level 2 Trigger (L2PU) and Event Builders (SFI) feed local Event Processing Farms (PF) at CERN B513 over the Data Collection and Back End Networks; SFOs write to mass storage; remote Event Processing Farms in Copenhagen, Edmonton, Krakow and Manchester are reached over GÉANT lightpaths via a switch]

Slide 5: ATLAS Remote Farms - Network Connectivity

Slide 6: ATLAS Application Protocol
- Event request: the EFD requests an event from the SFI; the SFI replies with the event (~2 Mbytes)
- The event is processed
- Return of the computation: the EF asks the SFO for buffer space; the SFO sends OK; the EF transfers the results of the computation
- tcpmon is an instrumented TCP request-response program that emulates the Event Filter's EFD-to-SFI communication and histograms the request-response time
[Timing diagram: EFD sends "Request event", SFI sends the event data; after processing, EFD sends "Request buffer", SFO sends OK, EFD sends the processed event]
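The request-response cycle above can be sketched as a toy tcpmon-style emulator. This is an illustration only: the 4-byte length framing, message sizes and function names are assumptions, not the real EFD/SFI protocol or the actual tcpmon tool.

```python
import socket
import struct
import threading
import time

# Toy tcpmon-style request-response emulator.  Framing, sizes and
# names are illustrative assumptions, not the real EFD/SFI protocol.
REQUEST_SIZE = 64          # small event request
EVENT_SIZE = 1_000_000     # ~1 Mbyte event returned by the toy "SFI"

def recv_exact(sock, n):
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def sfi_server(listener, n_events):
    """Toy SFI: answer each fixed-size request with one 'event'."""
    conn, _ = listener.accept()
    with conn:
        for _ in range(n_events):
            recv_exact(conn, REQUEST_SIZE)
            conn.sendall(struct.pack("!I", EVENT_SIZE) + b"\0" * EVENT_SIZE)
    listener.close()

def efd_client(addr, n_events):
    """Toy EFD: request events, timing each request-response cycle."""
    times = []
    with socket.create_connection(addr) as sock:
        for _ in range(n_events):
            t0 = time.perf_counter()
            sock.sendall(b"R" * REQUEST_SIZE)
            (size,) = struct.unpack("!I", recv_exact(sock, 4))
            recv_exact(sock, size)          # the ~2 Mbyte event in real life
            times.append(time.perf_counter() - t0)
    return times

if __name__ == "__main__":
    listener = socket.create_server(("127.0.0.1", 0))
    addr = listener.getsockname()
    server = threading.Thread(target=sfi_server, args=(listener, 5))
    server.start()
    rtts = efd_client(addr, 5)
    server.join()
    print(f"{len(rtts)} cycles, mean {sum(rtts) / len(rtts) * 1e3:.2f} ms")
```

Over a real WAN path the per-cycle time is dominated by the round trips TCP needs to deliver the response, which is exactly what the following slides measure.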

Slide 7: Networks and End Hosts

Slide 8: End Hosts & NICs, CERN-NAT-Manchester
- UDP packets are used to characterise the host, NIC and network: request-response latency, throughput, packet loss, packet re-ordering
- Test hosts: SuperMicro P4DP8 motherboard, dual Xeon 2.2 GHz CPUs, 400 MHz system bus, 64-bit 66 MHz PCI / 133 MHz PCI-X bus
- The network can sustain 1 Gbit/s of UDP traffic
- The average server can lose smaller packets: packet loss is caused by lack of processing power in the PC receiving the traffic
- Out-of-order packets are due to the WAN routers
- Lightpaths look like extended LANs and show no re-ordering
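The UDP characterisation described above can be sketched as follows. This is an illustrative stand-in for UDPmon, not the tool itself; the packet layout (a 4-byte sequence number plus padding) and sizes are assumptions.

```python
import socket
import struct

# Sketch of a UDPmon-style probe: send numbered UDP packets, then
# derive loss and re-ordering from the sequence numbers received.
def send_probe(sock, addr, n_packets, size=1400):
    for seq in range(n_packets):
        sock.sendto(struct.pack("!I", seq).ljust(size, b"\0"), addr)

def analyse(seqs, n_sent):
    """Loss and re-order counts from received sequence numbers."""
    lost = n_sent - len(set(seqs))
    reordered = sum(1 for a, b in zip(seqs, seqs[1:]) if b < a)
    return lost, reordered

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    rx.settimeout(0.5)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_probe(tx, rx.getsockname(), 100)
    seqs = []
    try:
        while len(seqs) < 100:
            data, _ = rx.recvfrom(2048)
            seqs.append(struct.unpack("!I", data[:4])[0])
    except socket.timeout:
        pass
    print("received", len(seqs), "(lost, reordered):", analyse(seqs, 100))
    rx.close()
    tx.close()
```

Running this on loopback mainly exercises the receiving host, which is the point of the slide: loss at these rates comes from the receiver's lack of processing power, not the wire.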

Slide 9: Using Web100 TCP Stack Instrumentation to Analyse the Application Protocol - tcpmon

Slide 10: tcpmon - TCP Activity, Manchester-CERN Request-Response
- Round-trip time 20 ms; 64-byte request (green), 1 Mbyte response (blue)
- TCP is in slow start: the first event takes 19 rtt, or ~380 ms
- The TCP congestion window gets reset on each request, a TCP stack implementation detail that reduces Cwnd after inactivity
- Even after 10 s, each response still takes 13 rtt, or ~260 ms
- Achievable transfer throughput: 120 Mbit/s

Slide 11: tcpmon - TCP Activity, Manchester-CERN Request-Response, TCP Stack Tuned
- Round-trip time 20 ms; 64-byte request (green), 1 Mbyte response (blue)
- TCP starts in slow start: the first event takes 19 rtt, or ~380 ms
- The TCP congestion window now grows nicely; a response takes 2 rtt after ~1.5 s
- Rate ~10/s (with a 50 ms wait)
- Achievable transfer throughput grows to 800 Mbit/s
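The "stack tuned" behaviour depends on the application requesting large enough socket buffers before connecting. A minimal sketch, assuming a Linux-like kernel that may clamp the request (on Linux, to the net.core.rmem_max / net.core.wmem_max sysctls):

```python
import socket

# Sketch of the socket-buffer tuning behind the "TCP stack tuned" runs:
# ask for buffers of at least the bandwidth-delay product, then check
# what the kernel actually granted.
BDP_BYTES = int(1e9 / 8 * 0.020)   # 1 Gbit/s x 20 ms rtt = 2.5 Mbytes

def tuned_tcp_socket(bufsize=BDP_BYTES):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set before connect()/listen() so the TCP window scale is
    # negotiated to match the large buffer.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
    return sock

if __name__ == "__main__":
    s = tuned_tcp_socket()
    granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    print(f"requested {BDP_BYTES} bytes, kernel granted {granted}")
    s.close()
```

Checking the granted size matters: a silently clamped buffer reproduces exactly the small-window behaviour seen in the untuned runs.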

Slide 12: tcpmon - TCP Activity, Alberta-CERN Request-Response, TCP Stack Tuned
- Round-trip time 150 ms; 64-byte request (green), 1 Mbyte response (blue)
- TCP starts in slow start: the first event takes 11 rtt, or ~1.67 s
- The TCP congestion window is in slow start until ~1.8 s, then in congestion avoidance
- A response takes 2 rtt after ~2.5 s
- Rate 2.2/s (with a 50 ms wait)
- Achievable transfer throughput grows slowly from 250 to 800 Mbit/s

Slide 13: SC2004 Disk-to-Disk with bbftp
- bbftp is a file-transfer program that uses TCP/IP
- UKLight path: London-Chicago-London; PCs: Supermicro + 3Ware RAID0
- MTU 1500 bytes; socket size 22 Mbytes; rtt 177 ms; SACK off
- Moved a 2 Gbyte file; Web100 plots of the transfers:
  - Standard TCP: average 825 Mbit/s (bbcp: 670 Mbit/s)
  - Scalable TCP: average 875 Mbit/s (bbcp: 701 Mbit/s, with ~4.5 s of overhead)
- Disk-TCP-disk at 1 Gbit/s is here!
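As a quick cross-check (not stated on the slide), the 22 Mbyte socket size matches the bandwidth-delay product of a 1 Gbit/s path with a 177 ms round trip:

```python
# Cross-check: the 22 Mbyte socket size used at SC2004 is the
# bandwidth-delay product of a 1 Gbit/s path with a 177 ms rtt.
def bdp_bytes(rate_bit_per_s, rtt_s):
    """Bytes that must be in flight to keep the pipe full."""
    return rate_bit_per_s / 8 * rtt_s

bdp = bdp_bytes(1e9, 0.177)
print(f"BDP = {bdp / 1e6:.1f} Mbytes")   # ~22.1 Mbytes
```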

Slide 14: Time Series of Request-Response Latency
- Alberta-CERN: round-trip time 150 ms; 1 Mbyte of data returned; stable for ~150 s at 300 ms, then falls to 160 ms with ~80 μs variation
- Manchester-CERN: round-trip time 20 ms; 1 Mbyte of data returned; stable for ~18 s at ~42.5 ms, then alternating points at 29 and 42.5 ms

Slide 15: Using the Trigger DAQ Application

Slide 16: Time Series of T/DAQ Event Rate, Manchester-CERN
- Round-trip time 20 ms; 1 Mbyte of data returned
- Configurations: 3 nodes (one Gigabit Ethernet node + two 100 Mbit nodes), 2 nodes (two 100 Mbit nodes), 1 node (one 100 Mbit node)
- Event-rate estimate: take the tcpmon transfer time of ~42.5 ms and add the time to return the data, giving 95 ms per event, so the expected rate is 10.5/s
- Observed: ~6/s for the gigabit node
- Reason: the TCP buffers could not be set large enough in the T/DAQ application
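The rate estimate on this slide, spelled out (the 42.5 ms and 95 ms figures are from the slide; the split of the remaining ~52.5 ms is an assumption for illustration):

```python
# Expected-rate estimate: the tcpmon transfer of the 1 Mbyte event
# (~42.5 ms) plus the assumed time to return the processed data
# gives a ~95 ms cycle per event.
transfer_s = 0.0425          # request + 1 Mbyte response (tcpmon)
return_s = 0.0525            # assumed remainder so the cycle totals 95 ms
cycle_s = transfer_s + return_s
expected_rate = 1.0 / cycle_s
print(f"expected event rate ~ {expected_rate:.1f}/s")   # ~10.5/s
```

The observed ~6/s corresponds to a longer effective cycle, consistent with the application's undersized TCP buffers adding round trips per event.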

Slide 17: Tcpdump of the Trigger DAQ Application

Slide 18: tcpdump of the T/DAQ Dataflow at the SFI (1), CERN-Manchester
- 1.0 Mbyte event; the remote EFD requests an event from the SFI
- The incoming event request is followed by an ACK
- The SFI sends the event as N 1448-byte packets, limited by the TCP receive buffer; as TCP ACKs arrive, more data is sent
- Time: 115 ms (~4 events/s)

Slide 19: tcpdump of TCP Slow Start at the SFI (2), CERN-Manchester
- 1.0 Mbyte event; the remote EFD requests an event from the SFI (the first event request)
- The SFI sends the event as N 1448-byte packets, limited by TCP slow start; as ACKs arrive, more data is sent
- Time: 320 ms

Slide 20: tcpdump of the T/DAQ Dataflow for SFI & SFO, CERN-Manchester
- Another test run; 1.0 Mbyte events
- The remote EFD requests events from the SFI and sends the computation back to the SFO
- Link setup and TCP slow start are visible; the links are closed by the application

Slide 21: Some First Conclusions
- The TCP protocol dynamics strongly influence the behaviour of the application.
- Care is required with the application design, e.g. the use of timeouts.
- With the correct TCP buffer sizes:
  - It is not throughput but the round-trip nature of the application protocol that determines performance.
  - Requesting the 1-2 Mbytes of data takes 1 or 2 round trips.
  - TCP slow start (the opening of Cwnd) considerably lengthens the time for the first block of data.
  - Implementation "improvements" (Cwnd reduction after inactivity) kill performance!
- When the TCP buffer sizes are too small (the default):
  - The amount of data sent on each rtt is limited.
  - Data is sent and arrives in bursts.
  - It takes many round trips to send 1 or 2 Mbytes.
- The end hosts themselves:
  - CPU power is required for the TCP/IP stack as well as the application.
  - Packets can be lost in the IP stack due to lack of processing power.
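The "buffers too small" point can be made with a back-of-envelope count of round trips; the 64 Kbyte default buffer is an assumed typical value, not a figure from the slides:

```python
import math

# With the send window capped by a small socket buffer, each rtt moves
# at most one buffer's worth of data, so a 2 Mbyte event needs many
# round trips.  The 64 Kbyte default is an assumed typical value.
def rtts_needed(event_bytes, buffer_bytes):
    return math.ceil(event_bytes / buffer_bytes)

small = rtts_needed(2_000_000, 64 * 1024)     # default-sized buffer
large = rtts_needed(2_000_000, 2_500_000)     # buffer >= the path BDP
print(f"64 Kbyte buffer: {small} rtts; BDP-sized buffer: {large} rtt")
```

At a 20 ms rtt those extra round trips cost more than half a second per event, dwarfing the raw transmission time of the data.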

Slide 22: Summary
- We are investigating the technical feasibility of remote real-time computing for ATLAS.
- We have exercised multiple 1 Gbit/s connections between CERN and universities located in Canada, Denmark, Poland and the UK. The network providers are very helpful and interested in our experiments.
- We have developed a set of tests for characterising the network connections:
  - Network behaviour is generally good, e.g. little packet loss observed.
  - Backbones tend to be over-provisioned; however, access links and campus LANs need care.
  - Properly configured end nodes are essential for getting good results with real applications.
- Collaboration between the experts from the application and network teams is progressing well and is required to achieve performance.
- Although the application is ATLAS-specific, the information presented on the network interactions is applicable to other areas, including:
  - Remote iSCSI
  - Remote database accesses
  - Real-time Grid computing, e.g. real-time interactive medical image processing

Slide 23: Thanks to all who helped, including:
- National research networks: Canarie, Dante, DARENET, Netera, PSNC and UKERNA
- "ATLAS remote farms": J. Beck Hansen, R. Moore, R. Soluk, G. Fairey, T. Bold, A. Waananen, S. Wheeler, C. Bee
- "ATLAS online and dataflow software": S. Kolos, S. Gadomski, A. Negri, A. Kazarov, M. Dobson, M. Caprini, P. Conde, C. Haeberli, M. Wiesmann, E. Pasqualucci, A. Radu

Slide 24: More Information - Some URLs
- Real-Time Remote Farm site: http://csr.phys.ualberta.ca/real-time
- UKLight web site: http://www.uklight.ac.uk
- DataTAG project web site: http://www.datatag.org/
- UDPmon / TCPmon kit + write-up: http://www.hep.man.ac.uk/~rich/ (Software & Tools)
- Motherboard and NIC tests: http://www.hep.man.ac.uk/~rich/net/nic/GigEth_tests_Boston.ppt and http://datatag.web.cern.ch/datatag/pfldnet2003/ ; "Performance of 1 and 10 Gigabit Ethernet Cards with Server Quality Motherboards", FGCS special issue, 2004, http://www.hep.man.ac.uk/~rich/ (Publications)
- TCP tuning information: http://www.ncne.nlanr.net/documentation/faq/performance.html and http://www.psc.edu/networking/perf_tune.html
- TCP stack comparisons: "Evaluation of Advanced TCP Stacks on Fast Long-Distance Production Networks", Journal of Grid Computing, 2004, http://www.hep.man.ac.uk/~rich/ (Publications)
- PFLDnet: http://www.ens-lyon.fr/LIP/RESO/pfldnet2005/
- Dante PERT: http://www.geant2.net/server/show/nav.00d00h002

Slide 25: Any Questions?

Slide 26: Backup Slides

Slide 27: End Hosts & NICs, CERN-Manchester
- UDP packets are used to characterise the host and NIC: request-response latency, throughput, packet loss, packet re-ordering
- Test hosts: SuperMicro P4DP8 motherboard, dual Xeon 2.2 GHz CPUs, 400 MHz system bus, 66 MHz 64-bit PCI bus

Slide 28: TCP (Reno) - Details
- The time for TCP to recover its throughput after one lost packet is given by τ = C · rtt² / (2 · MSS), where C is the link capacity.
- For an rtt of ~200 ms: about 2 min.
- Typical rtts: UK 6 ms, Europe 20 ms, USA 150 ms.
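A sketch of where such a recovery time comes from, assuming standard Reno behaviour (halve the congestion window on a loss, then grow it by one MSS per round trip in congestion avoidance); C is the link capacity and MSS the maximum segment size:

```latex
% Reno halves the window after a single loss, then adds one MSS per
% round trip, so recovery takes W/2 round trips, where W is the
% window (in segments) that fills the pipe:
\begin{align*}
  W    &= \frac{C \cdot \mathrm{rtt}}{\mathrm{MSS}}
          \quad\text{(segments in flight at full rate)} \\
  \tau &= \frac{W}{2}\cdot \mathrm{rtt}
        = \frac{C \cdot \mathrm{rtt}^{2}}{2\,\mathrm{MSS}}
\end{align*}
```

The rtt² dependence is why the USA-scale round trips on the previous slides dominate: doubling the rtt quadruples the recovery time.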

