1 eVLBI Developments at Jodrell Bank Observatory. Ralph Spencer, Richard Hughes-Jones, Simon Casey, Paul Burgess, The University of Manchester

2 eVLBI Development at JBO and Manchester: eVLBI correlation tests using actual astronomy data, both pre-recorded and real-time (see talk by Arpad). Network research: Why? How? Results?

3 Why should a radio astronomer be interested in network research? Optical fibres have huge bandwidth capability: eMERLIN, eVLA, ALMA and the SKA will use >>GHz bandwidths, so we need increased bandwidth for VLBI. Fibre networks are (or were) under-utilised – can VLBI use spare capacity? So why study networks? What are the bandwidth limits? How reliable are the links? What is the best protocol? How do we interact with the end hosts? What is happening as technology changes? Can we get more throughput using switched light paths?

4 How? Network tests from Manchester/JBO to elsewhere. High energy physics (LHC data) and VLBI have the same aims for Internet data usage – collaboration! iGRID 2002 (Manchester–Amsterdam–JIVE) showed that >500 Mbps flows are possible. UDP tests on the production network in 2003/4. ESLEA project (2005–): use of UKLight. GÉANT2 launch, 2005. Results follow.

5 EVN-NREN network map: Westerbork, Netherlands (dedicated Gbit link); Onsala, Sweden (Gbit link, Chalmers University of Technology, Gothenburg); Jodrell Bank and Cambridge, UK (MERLIN, DWDM link); Dwingeloo, Netherlands; Medicina, Italy; Torun, Poland (Gbit link).

6 UDP Throughput, Manchester–Dwingeloo (Nov 2003): throughput vs packet spacing. Manchester: 2.0 GHz Xeon; Dwingeloo: 1.2 GHz PIII. Near wire rate, 950 Mbps; tests done at different times. Plots shown: packet loss, CPU kernel load on the sender, CPU kernel load on the receiver. 4th-year project by Adam Mathews and Steve O'Toole.
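A minimal sketch of this kind of paced UDP test – send fixed-size datagrams with a chosen inter-packet spacing and report the offered rate – is shown below. The destination address, port, packet size and spacing are illustrative placeholders, not the actual test configuration; the real measurements used a dedicated test tool rather than a script like this.

```python
# Minimal sketch of a paced UDP throughput test (illustrative only).
# The receiver address/port, packet size and spacing are placeholders.
import socket
import struct
import time

DEST = ("192.0.2.10", 5001)   # hypothetical receiver (TEST-NET address)
PKT_SIZE = 1472               # UDP payload that fills a 1500-byte MTU
SPACING_US = 12               # inter-packet spacing in microseconds
N_PKTS = 100_000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = bytearray(PKT_SIZE)

start = time.perf_counter()
next_send = start
for seq in range(N_PKTS):
    struct.pack_into("!Q", payload, 0, seq)   # sequence number for loss/reorder checks
    sock.sendto(payload, DEST)
    next_send += SPACING_US * 1e-6
    while time.perf_counter() < next_send:    # busy-wait to hold the requested spacing
        pass
elapsed = time.perf_counter() - start

print(f"sent {N_PKTS} packets, offered rate "
      f"{N_PKTS * PKT_SIZE * 8 / elapsed / 1e6:.1f} Mbit/s")
```

Sweeping SPACING_US and recording the rate the receiver actually sees gives the throughput-vs-spacing curves on this slide.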

7 Packet loss distribution: cumulative distribution of packet loss, each bin 12 µs wide, compared with a Poisson model. Long-range effects in the data?
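If losses were a Poisson process, the gaps between lost packets would follow an exponential distribution; comparing the measured cumulative distribution with that expectation is one way to look for long-range effects. The sketch below uses synthetic loss data purely to illustrate the comparison (at gigabit rate a 1500-byte packet slot is about 12 µs, so gaps counted in packets map directly onto 12 µs bins).

```python
# Sketch: compare gaps between lost packets with the exponential distribution
# expected for Poisson losses. The loss record here is synthetic; real data
# would come from the receiver's per-packet logs.
import math
import random

random.seed(1)
lost_seq = sorted(random.sample(range(1_000_000), 30_000))  # synthetic lost sequence numbers
gaps = [b - a for a, b in zip(lost_seq, lost_seq[1:])]      # gap between successive losses
mean_gap = sum(gaps) / len(gaps)

def empirical_cdf(x):
    return sum(g <= x for g in gaps) / len(gaps)

def poisson_cdf(x):
    # exponential CDF with the same mean gap
    return 1.0 - math.exp(-x / mean_gap)

for x in (1, 10, 50, 100, 200):
    print(f"gap <= {x:4d} packets: measured {empirical_cdf(x):.3f}, Poisson {poisson_cdf(x):.3f}")
```

A measured curve that sits well away from the Poisson one, for example an excess of very long or very short gaps, would point to correlated, long-range loss behaviour.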

8 ESLEA – Exploitation of Switched Lightpaths for e-Science Applications: a multi-disciplinary project involving collaboration between many research groups – network scientists, computer scientists, medical scientists, high energy physicists and radio astronomers – using the UKLight network. Work areas: protocol and control-plane development; high-performance computing; eHealth (e.g. radiology); High Energy Physics data transfer (LHC); eVLBI – funds a post-doc (ad out – apply now!).

9 UDP Tests, 26th January 2005 – Simon Casey (PhD project). Between JBO and JIVE in Dwingeloo, using the production network. Period of high packet loss (3%).

10 The GÉANT2 Launch, June 2005

11 e-VLBI at the GÉANT2 Launch, June 2005: network map showing Jodrell Bank (UK), Dwingeloo (DWDM link), Medicina (Italy) and Torun (Poland).

12 e-VLBI UDP Data Streams

13 UDP Performance: 3 flows on GÉANT. Throughput over a 5-hour run, 1500-byte MTU: Jodrell–JIVE (2.0 GHz dual Xeon to 2.4 GHz dual Xeon): 670–840 Mbit/s. Medicina (Bologna)–JIVE (800 MHz PIII to Mk5 (623) 1.2 GHz PIII): 330 Mbit/s, limited by the sending PC. Torun–JIVE (2.4 GHz dual Xeon to Mk5 (575) 1.2 GHz PIII): 245–325 Mbit/s, limited by security policing (>400 Mbit/s throttled to 20 Mbit/s)? Throughput over a 50-minute window shows a periodic variation with a period of ~17 min.
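For context, the theoretical ceiling for UDP payload throughput on gigabit Ethernet with a 1500-byte MTU is just under 960 Mbit/s, so the Jodrell–JIVE figures sit at roughly 70–85% of what the wire allows. A back-of-the-envelope calculation using the standard per-frame Ethernet overheads is sketched below.

```python
# Back-of-the-envelope maximum UDP payload throughput on gigabit Ethernet.
# Per-frame wire overhead: preamble+SFD (8) + Ethernet header (14) + FCS (4)
# + inter-frame gap (12) = 38 bytes; IPv4 + UDP headers take 28 bytes of the MTU.
LINE_RATE_MBPS = 1000.0
IP_UDP_HDR = 20 + 8
WIRE_OVERHEAD = 8 + 14 + 4 + 12

def max_udp_throughput(mtu_bytes):
    payload = mtu_bytes - IP_UDP_HDR
    return LINE_RATE_MBPS * payload / (mtu_bytes + WIRE_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: max UDP payload rate {max_udp_throughput(mtu):.1f} Mbit/s")
# MTU 1500 gives ~957 Mbit/s, consistent with the 'near wire rate' 950 Mbps seen
# earlier; 9000-byte jumbo frames raise the ceiling to ~993 Mbit/s and cut the
# per-packet CPU cost on the end hosts.
```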

14 UDP Performance: 3 flows on GÉANT – packet loss and re-ordering. Each point covers 10 s, ~660k packets. Jodrell (2.0 GHz Xeon): loss 0–12%, significant re-ordering. Medicina (800 MHz PIII): loss ~6%, re-ordering insignificant. Torun (2.4 GHz Xeon): loss 6–12%, re-ordering insignificant.
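Loss and re-ordering are both derived from the per-packet sequence numbers at the receiver: a sequence number that never arrives is a loss, while one that arrives after a higher number has already been seen counts as re-ordered. A minimal sketch of that classification, with a made-up arrival order, is:

```python
# Sketch: classify a received sequence-number stream into lost and re-ordered
# packets. A packet is re-ordered if it arrives after a higher sequence number;
# sequence numbers never seen at all are counted as lost.
def classify(received, n_sent):
    highest = -1
    reordered = 0
    seen = set()
    for seq in received:
        if seq < highest:
            reordered += 1
        highest = max(highest, seq)
        seen.add(seq)
    lost = n_sent - len(seen)
    return lost, reordered

# Made-up arrival order: packet 6 never arrives; 3 and 7 arrive late.
received = [0, 1, 2, 4, 3, 5, 8, 9, 7]
lost, reordered = classify(received, n_sent=10)
print(f"lost {lost}, re-ordered {reordered}")   # -> lost 1, re-ordered 2
```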

15 18-hour flows on UKLight, Jodrell–JIVE, 26 June 2005. Throughput: Jodrell–JIVE (2.4 GHz dual Xeon to 2.4 GHz dual Xeon), 960–980 Mbit/s; traffic routed through SURFnet. Packet loss: only 3 groups with 10–150 lost packets each; no packets lost the rest of the time. Packet re-ordering: none.

16 Conclusions. Maximum data rates depend on the path: – Limited by end hosts? Lack of CPU power in the end host; jumbo packets will help here. – Local limits, e.g. security: work with the network providers to achieve the bandwidth we need. – Networks have the capacity for >500 Mbps flows. – Evidence for network bottlenecks somewhere: more evidence being collected. Packet loss will limit TCP flows – this explains the limits to data rates seen in EVN eVLBI tests; new protocols will help here (see the sketch below). More needs to be done before we can reliably get 512 Mbps eVLBI in the EVN – especially study of the end hosts.
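The effect of packet loss on a standard TCP flow can be put on a rough quantitative footing with the well-known Mathis et al. estimate, throughput ≈ MSS / (RTT · √p). The sketch below applies it with an assumed ~15 ms round-trip time for a UK–Netherlands path (an illustrative value, not a measured one) and omits the order-one constant in the formula.

```python
# Rough TCP throughput ceiling from the Mathis et al. estimate:
#   throughput ~ (MSS / RTT) * (1 / sqrt(p)), constant factor omitted.
# The 15 ms RTT is an assumed, illustrative value for a UK-Netherlands path.
import math

def mathis_mbps(mss_bytes, rtt_s, loss_prob):
    return (mss_bytes * 8 / rtt_s) / math.sqrt(loss_prob) / 1e6

RTT = 0.015
for mss_bytes, label in ((1460, "1500-byte MTU"), (8960, "9000-byte jumbo")):
    for p in (1e-3, 1e-5):
        print(f"{label}, loss {p:.0e}: ~{mathis_mbps(mss_bytes, RTT, p):.0f} Mbit/s")
# Even 0.1% loss holds a standard-MTU TCP flow to a few tens of Mbit/s, far
# below the 512 Mbps eVLBI target, while jumbo frames raise the ceiling by
# roughly the ratio of the segment sizes for the same loss rate.
```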

17 Any Questions?

