9th November 2005 | David Foster, CERN IT-CS
LHCC Review: WAN Status
David Foster, Networks and Communications Systems Group Leader

Slide 2: Contents
- WAN Overview, Parties, Stakeholders
- Status of DANTE/GÉANT
- Transatlantic Status/Plans

Slide 3: WAN Overview

Connectivity:
- T0-T1: has the "real-time" issue of needing to get the data out of CERN as fast as possible. Dedicated infrastructure.
- T1-T1: firm models for T1-T1 data flows have yet to emerge. Can share the T0-T1 infrastructure, but some additional dedicated infrastructure is expected.
- T1-T2: some thoughts that requirements could be high, especially if a 200 TB Tier-2 cache refresh is done frequently (about 2 full days at 10 Gbit/s). A mixture of general-purpose shared IP networks and dedicated infrastructure.

The WAN infrastructure is growing "organically", with a central architecture for T0-T1 being defined by the T0-T1 working group:
- No central funding.
- Many independent activities.
- Creating the opportunity for a large connectivity capacity.
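The two-day figure for a 200 TB cache refresh can be sanity-checked with a quick back-of-the-envelope calculation. This is a sketch in Python; the 200 TB and 10 Gbit/s figures come from the slide, and decimal (SI) units are assumed throughout:

```python
def transfer_time_days(data_tb: float, link_gbps: float) -> float:
    """Time to move data_tb terabytes over a link_gbps link at full utilisation."""
    bits = data_tb * 1e12 * 8            # terabytes -> bits (decimal units)
    seconds = bits / (link_gbps * 1e9)   # divide by link rate in bit/s
    return seconds / 86400               # seconds -> days

# A 200 TB Tier-2 cache refresh over a dedicated 10 Gbit/s link:
print(round(transfer_time_days(200, 10), 2))  # ~1.85, i.e. about 2 full days
```

In practice a link is never driven at 100% utilisation, so the real figure would be somewhat longer, which is consistent with the slide's "2 full days" estimate.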

Slide 4: T0-T1

Designed around the concept of an "optical private network": dedicated bandwidth (10 Gbit/s to each T1) on top of which an IP network is built.
- The end-to-end path involves CERN, GÉANT and NRENs in some cases, and dedicated links in others.
- Should provide very high, reliable performance.
- Provided by different entities in different ways:
  - Individual initiatives (e.g. the Renater dark fibre to CERN)
  - Funded programmes (e.g. DOE and USLHCNet)
  - International projects (e.g. GÉANT)

Slide 5: T0-T1 Networking Group Progress

We have had a number of meetings and agreed on an overall approach, with many open questions.
- Some results have been published, and we have a wiki site for gathering operational information.

As we move more and more towards providing the production infrastructure needed for the service challenges in 2006, agreements on operational details are being created.
- Meeting in Seattle, 14th November.

Slides 6-10: (figure slides; no text captured in the transcript)

Slide 11: CERN-US 2005 (Lambda Triangle)

Slide 12: GLIF Map

Slide 13: (figure slide; no text captured in the transcript)

Slide 14: Tier-0 to Tier-1 Connectivity

Tier-1    | Location           | NRENs            | Status of dedicated link
ASCC      | Taipei, Taiwan     | ASnet, SURFnet   | 2.5 Gbit/s to NetherLight, 1 Gbit/s from SURFnet
BNL       | Upton, NY, USA     | ESnet, LHCnet    | 10 Gbit/s to NY PoP; ESnet providing the last mile to BNL
CNAF      | Bologna, Italy     | GÉANT2, GARR     | 10 Gbit/s across GÉANT2, Dec 2005
FNAL      | Batavia, IL, USA   | ESnet, LHCnet    | 10 Gbit/s in production
IN2P3     | Lyon, France       | Renater          | 10 Gbit/s via private dark fibre, Dec 2005
GridKa    | Karlsruhe, Germany | GÉANT2, DFN      | 10 Gbit/s across GÉANT2, Dec 2005
SARA      | Amsterdam, NL      | GÉANT2, SURFnet  | 2x10 Gbit/s, to be replaced by GÉANT2?
NorduGrid | Scandinavia        | GÉANT2, NORDUnet | 2006
PIC       | Barcelona, Spain   | GÉANT2, RedIRIS  | 2006
RAL       | Didcot, UK         | GÉANT2, UKERNA   | 10 Gbit/s to NetherLight via UKLight; to be replaced by SJ5 and GN2 in 2006
TRIUMF    | Vancouver, Canada  | CAnet, LHCnet    | 10 Gbit/s to NetherLight, transit to CERN via SURFnet; will carry mixed traffic initially

Slide 15: Issues, Conclusions etc.

- The GÉANT2 cost model for the additional 10G "pipes" is not yet clear. It will be a full-cost-recovery, marginal-cost model, but the precise figures are unknown; ballpark figures are reasonable.
- A number of circuits become available at the end of 2005 / early 2006 as GN2 enters production.
- Total capacity exceeds the stated requirement of 1.6 GBytes/s, but it needs to, in order to enable "catch-up" and provide headroom.
- The role of the GLIF community in providing additional connectivity (e.g. T1-T2) is not clear, but it is a very interesting resource.
- There are no major T0-T1 networking technical issues, but the models for T1-T2 continue to evolve.
- A very large aggregate capacity will be available for LHC; the challenge will be to really utilise the networks as they come on-line.
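The headroom point can be illustrated numerically. This is a sketch: the 1.6 GByte/s requirement is taken from the slide, while the count of eleven dedicated 10 Gbit/s Tier-1 links is an assumption read off the connectivity table (slide 14), some of which were not yet in production at the time:

```python
# Rough aggregate-capacity vs. requirement check.
requirement_gbps = 1.6 * 8   # 1.6 GBytes/s expressed in Gbit/s
links = 11                   # assumed: one dedicated 10 Gbit/s link per Tier-1 in the table
aggregate_gbps = links * 10  # nominal aggregate T0-T1 capacity

headroom = aggregate_gbps / requirement_gbps
print(requirement_gbps, aggregate_gbps, round(headroom, 1))  # 12.8 110 8.6
```

A nominal headroom factor of roughly 8x over the steady-state requirement is what leaves room for "catch-up" after outages and for peak loads.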