LCG Phase-2 Planning
David Foster, IT/CS
14th April 2005
Thanks to DANTE, ASNet and ESnet for material presented at the T0/T1 Networking meeting in Amsterdam on 8th April 2005.

European Tier0/Tier1/Tier2 Connectivity Overview
- A total of 7 NRENs serving Tier1 sites in Europe, plus one in Asia Pacific
- All currently have at least 1Gbps IP connections to CERN
- 10Gbps lambdas available on timescales ranging from July 2005 to autumn 2006
- Number, location and bandwidth requirements of Tier2 sites unclear to many NRENs
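For scale, here is a minimal back-of-the-envelope sketch (Python) of what the 1Gbps and 10Gbps figures above mean in delivered data per day; the utilization factors are illustrative assumptions, not numbers from the slides:

```python
# Rough daily transfer capacity for the link speeds mentioned above.
# The utilization factors are hypothetical; achievable throughput
# depends on protocol tuning, competing traffic and scheduling.

def tb_per_day(link_gbps: float, utilization: float) -> float:
    """Terabytes deliverable per day at a given sustained utilization."""
    seconds_per_day = 86_400
    gigabytes = link_gbps / 8 * utilization * seconds_per_day
    return gigabytes / 1000

for gbps, label in [(1, "1Gbps IP connection"), (10, "10Gbps lambda")]:
    for util in (0.5, 0.8):
        print(f"{label:20s} at {util:.0%}: {tb_per_day(gbps, util):6.1f} TB/day")
```

At 50% sustained utilization a 1Gbps IP connection delivers about 5.4 TB/day, while a 10Gbps lambda delivers roughly 54 TB/day, which is why the lambda timescales below matter for the Tier1 sites.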

GÉANT2 Project Partners

RedIRIS (Spain)
- Connecting the PIC Tier1 site in Barcelona
- Traffic crosses 3 domains prior to reaching GÉANT2: the PIC network, Anella Cientifica (the Catalan regional network) and RedIRIS
- Currently a 1Gbps VPN is supported
- Upgrade planned for the RedIRIS connection to the Catalan network, date TBD
- No request has yet been received from PIC for a 10G lambda; the PIC requirement timeline is unclear
- 7 Tier2 sites are known in Spain; bandwidth requirements of the Tier2 sites unknown
- Tier2 site connectivity varies from GE to STM-4
- Cost sharing TBD with the Spanish ministry

DFN (Germany)
- DFN will connect the Tier1 site at Karlsruhe to CERN via GÉANT2
- Presently 10G is available over GÉANT (Layer 3), providing an LSP Karlsruhe-to-GÉANT-to-CERN
- Testing is already taking place and high-data-rate transfers from Karlsruhe to CERN have been demonstrated
- Tier2 centres are not yet known, so provision is unclear
- Cost sharing: Karlsruhe will pay a subscription to DFN, a proportion of which will be passed to GÉANT2

GARR (Italy)
- GARR will connect the Bologna Tier1 site to CERN via GÉANT2
- A 10Gbps lambda ring provided by GARR, connecting INFN-CNAF (Tier1) and the GÉANT2 PoP in Milan, will be operational by September 2005
- By the end of 2005, multiple lambdas will be available from this site to GÉANT2, allowing as many 10Gbps connections as required
- GARR connects the Bologna Tier1 to other Tier1s via GÉANT2
- 12 Italian Tier2 sites identified, all with dark fibre to the GARR backbone
- 8 Tier2 sites already have a 1Gbps connection; all will have 1Gbps connectivity by September 2005
- GARR will bill INFN for all services provided; details of the cost sharing TBD

UKERNA (UK)
- UKERNA will connect the RAL Tier1 site to CERN via GÉANT2
- 2 x 1Gbps RAL-CERN via UKLight possible now
- 10G lambda RAL-UKLight (switched port)-GÉANT2 by end 2006
- Costs will be addressed by UK national funding (discussions ongoing), with a proportion being channelled to GÉANT2
- Four distributed Tier2 sites: NorthGrid, SouthGrid, ScotGrid, LondonGrid; bandwidth requirements unknown

SURFnet (Netherlands)
- SURFnet will connect the Tier1 site at SARA, Amsterdam
- SURFnet6 will provide a 10G lambda to SARA by July 2005
- Initially the 10G lambda to CERN will be provided by SURFnet, later by GÉANT2 when available
- Tier2 sites in the Netherlands will be connected via 10G lambdas by January 2006
- 10G lightpaths will be provided over NetherLight and/or GÉANT from Dutch Tier2s to non-Dutch Tier1s
- SURFnet will absorb the networking costs of the NL access to CERN via GÉANT2 and all costs inside NL for accessing the Tier1 and Tier2s

RENATER (France)
- RENATER will connect the IN2P3 (Lyon) Tier1 site directly to CERN (not via GÉANT2)
- RENATER will procure dark fibre between Paris, Lyon and CERN
- A 10G lightpath will be provided Lyon-CERN by July 2005
- Tier1-Tier1 traffic TBD
- Traffic to/from the 3 French Tier2 sites will pass over the RENATER network
- Cost sharing TBD

NORDUnet (Nordic Countries)
- NORDUnet will connect the 'distributed' Tier1 site in the Nordic countries
- Connectivity via lambdas can be provided by mid-2006 for all the sites concerned
- Cost sharing TBD

SWITCH (Switzerland)
- The Tier1 site at CERN is connected directly to the Tier0; there are no Tier1 sites connected by SWITCH
- The Tier2 site, CSCS, has 10GE available if necessary
- CSCS will connect directly to CERN (i.e. not via GÉANT2)
- The cost of this connection will be borne by SWITCH

European Tier1 Summary
- High-data-rate transfer tests from Karlsruhe to CERN already underway
- 10G available now (Layer 3) from Bologna to CERN
- Testing of 10G lambdas from Lyon and Amsterdam can commence from July 2005
- Amsterdam (and Taipei) will use the NetherLight link to CERN until GÉANT2 paths are available
- Testing of the Barcelona link at 10G from October 2005
- Nordic distributed facility restricted to 1G until late 2006, when 10G becomes available
- RAL could operate between 2 and 4 x 1GE (subject to scheduling and NetherLight/CERN agreement) until late 2006, when 10G becomes available; interconnection of UKLight to GÉANT2 might make an earlier transition to higher capacities possible
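To make the RAL interim options above concrete, here is a short Python sketch comparing the 2-4 x 1GE aggregates against a 10G lambda; the 100 TB dataset size is a hypothetical figure chosen only for illustration, not from the slides:

```python
# Idealized transfer times for the RAL link options listed above.
# Assumes fully sustained throughput; real transfers achieve less.

DATASET_TB = 100  # hypothetical dataset size, for illustration only

def transfer_days(aggregate_gbps: float) -> float:
    """Days needed to move DATASET_TB terabytes at aggregate_gbps."""
    gigabits = DATASET_TB * 1000 * 8  # TB -> GB -> Gb
    return gigabits / aggregate_gbps / 86_400

for label, gbps in [("2 x 1GE", 2), ("4 x 1GE", 4), ("10G lambda", 10)]:
    print(f"{label:10s}: {transfer_days(gbps):5.2f} days per {DATASET_TB} TB")
```

Even under these ideal assumptions the 10G lambda cuts the transfer time by a factor of 2.5 to 5 over the interim 1GE aggregates, which is the motivation for the late-2006 upgrades.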

Service Challenge (SC) Test: Taipei

Taipei Status
- ASNet* runs one STM-16 IPLC from Taipei to Amsterdam
- One GE local loop from Amsterdam to Geneva (via the NetherLight LightPath service); a second GE is waiting to be turned on (the cross-connect order form has already been signed)
- The IPLC will be doubled on July 1st (contract to be signed on May 27th)
- The ASCC LCG facilities now have multiple GE uplinks to ASNet's core router; these will be replaced by one or two 10GE links during the summer vacation
- ASNet connects (2 x 10GE + n x STM-64) to the domestic backbone, a.k.a. the TANet/TWAREN joint backbone, to reach the Tier2s in Taiwan; every Tier2 in Taiwan has its own 10GE link to the domestic backbone

* ASNet (Academic Services Network, AS#9264) is the network division and also the network name registered with APNIC.
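The circuits on this and the following slides are specified as STM-N SDH levels. For reference, a small Python sketch of the corresponding line rates (an STM-N circuit carries N x 155.52 Mb/s); the mapping to specific links is taken from these slides:

```python
# SDH line rates: an STM-N circuit carries N x 155.52 Mb/s.
STM1_MBPS = 155.52

def stm_rate_gbps(n: int) -> float:
    """Line rate of an STM-N circuit in Gb/s."""
    return n * STM1_MBPS / 1000

links = {
    1: "Taipei-Singapore IPLC",
    4: "Taipei-Tokyo and Taipei-Hong Kong IPLCs",
    16: "Taipei-Amsterdam IPLC",
    64: "domestic TANet/TWAREN backbone circuits",
}
for n, where in links.items():
    print(f"STM-{n:<2} = {stm_rate_gbps(n):5.2f} Gb/s ({where})")
```

So the Taipei-Amsterdam STM-16 is roughly 2.5 Gb/s, and doubling it in July brings the Tier1 to about 5 Gb/s of transoceanic capacity.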

Asia Pacific Link Status
How many Tier2s are there in the Asia Pacific region?
- Taipei - Tokyo:
  - STM-4 IPLC
  - 2 x GE to APAN-JP (for Japanese universities)
  - Will have GE to SINET/SuperSINET (for KEK)
  - The IPLC should be increased before the LCG production run
- Taipei - Singapore:
  - STM-1 IPLC to SingAREN (National Grid Office of Singapore)

Asia Pacific Link Status (continued)
- Taipei - Hong Kong:
  - STM-4 IPLC, to be upgraded to STM-16 on Feb 1st, 2006 (contract already signed on March 30th)
  - GE link to CERNET, GE link to CSTNET (CERNET: China Education and Research Network; CSTNET: China Science and Technology Network)
  - May have a GE link with KOREN (still being worked out with KOREN engineers)

ASNet (Taiwan) Summary
- ASNet connects the Taipei Tier1 site
- Currently Taipei is connected via an STM-16 link (IPLC) to Amsterdam; this will double in July 2005
- Currently traffic is routed over a GE local loop and via NetherLight to CERN
- Quality is assured by MPLS and a PIP service
- A move to a GÉANT2 connection Amsterdam-CERN is planned
- A WDM solution is being considered, no decision as yet
- Tier2 sites are connected in Taiwan (by 10GE to ASNet), Japan and China

[Figure: ESnet High-Speed Physical Connectivity to DOE Facilities and Collaborators, Summer 2005. Map of the ESnet IP core (packet over SONET optical ring and hubs) and the 10 Gb/s Science Data Network (SDN) core, showing 42 end-user sites (Office of Science, NNSA, jointly, laboratory and other sponsored), MAN rings of 10 Gb/s or more, peering points (MAE-E, PAIX-PA, Equinix, Starlight, MAN LAN), Abilene high-speed peering, and international links to CERN (LHCnet, part DOE funded), GEANT (Germany, France, Italy, UK, etc.), SINet (Japan), CA*net4, GLORIAD, Kreonet2, TANet2 (Taiwan/ASCC), SingAREN and others.]

ESnet
- The IP core is primarily a layer 3 infrastructure:
  - However, it supports layer 2 via MPLS
  - Directly connects sites
  - Provides global peering for sites
- The SDN core is primarily a layer 2 infrastructure:
  - Targeted at providing virtual circuit services

[Figure: ESnet Target Architecture: IP Core + Science Data Network + MANs. Map showing the production IP core with existing hubs (New York/AOA, Chicago, Sunnyvale, Washington DC, El Paso) and possible new hubs (Seattle, Atlanta, Albuquerque), a second Science Data Network core with SDN hubs and core loops, metropolitan area rings serving the primary DOE labs, lab-supplied links, and international connections to CERN, GEANT (Europe), Asia-Pacific and Australia.]

[Figure: ESnet Near-Term Planning for FNAL. Diagram of the FNAL site gateway router and site equipment connecting to the ESnet IP core at the Qwest hub (NBC Bldg.) and to the ESnet SDN core via Starlight over fibre shared with IWire, with onward paths toward CERN, ORNL (OC192) and NRL/UltraScienceNet; all circuits are 10 Gb/s.]

[Figure: ESnet Planning for BNL (Long Island MAN Ring). Engineering study for the LI MAN and other connections.]

[Figure: ESnet Goal – 2007/2008. Target topology with the production IP ESnet core (at least 10 Gbps), the ESnet Science Data Network second core (circuit-based transport over National Lambda Rail), metropolitan area rings carrying 10 Gbps enterprise IP traffic, existing and new ESnet and SDN hubs (including SEA, DEN, ELP, ALB, ATL, SDG, SNV, CHI, NYC, DC), high-speed cross-connects with Internet2/Abilene, major DOE Office of Science sites, lab-supplied links, and major international connections to CERN, Europe, Japan, Asia-Pacific and Australia.]