T1-NREN
Luca dell’Agnello, CCR, 21 October 2005

The problem
Computing for LHC experiments
– Multi-tier model (MONARC)
– LHC computing based on grid
– Experiment computing models
LHC Computing Grid (LCG)
– Policy Executive Board (PEB)
– Grid Deployment Board (GDB)
Transfer LHC data from CERN to external computing facilities
– Players: T0, T1s, NRENs, DANTE

Task forces & activities
Group of interest on networking set up in 2004 by NRENs
– Some T1s, NRENs, CERN (D. Foster), DANTE
Service Challenge activity (December 2004 → service phase)
– Progressive tests of the data transfer chain in cooperation with the experiments
– Ramp-up to the LHC data-taking phase
Second group (LHCOPN wg) started by the GDB in January 2005
– T1s, CERN, NRENs, DANTE represented (European NRENs, ESnet, CANARIE, ASNet)
– Chair: D. Foster (CERN)
– Meetings: January 2005 (SARA/NIKHEF), 8 April 2005 (SARA/NIKHEF), 19 July 2005 (CERN); next: November 14 (SC05, Seattle)

LHCOPN wg activity
Preparation and control of plans for the implementation of the WAN connectivity required by the LHC experiments' Computing Models
Ensure that individual agreements among T1s provide a coherent infrastructure satisfying the LHC experiments' Computing Models, and that there is a credible solution for the management of end-to-end network services
First priority: plan the networking for the Tier-0 and Tier-1 centres
– Should also cover Tier-2 centres, as appropriate information on requirements becomes available
The group reports regularly to the GDB
Subgroups
– IP Addressing and Routing
– Monitoring
– Operations
– Security

LHC “instrument”
The LHC and its data collection systems
The data processing and storage units at CERN, i.e. the T0
The data processing and storage sites called T1s
The data processing and storage sites called T2s
Associated networking between all T0, T1, and T2 sites
The T0-T1 10 Gbps permanent light paths form the Optical Private Network (OPN) for the LHC instrument: the LHCOPN

Who does what
CERN will
– provide interfaces for the T1s' link terminations at CERN
– host a T1's equipment for T0-T1 link termination at CERN (if requested)
T1s will
– organise physical connectivity from the T1's premises to the T0
– make available network equipment for the termination point of the T1-T0 link at the T1 side (light path termination either at the T1 premises, as in the INFN-CNAF case, or at the NREN PoP)
– be ready at full bandwidth not later than Q
SCs need to test the system (from network to applications) up to a full-capacity production environment

Network architecture (1)
At least one dedicated 10 Gbit/s light path between the T0 and each T1
– Every T0-T1 link should handle only production LHC data
– T1-T1 traffic via the T0 is allowed, BUT T1s are encouraged to provision direct T1-T1 connectivity
– T1-T2 and T0-T2 traffic will normally be handled by the ordinary L3 connectivity provided by the NRENs
Backup through L3 paths across the NRENs is discouraged (potential heavy interference with the general-purpose Internet connectivity of the T0 or the T1s)
[Diagram: Tier0, Tier1s and Tier2s interconnected over L3 backbones, with main and backup connections]

Network architecture (2)
LHC prefixes
– Every T1 and the T0 must allocate publicly routable IP address space (the "LHC prefixes"), aggregated into a single CIDR block
– T1s and T2s exchanging traffic directly with the T0 must provide the T0 with the list of their LHC prefixes (→ next slide)
– LHC prefixes should be dedicated to LHC network traffic
Routing among the T0 and T1 sites will be achieved using eBGP (no static routes!)
– No default route must be used in T1-T0 routing
– Every T1 will announce its own LHC prefixes to the T0
– The T0 will accept from a T1 only its own LHC prefixes, plus the prefixes of any T1 or T2 for which that T1 is willing to provide transit (see the sketch after this slide)
– The T0 will announce its LHC prefixes, and all the LHC prefixes received in BGP, to every peering T1 (inter-T1 traffic through the T0 is allowed but not encouraged)
– A T1 will accept the T0's prefixes plus, if desired, some selected T1s' prefixes
– The T0 and T1s should announce their LHC prefixes to their upstream continental research networks (GÉANT2, Abilene, ESnet) in order to allow connectivity towards the T2s
Every Tier must make sure that any of its own machines within the LHC prefix ranges can reach any essential service (for instance the DNS system)
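To make the T0 inbound routing policy concrete, here is a minimal Python sketch (not from the original slides; the site names, transit table and RFC 5737 example prefixes are hypothetical) of the rule that the T0 accepts from a T1 only that T1's own LHC prefix plus the prefixes it has agreed to provide transit for:

    # Minimal sketch of the T0 inbound BGP filtering policy described above.
    # Prefixes and the transit table are hypothetical illustrations,
    # not the real LHCOPN allocations.
    import ipaddress

    # Each T1's own LHC prefix (one aggregated CIDR block per site).
    T1_OWN_PREFIX = {
        "T1-A": ipaddress.ip_network("192.0.2.0/24"),
        "T1-B": ipaddress.ip_network("198.51.100.0/24"),
    }

    # Prefixes (e.g. of T2s) a given T1 has agreed to provide transit for.
    T1_TRANSIT = {
        "T1-A": [ipaddress.ip_network("203.0.113.0/25")],
        "T1-B": [],
    }

    def t0_accepts(announcing_t1: str, prefix_str: str) -> bool:
        """Return True if the T0 should accept this announcement from this T1."""
        prefix = ipaddress.ip_network(prefix_str)
        allowed = [T1_OWN_PREFIX[announcing_t1]] + T1_TRANSIT[announcing_t1]
        return prefix in allowed

    # The T0 accepts a T1's own prefix and its declared transit prefixes...
    assert t0_accepts("T1-A", "192.0.2.0/24")
    assert t0_accepts("T1-A", "203.0.113.0/25")
    # ...and rejects anything else (another T1's prefix, or a default route).
    assert not t0_accepts("T1-A", "198.51.100.0/24")
    assert not t0_accepts("T1-B", "0.0.0.0/0")

In practice this policy would be expressed as per-peer BGP prefix lists on the T0 border routers; the sketch only captures the accept/reject logic.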

Network architecture: the T2s
T2s usually upload and download data via a particular T1
It is possible to provide transit for a T2 to the T0, if a T1 announces the T2's prefixes to the T0 and the T0 opens all the security barriers for it
– BUT this assumes a "static" allocation of a T2 to a particular T1
It is assumed that T1-T2 and T0-T2 traffic will be handled by the ordinary L3 connectivity provided by the NRENs
The announcement of prefixes associated with Tier-2 sites is deprecated, and any site has the right to ignore such announcements

Security on LHCOPN: the basics
A security contact person (plus a deputy) is needed for each site
– Note that a T1 also has security contact persons for its own NREN, for LCG/EGEE, and possibly at HEPiX level
Incident handling and reporting
– Local site procedures for security incident handling take precedence
– A security incident at an OPN site will be reported to the LHCOPN representatives
– The report will provide a description of the incident with an assessment of the risk posed to other OPN sites
– At a site receiving such an incident report, the nominated OPN Security representative will share this information and will abide by the requirements of the local Security officer

Security on LHCOPN: implementation
It is not possible to rely on firewalls
– Throughput and performance problems
ACL-based network security is acceptable
– It is not a general-access network
– Low number of expected LHC prefixes
– Low number of expected (Grid) applications
General ACL schema (discussion still ongoing; see the sketch after this slide)
– Applied inbound and outbound at the T0 and T1 borders
– Only traffic originating from and directed to LHC prefixes is allowed on the LHCOPN
– IP-based ACLs to control traffic from and to end points
– T1-T1 transit through the T0 is allowed for LHC prefixes
– Extended ACLs should also be used where source/destination port numbers can be associated with data flows
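As an illustration of the ACL schema above (a minimal sketch with hypothetical RFC 5737 prefixes, not the real LHCOPN allocations), the border filter reduces to requiring that both the source and the destination of a packet fall within LHC prefixes:

    # Minimal sketch of the LHCOPN border ACL logic described above.
    # The LHC prefix list is a hypothetical example.
    import ipaddress

    LHC_PREFIXES = [
        ipaddress.ip_network("192.0.2.0/24"),     # e.g. the T0 LHC prefix
        ipaddress.ip_network("198.51.100.0/24"),  # e.g. a T1 LHC prefix
    ]

    def in_lhc_prefixes(addr_str: str) -> bool:
        addr = ipaddress.ip_address(addr_str)
        return any(addr in net for net in LHC_PREFIXES)

    def border_acl_permits(src: str, dst: str) -> bool:
        """Permit only traffic originating from AND directed to LHC prefixes."""
        return in_lhc_prefixes(src) and in_lhc_prefixes(dst)

    # T0 <-> T1 production traffic passes; anything else is dropped.
    assert border_acl_permits("192.0.2.10", "198.51.100.20")
    assert not border_acl_permits("192.0.2.10", "8.8.8.8")  # off-OPN destination

The extended (port-aware) ACLs mentioned in the slide would add source/destination port checks on top of this address test.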

LHCOPN operation
Issues
– Circuits provided and managed by NRENs, GÉANT, ...
– IP termination points at the T0/T1s
Need for some sort of coordination and monitoring of the LHCOPN (an LHCOPN NCC?)
– Model still under discussion
– Proactive IP-level monitoring
– Coordination of multi-circuit problems
– Escalation to the circuit provider/NREN
– Tiers contact the LHCOPN NCC

Italian LHCOPN architecture
[Diagram: the T1 at CNAF connects through GARR routers (RT1.BO1, RT.RM1, RT.MI1) and the GÉANT2 PoPs in Italy and Switzerland to the T0 at CERN; eBGP peering between AS513 and AS137; 10GE-LAN and STM-64 IP access; 10GE-LAN lightpath access (GFP-F over STM-64); 10G leased lambdas; T2 sites attach via the IP service]

INFN Tier1
LAN connectivity based on 10GE technology, with capacity for 10GE link aggregation
Data flows will terminate on the disk buffer system (possibly CASTOR, but other SRM systems are also under evaluation)
One network prefix for the LHCOPN
Security model based on L3 filters (ACLs) within the L3 equipment
Monitoring via SNMP (presently v2); a polling sketch follows below
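As a rough illustration of such SNMP polling (a sketch only: the router name, community string and interface index are hypothetical, and the use of the net-snmp snmpget tool is an assumption, not a detail from the slides), an interface byte counter could be read like this:

    # Minimal sketch: polling an interface traffic counter via SNMP v2c,
    # shelling out to the net-snmp 'snmpget' tool. Host, community and
    # interface index are hypothetical placeholders.
    import subprocess

    HOST = "router.example.org"  # hypothetical border router
    COMMUNITY = "public"         # hypothetical read-only community
    IF_INDEX = 1                 # hypothetical interface index

    def poll_in_octets() -> str:
        # IF-MIB::ifHCInOctets is the standard 64-bit input byte counter.
        oid = f"IF-MIB::ifHCInOctets.{IF_INDEX}"
        out = subprocess.run(
            ["snmpget", "-v2c", "-c", COMMUNITY, HOST, oid],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    if __name__ == "__main__":
        print(poll_in_octets())

Sampling such counters at fixed intervals and differencing gives the per-link throughput that a proactive LHCOPN monitor would track.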

T1 INFN-CNAF WAN connectivity
[Diagram: the CNAF border device (Cisco 7600) connects to the GARR Juniper router via a 10 Gbps LHCOPN link and a 1 Gbps default link towards GÉANT; a /20 prefix and a /16 prefix are routed over the LHCOPN and default links]

Glossary
LHC network traffic: the data and control traffic that flows between the T0, the T1s, and the T2s.
LHC prefixes: IP address space allocated by the T0 and the T1s and assigned to the machines connected to the LHC-OPN.
Light path: (1) a point-to-point circuit based on WDM technology; (2) a circuit-switched channel between two end points with deterministic behaviour, based on TDM technology; (3) concatenations of (1) and (2).
NREN: usually National Research and Education Network; used here as the collective name for a network that serves the research community, the education community, or both.