
1 Nuclear Physics Network Requirements Workshop
Washington, DC
Eli Dart, Network Engineer
ESnet Network Engineering Group
May 6, 2008
Energy Sciences Network
Lawrence Berkeley National Laboratory
Networking for the Future of Science

2 Overview
– Logistics
– Network Requirements: sources, workshop context
– Case Study Example: Large Hadron Collider
– Today's Workshop: structure and goals

3 Logistics
– Mid-morning break, lunch, afternoon break
– Self-organization for dinner
– Agenda on workshop web page: http://workshops.es.net/2008/np-net-req/
– Round-table introductions

4 Network Requirements
Requirements are primary drivers for ESnet – science focused
Sources of requirements:
– Office of Science (SC) Program Managers
– Direct gathering through interaction with science users of the network
  Examples of recent case studies:
  – Climate Modeling
  – Large Hadron Collider (LHC)
  – Spallation Neutron Source at ORNL
– Observation of the network
– Other sources (e.g. Laboratory CIOs)

5 Program Office Network Requirements Workshops
Two workshops per year – one workshop per program office every 3 years
Workshop goals:
– Accurately characterize current and future network requirements for the Program Office science portfolio
– Collect network requirements from scientists and the Program Office
Workshop structure:
– Modeled after the 2002 High Performance Network Planning Workshop conducted by the DOE Office of Science
– Elicit information from managers, scientists, and network users regarding usage patterns, science process, instruments and facilities – codify in "Case Studies"
– Synthesize network requirements from the Case Studies

6 Large Hadron Collider at CERN

7 LHC Requirements – Instruments and Facilities
Large Hadron Collider at CERN
– Networking requirements of two experiments have been characterized – CMS and ATLAS
– Petabytes of data per year to be distributed
LHC networking and data volume requirements are unique to date
– First in a series of DOE science projects with requirements of unprecedented scale
– Driving ESnet's near-term bandwidth and architecture requirements
– These requirements are shared by other very-large-scale projects that are coming on line soon (e.g. ITER)
Tiered data distribution model
– Tier0 center at CERN processes raw data into event data
– Tier1 centers receive event data from CERN
  - FNAL is the CMS Tier1 center for the US
  - BNL is the ATLAS Tier1 center for the US
  - CERN to US Tier1 data rates: 10 Gbps in 2007, 30-40 Gbps by 2010/11
– Tier2 and Tier3 sites receive data from Tier1 centers
  - Tier2 and Tier3 sites are end-user analysis facilities
  - Analysis results are sent back to Tier1 and Tier0 centers
  - Tier2 and Tier3 sites are largely universities in the US and Europe
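As a rough sanity check on the scale involved, the short sketch below converts a sustained link rate into the data volume it can move in a year; the 50% utilization figure is an illustrative assumption, not a number from the slides.

```python
# Rough arithmetic: data volume moved per year by a sustained link rate.
def petabytes_per_year(link_gbps: float, utilization: float = 1.0) -> float:
    """Petabytes moved in one year by a link running at the given utilization."""
    seconds_per_year = 365 * 24 * 3600
    bits = link_gbps * 1e9 * utilization * seconds_per_year
    return bits / 8 / 1e15  # bits -> bytes -> petabytes

if __name__ == "__main__":
    for rate in (1, 10, 40):  # Gbps
        print(f"{rate:>3} Gbps sustained -> {petabytes_per_year(rate):.0f} PB/year")
    # Illustrative assumption: 50% average utilization on a 10 Gbps circuit
    print(f" 10 Gbps at 50%    -> {petabytes_per_year(10, 0.5):.0f} PB/year")
```

A single fully utilized 10 Gbps circuit moves on the order of 40 PB per year, which is why petabyte-per-year distribution to multiple Tier1 and Tier2 sites drives the 10-40 Gbps figures above.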

8 LHC Requirements – Process of Science
The strictly tiered data distribution model is only part of the picture
– Some Tier2 scientists will require data not available from their local Tier1 center
– This will generate additional traffic outside the strict tiered data distribution tree
– CMS Tier2 sites will fetch data from all Tier1 centers in the general case
Network reliability is critical for the LHC
– Data rates are so large that buffering capacity is limited
– If an outage lasts more than a few hours, the analysis could fall permanently behind
– Analysis capability is already maximized – little extra headroom
CMS/ATLAS require DOE federated trust for credentials and federation with the LCG
Service guarantees will play a key role
– Traffic isolation for unfriendly data transport protocols
– Bandwidth guarantees for deadline scheduling
Several unknowns will require ESnet to be nimble and flexible
– Tier1 to Tier1, Tier2 to Tier1, and Tier2 to Tier0 data rates could add significant additional requirements for international bandwidth
– Bandwidth will need to be added once requirements are clarified
– Drives architectural requirements for scalability and modularity
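A minimal sketch of why long outages are unrecoverable when there is little headroom: data keeps arriving during an outage, and the backlog can only be drained with whatever spare capacity exists. All rates below are illustrative assumptions.

```python
# Why long outages are hard to recover from when headroom is small.
# All rates are illustrative assumptions, not figures from the slides.
def hours_to_drain(outage_hours: float, ingest_gbps: float,
                   spare_gbps: float) -> float:
    """Hours needed to clear the backlog accumulated during an outage,
    using only the spare capacity available for catch-up."""
    backlog_gbits = ingest_gbps * outage_hours * 3600  # data queued during the outage
    if spare_gbps <= 0:
        return float("inf")  # no headroom: the backlog never drains
    return backlog_gbits / spare_gbps / 3600

if __name__ == "__main__":
    # 6-hour outage on a 10 Gbps feed, 0.5 Gbps of spare capacity for catch-up
    print(f"{hours_to_drain(6, 10, 0.5):.0f} hours to catch up")  # ~120 hours
```

With only 0.5 Gbps of spare capacity, a 6-hour outage on a 10 Gbps feed takes roughly five days to clear; with no spare capacity it is never cleared, which is the "permanently behind" scenario above.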

9 LHC Ongoing Requirements Gathering Process
ESnet has been an active participant in LHC network planning and operation
– Active participant in the LHC network operations working group since its creation
– Jointly organized the US CMS Tier2 networking requirements workshop with Internet2
– Participated in the US ATLAS Tier2 networking requirements workshop
– Participated in US Tier3 networking workshops

10 LHC Requirements Identified To Date
10 Gbps "light paths" from FNAL and BNL to CERN
– CERN / USLHCnet will provide 10 Gbps circuits to Starlight, to 32 AoA, NYC (MAN LAN), and between Starlight and NYC
– 10 Gbps each in the near term, additional lambdas over time (3-4 lambdas each by 2010)
BNL must communicate with TRIUMF in Vancouver
– This is an example of Tier1 to Tier1 traffic
– 1 Gbps in the near term
– Circuit is currently up and running
Additional bandwidth requirements between US Tier1s and European Tier2s
– Served by the USLHCnet circuit between New York and Amsterdam
Reliability
– 99.95%+ uptime (a small number of hours of downtime per year)
– Secondary backup paths
– Tertiary backup paths – virtual circuits through the ESnet, Internet2, and GEANT production networks and possibly GLIF (Global Lambda Integrated Facility) for transatlantic links
Tier2 site connectivity
– 1 to 10 Gbps required
– Many large Tier2 sites require direct connections to the Tier1 sites – this drives bandwidth and Virtual Circuit deployment (e.g. UCSD)
Ability to add bandwidth as additional requirements are clarified
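To make "a small number of hours per year" concrete, the short sketch below converts an availability target into allowed annual downtime; 99.95% works out to roughly 4.4 hours per year.

```python
# Convert an availability target into allowed downtime per year.
def allowed_downtime_hours(availability_percent: float) -> float:
    hours_per_year = 365 * 24
    return hours_per_year * (1 - availability_percent / 100)

if __name__ == "__main__":
    for target in (99.9, 99.95, 99.99):
        print(f"{target}% uptime -> {allowed_downtime_hours(target):.1f} h/year of downtime")
```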

11 Identified US Tier2 Sites
ATLAS (BNL clients)
– Boston University
– Harvard University
– Indiana University Bloomington
– Langston University
– University of Chicago
– University of New Mexico Alb.
– University of Oklahoma Norman
– University of Texas at Arlington
– Calibration site: University of Michigan
CMS (FNAL clients)
– Caltech
– MIT
– Purdue University
– University of California San Diego
– University of Florida at Gainesville
– University of Nebraska at Lincoln
– University of Wisconsin at Madison

12 LHC ATLAS Bandwidth Matrix as of April 2007
(columns: Site A | Site Z | ESnet A | ESnet Z | A-Z 2007 Bandwidth | A-Z 2010 Bandwidth)
CERN | BNL | AofA (NYC) | BNL | 10 Gbps | 20-40 Gbps
BNL | U. of Michigan (Calibration) | BNL (LIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
BNL | Boston University, Harvard University | BNL (LIMAN) | Internet2 / NLR peerings | 3 Gbps (Northeastern Tier2 Center) | 10 Gbps (Northeastern Tier2 Center)
BNL | Indiana U. at Bloomington, U. of Chicago | BNL (LIMAN) | Internet2 / NLR peerings | 3 Gbps (Midwestern Tier2 Center) | 10 Gbps (Midwestern Tier2 Center)
BNL | Langston University, U. Oklahoma Norman, U. of Texas Arlington | BNL (LIMAN) | Internet2 / NLR peerings | 3 Gbps (Southwestern Tier2 Center) | 10 Gbps (Southwestern Tier2 Center)
BNL | Tier3 Aggregate | BNL (LIMAN) | Internet2 / NLR peerings | 5 Gbps | 20 Gbps
BNL | TRIUMF (Canadian ATLAS Tier1) | BNL (LIMAN) | Seattle | 1 Gbps | 5 Gbps

13 LHC CMS Bandwidth Matrix as of April 2007
(columns: Site A | Site Z | ESnet A | ESnet Z | A-Z 2007 Bandwidth | A-Z 2010 Bandwidth)
CERN | FNAL | Starlight (CHIMAN) | FNAL (CHIMAN) | 10 Gbps | 20-40 Gbps
FNAL | U. of Michigan (Calibration) | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
FNAL | Caltech | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
FNAL | MIT | FNAL (CHIMAN) | AofA (NYC) / Boston | 3 Gbps | 10 Gbps
FNAL | Purdue University | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
FNAL | U. of California at San Diego | FNAL (CHIMAN) | San Diego | 3 Gbps | 10 Gbps
FNAL | U. of Florida at Gainesville | FNAL (CHIMAN) | SOX | 3 Gbps | 10 Gbps
FNAL | U. of Nebraska at Lincoln | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
FNAL | U. of Wisconsin at Madison | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
FNAL | Tier3 Aggregate | FNAL (CHIMAN) | Internet2 / NLR peerings | 5 Gbps | 20 Gbps
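As a minimal sketch of how the per-path figures above roll up into an aggregate requirement at the Tier1 end (the kind of roll-up behind the aggregate link loading estimates on the next slides), the snippet below sums the CMS entries transcribed from the table. The printed totals are simple sums of those transcribed figures, and the 2010 value for the CERN-FNAL path is assumed at the midpoint of the 20-40 Gbps range.

```python
# Roll the per-path CMS figures above into an aggregate requirement at FNAL.
# Entries are transcribed from the table; totals are simple sums of them.
# Assumption: the 2010 CERN-FNAL value is taken at the midpoint of 20-40 Gbps.
cms_paths = [
    # (site A, site Z, 2007 Gbps, 2010 Gbps)
    ("CERN", "FNAL", 10, 30),
    ("FNAL", "U. of Michigan (Calibration)", 3, 10),
    ("FNAL", "Caltech", 3, 10),
    ("FNAL", "MIT", 3, 10),
    ("FNAL", "Purdue University", 3, 10),
    ("FNAL", "U. of California at San Diego", 3, 10),
    ("FNAL", "U. of Florida at Gainesville", 3, 10),
    ("FNAL", "U. of Nebraska at Lincoln", 3, 10),
    ("FNAL", "U. of Wisconsin at Madison", 3, 10),
    ("FNAL", "Tier3 Aggregate", 5, 20),
]

total_2007 = sum(path[2] for path in cms_paths)
total_2010 = sum(path[3] for path in cms_paths)
print(f"FNAL aggregate: {total_2007} Gbps (2007) -> {total_2010} Gbps (2010)")
```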

14 Estimated Aggregate Link Loadings, 2007-08
[Map of the ESnet4 backbone (Seattle, Sunnyvale, LA, San Diego, El Paso, Albuquerque, Denver, Salt Lake City, KC, Houston, Chicago, Nashville, Atlanta, Jacksonville, Raleigh, Wash. DC, Philadelphia, NYC, Boston, and other hubs) annotated with committed bandwidth in Gb/s; unlabeled links are 10 Gb/s. Legend distinguishes the ESnet IP core, Science Data Network core, SDN core / NLR links, lab-supplied links, LHC-related links, MAN links, and international IP connections.]

15 ESnet4 2007-8 Estimated Bandwidth Commitments
[Map of the ESnet4 backbone and metro-area networks (San Francisco Bay Area, West Chicago, Long Island, and Newport News - Elite MANs) annotated with committed bandwidth in Gb/s; all backbone circuits are 10 Gb/s. Shows FNAL connecting via 600 W. Chicago / Starlight / ANL and BNL via 32 AoA, NYC, each to USLHCNet / CERN at 10 Gb/s.]

16 Estimated Aggregate Link Loadings, 2010-11
[Map of the ESnet4 backbone annotated with link capacities (30-50 Gb/s on core segments) and committed bandwidth in Gb/s; unlabeled links are 10 Gb/s.]

17 ESnet4 2010-11 Estimated Bandwidth Commitments
[Map of the ESnet4 backbone annotated with committed bandwidth in Gb/s and Internet2 circuit numbers; unlabeled links are 10 Gb/s. Shows the FNAL (600 W. Chicago / Starlight / ANL) and BNL (32 AoA, NYC) connections to USLHCNet / CERN at the 40-100 Gb/s scale.]

18 2008 NP Workshop
Goals
– Accurately characterize the current and future network requirements for the NP Program Office's science portfolio
– Codify the requirements in a document
  - The document will contain the case studies and summary matrices
Structure
– Discussion of ESnet4 architecture and deployment
– NP science portfolio
– Internet2 (I2) perspective
– Round-table discussions of case study documents
  - Ensure that networking folks understand the science process, instruments and facilities, collaborations, etc. outlined in the case studies
  - Provide an opportunity for discussions of synergy, common strategies, etc.
  - Interactive discussion rather than formal PowerPoint presentations
– Collaboration services discussion – Wednesday morning

19 Questions? Thanks!

