1 ESnet
Joint Techs, Feb. 2005
William E. Johnston, ESnet Dept. Head and Senior Scientist
R. P. Singh, Federal Project Manager
Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads
Gizella Kapus, Resource Manager
and the ESnet Team
Lawrence Berkeley National Laboratory

2 ESnet's Mission
o Support the large-scale, collaborative science of DOE's Office of Science
o Provide high-reliability networking to support the operational traffic of the DOE Labs
  - Provide network services to other DOE facilities
o Provide leading-edge network and Grid services to support collaboration
ESnet is a component of the Office of Science infrastructure critical to the success of its research programs (program funded through the Office of Advanced Scientific Computing Research / MICS; managed and operated by ESnet staff at LBNL).

3 ESnet Physical Network – mid 2005: High-Speed Interconnection of DOE Facilities and Major Science Collaborators
[Network map: 42 end-user sites – Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), laboratory sponsored (6), and other sponsored (NSF LIGO, NOAA) – interconnected by the ESnet IP core (packet over SONET optical ring and hubs, 2.5 Gb/s and 10 Gb/s), the 10 Gb/s Science Data Network (SDN) core, MAN rings (> 10 Gb/s), and tail circuits ranging from OC12 (622 Mb/s) and GigEthernet down to OC3 (155 Mb/s), 45 Mb/s, and less. Hubs include SNV, SEA, CHI, NYC, ATL, DC, ELP, and ALB, with peering points (MAE-E, PAIX-PA, Equinix, Starlight/Chicago NAP, MAN LAN, MAX) and high-speed international links to CERN (DOE link), GEANT (Germany, France, Italy, UK, etc.), SInet (Japan), Japan-Russia (BINP), CA*net4, GLORIAD, Kreonet2, MREN, Netherlands, StarTap, TANet2, Taiwan (ASCC), Singaren, and Australia.]

4 ESnet Logical Network: Peering and Routing Infrastructure
[Diagram of ESnet peering points (connections to other networks): commercial and exchange-point peerings (MAE-E, MAE-W, FIX-W, PAIX-W, NY-NAP, Starlight, Chicago NAP / Distributed 6TAP, Equinix, NGIX), Abilene and university peerings (CENIC, SDSC, PNW-GPOP, CalREN2, MAX GPOP, LANL TECHnet), and international R&E peerings (CA*net4, CERN, France, GLORIAD, Kreonet2, MREN, Netherlands, StarTap, Taiwan ASCC/TANet2, GEANT, SInet/KEK Japan, Japan-Russia BINP, Australia, Singaren).]
ESnet supports collaboration by providing full Internet access: it manages the full complement of Global Internet routes (about 150,000 IPv4 routes from 180 peers) at 40 general/commercial peering points, plus high-speed peerings with Abilene and the international R&E networks. This is a lot of work and is very visible, but it provides full Internet access for DOE.
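To make the scale of that routing work concrete, here is a minimal sketch (not from the presentation) of tallying routes per peer from a plain-text routing-table export; the file name and the two-column "prefix peer-AS" format are assumptions for illustration only.

```python
from collections import Counter

def routes_per_peer(path="routes.txt"):
    """Count announced prefixes per peer from a two-column text export.

    Each line is assumed to look like: "198.51.100.0/24 AS293".
    """
    counts = Counter()
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                peer = parts[1]
                counts[peer] += 1
    return counts

if __name__ == "__main__":
    counts = routes_per_peer()
    print(f"total routes: {sum(counts.values())} from {len(counts)} peers")
    for peer, n in counts.most_common(10):
        print(f"{peer}: {n} routes")
```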

5 Drivers for the Evolution of ESnet
The network and middleware requirements to support DOE science were developed by the OSC science community representing major DOE science disciplines:
o Climate simulation
o Spallation Neutron Source facility
o Macromolecular Crystallography
o High Energy Physics experiments
o Magnetic Fusion Energy Sciences
o Chemical Sciences
o Bioinformatics
o (Nuclear Physics)
Available at www.es.net/#research
The network is essential for:
o long-term (final stage) data analysis
o "control loop" data analysis (influence an experiment in progress)
o distributed, multidisciplinary simulation
August 2002 workshop organized by the Office of Science: Mary Anne Scott, Chair; Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White. Workshop panel chairs: Ray Bair, Deb Agarwal, Bill Johnston, Mike Wilde, Rick Stevens, Ian Foster, Dennis Gannon, Linda Winkler, Brian Tierney, Sandy Merola, and Charlie Catlett.

6 Evolving Quantitative Science Requirements for Networks

Science Area                 | Today end-to-end throughput   | 5 years end-to-end throughput          | 5-10 years end-to-end throughput     | Remarks
High Energy Physics          | 0.5 Gb/s                      | 100 Gb/s                               | 1000 Gb/s                            | high bulk throughput
Climate (Data & Computation) | 0.5 Gb/s                      | 160-200 Gb/s                           | N x 1000 Gb/s                        | high bulk throughput
SNS NanoScience              | not yet started               | 1 Gb/s                                 | 1000 Gb/s + QoS for control channel  | remote control and time-critical throughput
Fusion Energy                | 0.066 Gb/s (500 MB/s burst)   | 0.198 Gb/s (500 MB per 20 sec. burst)  | N x 1000 Gb/s                        | time-critical throughput
Astrophysics                 | 0.013 Gb/s (1 TBy/week)       | N*N multicast                          | 1000 Gb/s                            | computational steering and collaborations
Genomics Data & Computation  | 0.091 Gb/s (1 TBy/day)        | 100s of users                          | 1000 Gb/s + QoS for control channel  | high throughput and steering
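As a check on the "Today" column, the data-volume figures convert to sustained rates as shown in this small sketch (an editorial illustration, not part of the slides); it assumes decimal terabytes and continuous transfer over the stated period.

```python
def sustained_gbps(terabytes: float, seconds: float) -> float:
    """Convert a data volume moved over a period into a sustained rate in Gb/s."""
    bits = terabytes * 1e12 * 8          # decimal TB -> bits
    return bits / seconds / 1e9          # bits/s -> Gb/s

DAY = 86_400
WEEK = 7 * DAY

# 1 TBy/week (Astrophysics) and 1 TBy/day (Genomics) from the table above
print(f"1 TBy/week ~= {sustained_gbps(1, WEEK):.3f} Gb/s")   # ~0.013 Gb/s
print(f"1 TBy/day  ~= {sustained_gbps(1, DAY):.3f} Gb/s")    # ~0.09 Gb/s
```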

7 ESnet is Currently Transporting About 350 Terabytes/Month
Growth over the past five years has been about 2.0x annually.
[Chart: ESnet monthly accepted traffic (TBytes/month), Jan. 1990 – Dec. 2004]
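A minimal sketch (editorial illustration, not from the slides) of what that growth rate implies if it simply continues; the five-year projection horizon is an assumption.

```python
def project_traffic(current_tb_per_month: float, annual_growth: float, years: int):
    """Project monthly traffic forward assuming a constant annual growth factor."""
    return [current_tb_per_month * annual_growth ** y for y in range(years + 1)]

# ~350 TB/month today, growing ~2.0x per year (figures from the slide)
for year, tb in enumerate(project_traffic(350, 2.0, 5)):
    print(f"year {year}: ~{tb:,.0f} TB/month")
```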

8 A Small Number of Science Users Account for a Significant Fraction of All ESnet Traffic
[Chart: top ESnet host-to-host flows, two months, 30-day averaged (TBytes/month), broken out as DOE Lab-International R&E, Lab-U.S. R&E, and Lab-Lab traffic, international and domestic.]
Top 100 host-host flows = 99 TBy; total ESnet traffic (Dec. 2004) = 330 TBy.
Note that this data does not include intra-Lab traffic. ESnet ends at the Lab border routers, so science traffic on the Lab LANs is invisible to ESnet.

9 [Chart of the largest host-to-host flows (TBytes/month): Fermilab (US) ↔ WestGrid (CA), Fermilab (US) ↔ IN2P3 (FR), SLAC (US) ↔ INFN CNAF (IT), SLAC (US) ↔ RAL (UK), SLAC (US) ↔ IN2P3 (FR), BNL (US) ↔ IN2P3 (FR), FNAL ↔ Karlsruhe (DE), plus smaller flows such as LIGO ↔ Caltech, LLNL ↔ NCAR, FNAL ↔ MIT, FNAL ↔ SDSC, FNAL ↔ Johns Hopkins, NERSC ↔ NASA Ames, LBNL ↔ U. Wisc., BNL ↔ LLNL, NERSC ↔ LBNL, and one unidentified source ↔ LBNL.]
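The 99 TBy of 330 TBy figure on the previous slide is simply the share of total monthly traffic carried by the largest flows. A minimal sketch of that aggregation follows; the flow-record format and host names are hypothetical.

```python
def top_flow_share(flows, total_tb, n=100):
    """Fraction of total monthly traffic carried by the n largest host-host flows.

    `flows` is a list of (src_host, dst_host, terabytes) tuples.
    """
    largest = sorted(flows, key=lambda f: f[2], reverse=True)[:n]
    top_tb = sum(tb for _, _, tb in largest)
    return top_tb, top_tb / total_tb

# Example with made-up records; the slide reports 99 TBy of 330 TBy (~30%).
sample = [("hostA.fnal.gov", "hostB.westgrid.ca", 12.0),
          ("hostC.slac.stanford.edu", "hostD.in2p3.fr", 9.5)]
top_tb, share = top_flow_share(sample, total_tb=330, n=100)
print(f"top flows: {top_tb} TBy, {share:.1%} of total")
```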

10 ESnet Traffic
Since BaBar (the SLAC high energy physics experiment) production started, the top 100 ESnet flows have consistently accounted for 30% - 50% of ESnet's monthly total traffic.
As LHC (the CERN high energy physics accelerator) data starts to move, this will increase a lot (200-2000 times):
o Both LHC Tier 1 centers (the primary U.S. experiment data centers) are at DOE Labs – Fermilab and Brookhaven
o U.S. Tier 2 centers (experiment data analysis) will be at universities – when they start pulling data from the Tier 1 centers, the traffic distribution will change a lot
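To give a feel for the 200-2000x range, this small sketch (editorial illustration) translates those multiples into sustained bandwidth; the ~100 TBy/month baseline for today's top science flows is taken from the earlier slide, and a 30-day month is assumed.

```python
def tb_per_month_to_gbps(tb_per_month: float) -> float:
    """Sustained rate (Gb/s) needed to carry a monthly volume, assuming a 30-day month."""
    return tb_per_month * 1e12 * 8 / (30 * 86_400) / 1e9

baseline = 100  # ~TBy/month in today's top science flows (from the earlier slide)
for factor in (200, 2000):
    volume = baseline * factor
    print(f"{factor}x -> ~{volume:,} TBy/month ~= {tb_per_month_to_gbps(volume):,.0f} Gb/s sustained")
```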

11 Monitoring DOE Lab ↔ University Connectivity
[Map of the current monitor infrastructure (red & green) and the target infrastructure: DOE Labs with monitors, universities with monitors, network hubs, and high-speed cross-connects between ESnet and Internet2/Abilene at hubs such as SEA, SNV, SDG, LA, DEN, ELP, ALB, KC, HOU, CHI, IND, ATL, NYC, and DC, with links to Japan, AsiaPac, Europe, and CERN, and ORNL shown on ESnet.]
Target: a uniform distribution of monitors around ESnet and around Abilene.
Initial site monitors: SDSC, LBNL, FNAL, NCS, BNL, OSU.

12 ESnet Evolution
With the current architecture ESnet cannot address:
o the increasing reliability requirements
  - Labs and science experiments are insisting on network redundancy
o the long-term bandwidth needs
  - LHC will need dedicated 10/20/30/40 Gb/s into and out of FNAL and BNL
  - Specific planning drivers include HEP, climate, SNS, ITER, SNAP, et al.
The current core ring cannot handle the anticipated large science data flows at affordable cost.
The current point-to-point tail circuits are neither reliable nor scalable to the required bandwidth.

13 ESnet Strategy – A New Architecture
Goals derived from science needs:
o Fully redundant connectivity for every site
o High-speed access to the core for every site (at least 20 Gb/s)
o 100 Gb/s national bandwidth by 2008
Three-part strategy:
1) Metropolitan Area Network (MAN) rings to provide dual site connectivity and much higher site-to-core bandwidth
2) A Science Data Network (SDN) core for
  - large, high-speed science data flows
  - multiply connecting MAN rings for protection against hub failure
  - a platform for provisioned, guaranteed-bandwidth circuits
  - an alternate path for production IP traffic
3) A high-reliability IP core (e.g., the current ESnet core) to address Lab operational requirements

14 ESnet MAN Architecture
[Diagram: a metropolitan-area ring of 2-4 x 10 Gbps channels, built from core router/switches managing multiple lambdas, connects each Lab's site gateway router, site equipment, and site LAN to two core hubs (T320 routers). The ring delivers two services to each site: ESnet production IP service via the ESnet production IP core (with R&E and international peerings) and ESnet-managed λ / circuit services via the ESnet SDN core, with the managed λ / circuit services tunneled through the IP backbone where needed. Monitors on the ring feed ESnet management and monitoring.]

15 New ESnet Strategy: Science Data Network + IP Core + MANs
[Map: the existing ESnet IP core hubs (New York (AOA), Chicago (CHI), Sunnyvale (SNV), Washington, DC (DC), El Paso (ELP), Atlanta (ATL)) joined by a second core, the ESnet Science Data Network, with new SDN hubs (e.g., Seattle (SEA), Albuquerque (ALB)) and possible new hubs, core loops, metropolitan area rings serving the primary DOE Labs, and links to CERN, GEANT (Europe), and Asia-Pacific.]

16 Tactics for Meeting Science Requirements – 2007/2008
[Map of the planned 2007/2008 network: the production IP ESnet core (>10 Gbps ??), the high-impact science core on the ESnet Science Data Network (2nd core, 30-50 Gbps, National Lambda Rail), and metropolitan area rings connecting the major DOE Office of Science sites, with high-speed cross-connects to Internet2/Abilene at the ESnet hubs (SEA, SNV, SDG, DEN, ELP, ALB, CHI, ATL, DC, NYC) and major international links to Japan, Australia, Europe, and CERN at 10 Gb/s, 30 Gb/s, and 40 Gb/s; lab-supplied links, 2.5 Gb/s and 10 Gb/s circuits, and future phases are also indicated. Targets: 10 Gbps enterprise IP traffic, 40-60 Gbps circuit-based transport.]

17 ESnet Services Supporting Science Collaboration
In addition to high-bandwidth network connectivity for the DOE Labs, ESnet provides several other services critical for collaboration. That is, ESnet provides several "science services" – services that support the practice of science:
o Access to collaborators ("peering")
o Federated trust – identity authentication
  - PKI certificates
  - crypto tokens
o Human collaboration – video, audio, and data conferencing
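For a concrete sense of the federated-trust service, the sketch below shows how a user or host credential request to a Grid CA typically begins: generate a key pair and a certificate signing request, then submit the request to the CA. This is an illustrative example using the Python cryptography library; the subject names are made up and do not reflect DOEGrids' actual naming policy or enrollment procedure.

```python
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a key pair for the new identity (hypothetical host credential).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build and sign a certificate signing request (CSR) to send to the CA.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"Example Grid VO"),     # assumed
        x509.NameAttribute(NameOID.COMMON_NAME, u"host/node1.example-lab.gov"),  # assumed
    ]))
    .sign(key, hashes.SHA256())
)

with open("request.pem", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```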

18 DOEGrids CA Usage Statistics
(Report as of Jan. 11, 2005; FusionGRID CA certificates not included here.)

User Certificates                      1386
Service Certificates                   2168
Host/Other Certificates                  15
Internal PKI SSL Server Certificates     36
Total No. of Certificates              3569
Total No. of Requests                  4776

19 DOEGrids CA Usage – Virtual Organization Breakdown
[Breakdown of certificates by virtual organization; * DOE-NSF collab.]

20 ESnet Collaboration Services: Production Services
o Web-based registration and audio/data bridge scheduling
o Ad-hoc H.323 and H.320 videoconferencing
o Streaming on the Codian MCU using QuickTime or Real
o "Guest" access to the Codian MCU via the worldwide Global Dialing System (GDS)
o Over 1,000 registered users worldwide

21 ESnet Collaboration Services: H.323 Video Conferencing
Radvision and Codian MCUs:
o 70 ports available on the Radvision at 384 kbps
o 40 ports on the Codian at 2 Mbps, plus streaming
o Usage has leveled off, but an increase is expected in early 2005 (new groups joining ESnet Collaboration)
o Radvision to increase to 200 ports at 384 kbps by mid-2005
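A quick sketch (editorial, not from the slides) of the aggregate bandwidth those port counts imply if every port were in use simultaneously:

```python
def aggregate_mbps(ports: int, kbps_per_port: float) -> float:
    """Aggregate bandwidth in Mb/s when all ports run at the given per-port rate."""
    return ports * kbps_per_port / 1000

print(f"Radvision today:    {aggregate_mbps(70, 384):.1f} Mb/s")   # 70 ports @ 384 kbps
print(f"Codian today:       {aggregate_mbps(40, 2000):.1f} Mb/s")  # 40 ports @ 2 Mbps
print(f"Radvision mid-2005: {aggregate_mbps(200, 384):.1f} Mb/s")  # 200 ports @ 384 kbps
```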

22 Conclusions
o ESnet is an infrastructure that is critical to DOE's science mission and that serves all of DOE
o ESnet is working to meet the networking requirements of DOE mission science through several new initiatives and a new architecture
o ESnet is very different today, in both its planning and business approach and its goals, than in the past

