ESnet Joint Techs, Feb. 2005

Presentation transcript:

1 ESnet Joint Techs, Feb. 2005
William E. Johnston, ESnet Dept. Head and Senior Scientist
R. P. Singh, Federal Project Manager
Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads
Gizella Kapus, Resource Manager
and the ESnet Team
Lawrence Berkeley National Laboratory

2 ESnet's Mission
- Support the large-scale, collaborative science of DOE's Office of Science
- Provide high-reliability networking to support the operational traffic of the DOE Labs
- Provide network services to other DOE facilities
- Provide leading-edge network and Grid services to support collaboration
ESnet is a component of the Office of Science infrastructure critical to the success of its research programs (program funded through the Office of Advanced Scientific Computing Research / MICS; managed and operated by ESnet staff at LBNL)

ESnet Physical Network – mid 2005: High-Speed Interconnection of DOE Facilities and Major Science Collaborators
[Network map. The ESnet IP core is a packet-over-SONET optical ring with hubs (SNV, ELP, CHI, ATL, DC, NYC, SEA, ALB, etc.), complemented by the ESnet Science Data Network (SDN) core. Link legend: international (high speed), 10 Gb/s SDN core, 10 Gb/s IP core, 2.5 Gb/s IP core, MAN rings (> 10 Gb/s), OC12 ATM (622 Mb/s), OC12 / GigEthernet, OC3 (155 Mb/s), 45 Mb/s and less. 42 end-user sites: Office of Science sponsored (22), NNSA sponsored (12), jointly sponsored (3), other sponsored (NSF LIGO, NOAA), laboratory sponsored (6). Peering points include MAE-E, PAIX-PA, Equinix, Starlight, MAN LAN, and Abilene; international connections include SInet (Japan), Japan – Russia (BINP), CA*net4, France, GLORIAD, Kreonet2, MREN, Netherlands, StarTap, Taiwan (ASCC, TANet2), Singaren, Australia, CERN (DOE link), and GEANT (Germany, France, Italy, UK, etc.).]

ESnet Logical Network: Peering and Routing Infrastructure
[Map of ESnet peering points (connections to other networks): commercial and general exchanges (MAE-E, MAE-W, NY-NAP, FIX-W, PAIX-W, Equinix, CHI NAP / Distributed 6TAP, NGIX), R&E exchanges and GigaPoPs (Starlight, MAX GPOP, PNW-GPOP, CENIC/CalREN2, SDSC), Abilene, and international peers (CA*net4, CERN, France, GLORIAD, Kreonet2, MREN, Netherlands, StarTap, Taiwan ASCC/TANet2, SInet/KEK Japan, Japan – Russia BINP, Australia, Singaren, GEANT: Germany, France, Italy, UK, etc.), with peer counts at each exchange.]
ESnet supports collaboration by providing full Internet access: it manages the full complement of global Internet routes (about 150,000 IPv4 routes from 180 peers) at 40 general/commercial peering points, plus high-speed peerings with Abilene and the international R&E networks. This is a lot of work, and is very visible, but it provides full Internet access for DOE.

5 Drivers for the Evolution of ESnet
The network and middleware requirements to support DOE science were developed by the OSC science community, representing major DOE science disciplines:
o Climate simulation
o Spallation Neutron Source facility
o Macromolecular Crystallography
o High Energy Physics experiments
o Magnetic Fusion Energy Sciences
o Chemical Sciences
o Bioinformatics
o (Nuclear Physics)
Available at
The network is essential for:
o long-term (final stage) data analysis
o "control loop" data analysis (influence an experiment in progress)
o distributed, multidisciplinary simulation
August 2002 workshop organized by the Office of Science. Mary Anne Scott, Chair; Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White. Workshop panel chairs: Ray Bair, Deb Agarwal, Bill Johnston, Mike Wilde, Rick Stevens, Ian Foster, Dennis Gannon, Linda Winkler, Brian Tierney, Sandy Merola, and Charlie Catlett

6 Evolving Quantitative Science Requirements for Networks (end-to-end throughput: today / 5 years / 5-10 years)
o High Energy Physics: today 0.5 Gb/s; 5 years 100 Gb/s; 5-10 years 1000 Gb/s (high bulk throughput)
o Climate (Data & Computation): today 0.5 Gb/s; 5 years … Gb/s; 5-10 years N x 1000 Gb/s (high bulk throughput)
o SNS NanoScience: today not yet started; 5 years 1 Gb/s; 5-10 years 1000 Gb/s + QoS for control channel (remote control and time-critical throughput)
o Fusion Energy: today 0.066 Gb/s (500 MB/s burst); 5 years … Gb/s (500 MB per 20 sec. burst); 5-10 years N x 1000 Gb/s (time-critical throughput)
o Astrophysics: today 0.013 Gb/s (1 TBy/week); 5 years N*N multicast; 5-10 years 1000 Gb/s (computational steering and collaborations)
o Genomics Data & Computation: today … Gb/s (1 TBy/day); 5 years 100s of users; 5-10 years 1000 Gb/s + QoS for control channel (high throughput and steering)
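The "today" figures in this table follow directly from the experiments' stated data volumes; as a rough check, the conversion from volume per period to sustained rate is simple arithmetic. A minimal sketch in Python (illustrative only; the Genomics "today" value is missing from the transcript, so the second line is just what 1 TBy/day implies):

```python
# Convert a periodic data volume into an average sustained rate in Gb/s.
# Illustrative check of the table's "today" figures (1 TBy/week, 1 TBy/day).

def avg_rate_gbps(terabytes: float, seconds: float) -> float:
    bits = terabytes * 1e12 * 8        # decimal terabytes -> bits
    return bits / seconds / 1e9        # bits per second -> Gb/s

WEEK = 7 * 24 * 3600
DAY = 24 * 3600

print(f"1 TBy/week ~ {avg_rate_gbps(1, WEEK):.3f} Gb/s")  # ~0.013 Gb/s (Astrophysics row)
print(f"1 TBy/day  ~ {avg_rate_gbps(1, DAY):.3f} Gb/s")   # ~0.093 Gb/s (Genomics row, estimated)
```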

7 ESnet is Currently Transporting About 350 Terabytes/mo.
Traffic has grown by roughly 2.0x annually over the past five years.
[Chart: ESnet monthly accepted traffic (TBytes/month), Jan. 1990 – Dec. 2004]
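As a rough illustration of what sustained 2x annual growth means, a few lines of Python project the ~350 TBytes/month figure forward (purely illustrative arithmetic, not an ESnet forecast):

```python
# Project monthly traffic forward assuming the observed ~2.0x annual growth continues.
# Starting point is the ~350 TBytes/month reported for late 2004; illustrative only.

traffic_tb = 350.0
for year in range(2004, 2011):
    print(f"{year}: ~{traffic_tb:,.0f} TBytes/month")
    traffic_tb *= 2.0
```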

A Small Number of Science Users Account for a Significant Fraction of All ESnet Traffic
[Chart: top flows, ESnet host-to-host, 2 months, 30-day averaged (TBytes/month), broken out by DOE Lab - International R&E, Lab - U.S. R&E, and Lab - Lab, international vs. domestic]
Note that this data does not include intra-Lab traffic. ESnet ends at the Lab border routers, so science traffic on the Lab LANs is invisible to ESnet.
- Top 100 host-host flows = 99 TBy
- Total ESnet traffic (Dec. 2004) = 330 TBy
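The share carried by those top flows follows directly from the two numbers above (illustrative arithmetic only):

```python
# Fraction of total monthly traffic carried by the top 100 host-host flows (Dec. 2004 figures).
top_100_tby = 99
total_tby = 330
print(f"Top 100 flows: {top_100_tby / total_tby:.0%} of total ESnet traffic")  # 30%
```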

9 [Chart: top ESnet host-to-host flows (TBytes/month). Flow pairs shown include Fermilab (US) ↔ WestGrid (CA), Fermilab (US) ↔ IN2P3 (FR), SLAC (US) ↔ INFN CNAF (IT), SLAC (US) ↔ RAL (UK), SLAC (US) ↔ IN2P3 (FR), BNL (US) ↔ IN2P3 (FR), FNAL ↔ Karlsruhe (DE), LIGO ↔ Caltech, LLNL ↔ NCAR, FNAL ↔ MIT, ?? ↔ LBNL, FNAL ↔ SDSC, FNAL ↔ Johns Hopkins, NERSC ↔ NASA Ames, LBNL ↔ U. Wisc., BNL ↔ LLNL, and NERSC ↔ LBNL.]

10 ESnet Traffic
- Since BaBar (the SLAC high energy physics experiment) production started, the top 100 ESnet flows have consistently accounted for 30% - 50% of ESnet's monthly total traffic
- As LHC (the CERN high energy physics accelerator) data starts to move, this will increase a lot (… times)
o Both U.S. LHC tier 1 sites (the primary U.S. experiment data centers) are at DOE Labs: Fermilab and Brookhaven
o U.S. tier 2 (experiment data analysis) centers will be at universities; when they start pulling data from the tier 1 centers, the traffic distribution will change a lot

11 Monitoring DOE Lab ↔ University Connectivity
[Map: current monitor infrastructure (red & green) and target infrastructure, with monitors distributed uniformly around ESnet and around Abilene. Shows DOE Labs with monitors, universities with monitors, network hubs, and high-speed cross-connects between ESnet and Internet2/Abilene, plus links to Japan, Europe (CERN), and AsiaPac. Initial site monitors: SDSC, LBNL, FNAL, NCS, BNL, OSU.]

12 ESnet Evolution
With the current architecture ESnet cannot address:
o the increasing reliability requirements
- Labs and science experiments are insisting on network redundancy
o the long-term bandwidth needs
- LHC will need dedicated 10/20/30/40 Gb/s into and out of FNAL and BNL
- Specific planning drivers include HEP, climate, SNS, ITER, and SNAP, et al.
The current core ring cannot handle the anticipated large science data flows at affordable cost.
The current point-to-point tail circuits are neither reliable nor scalable to the required bandwidth.

13 ESnet Strategy – A New Architecture
Goals derived from science needs:
o fully redundant connectivity for every site
o high-speed access to the core for every site (at least 20 Gb/s)
o 100 Gbps national bandwidth by 2008
Three-part strategy:
1) Metropolitan Area Network (MAN) rings to provide dual site connectivity and much higher site-to-core bandwidth
2) A Science Data Network core for
- large, high-speed science data flows
- multiply connecting MAN rings for protection against hub failure
- a platform for provisioned, guaranteed-bandwidth circuits
- an alternate path for production IP traffic
3) A high-reliability IP core (e.g. the current ESnet core) to address Lab operational requirements

14 ESnet MAN Architecture
[Diagram: Lab sites (site LAN, site equipment, site gateway routers) connect via a metro ring of core router switches managing multiple lambdas (multiple 10 Gbps channels) to both the ESnet production IP core and the ESnet SDN core (T320 core routers). The ring carries ESnet production IP service and ESnet-managed λ / circuit services tunneled through the IP backbone, and supports R&E and international peerings as well as ESnet management and monitoring.]

New ESnet Strategy: Science Data Network + IP Core + MANs
[Map: the existing ESnet IP core plus a second ESnet Science Data Network core. Existing IP core hubs: New York (AOA), Chicago (CHI), Sunnyvale (SNV), Washington, DC (DC), El Paso (ELP). New and possible new hubs include Seattle (SEA), Albuquerque (ALB), and Atlanta (ATL), plus SDN hubs. Metropolitan area rings and core loops serve the primary DOE Labs, with links to CERN, GEANT (Europe), and Asia-Pacific.]

16 Tactics for Meeting Science Requirements – 2007/2008
[Map: projected 2007/2008 configuration. Shows the production IP ESnet core (>10 Gbps), the ESnet Science Data Network (2nd core, on National Lambda Rail), metropolitan area rings (10 Gbps enterprise IP traffic plus circuit-based transport), a high-impact science core, Lab-supplied links, and major international links to Japan, Australia, Europe, and CERN (10/30/40 Gb/s), with major DOE Office of Science sites, ESnet hubs, and high-speed cross-connects with Internet2/Abilene; current links are 2.5 and 10 Gb/s, with future phases indicated.]

17 ESnet Services Supporting Science Collaboration
In addition to the high-bandwidth network connectivity for DOE Labs, ESnet provides several other services critical for collaboration. That is, ESnet provides several "science services" - services that support the practice of science:
o Access to collaborators ("peering")
o Federated trust - identity authentication (PKI certificates, crypto tokens); see the sketch below
o Human collaboration - video, audio, and data conferencing
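For readers unfamiliar with what a PKI identity certificate looks like in practice, a minimal sketch of inspecting one programmatically follows. It uses the third-party Python `cryptography` package and a hypothetical file name (`usercert.pem`); it is illustrative only and not a description of DOEGrids tooling.

```python
# Minimal sketch: inspect an X.509 identity certificate such as one issued by a Grid CA.
# Assumes the third-party "cryptography" package; "usercert.pem" is a hypothetical file name.
from cryptography import x509

with open("usercert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())   # who the certificate identifies
print("Issuer: ", cert.issuer.rfc4514_string())    # the certificate authority that signed it
print("Valid:  ", cert.not_valid_before, "to", cert.not_valid_after)
```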

18 DOEGrids CA Usage Statistics (as of Jan. 11, 2005; FusionGRID CA certificates not included)
o User certificates: 1386
o Service certificates: 2168
o Host/other certificates: 15
o Total no. of certificates: 3569
o Total no. of requests: 4776
o Internal PKI SSL server certificates: 36

19 DOEGrids CA Usage - Virtual Organization Breakdown
[Chart: certificates broken down by virtual organization. * denotes DOE-NSF collaboration]

20 ESnet Collaboration Services: Production Services
o Web-based registration and audio/data bridge scheduling
o Ad-hoc H.323 and H.320 videoconferencing
o Streaming on the Codian MCU using QuickTime or Real
o "Guest" access to the Codian MCU via the worldwide Global Dialing System (GDS)
o Over 1000 registered users worldwide

21 ESnet Collaboration Services: H.323 Video Conferencing
Radvision and Codian MCUs:
o 70 ports on Radvision available at 384 kbps
o 40 ports on Codian at 2 Mbps, plus streaming
o Usage has leveled off, but an increase is expected in early 2005 (new groups joining ESnet Collaboration)
o Radvision to increase to 200 ports at 384 kbps by mid-2005
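As a back-of-the-envelope illustration (not from the slides), the aggregate bandwidth these port counts imply is modest compared with the core capacities discussed earlier:

```python
# Rough aggregate bandwidth implied by the MCU port counts (illustrative arithmetic only).
radvision_now   = 70 * 384e3    # 70 ports at 384 kbps
codian_now      = 40 * 2e6      # 40 ports at 2 Mbps
radvision_mid05 = 200 * 384e3   # planned 200 ports at 384 kbps

for name, bps in [("Radvision (now)", radvision_now),
                  ("Codian (now)", codian_now),
                  ("Radvision (mid-2005 plan)", radvision_mid05)]:
    print(f"{name}: {bps / 1e6:.1f} Mb/s if all ports are in use")
```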

22 Conclusions
o ESnet is an infrastructure that is critical to DOE's science mission and that serves all of DOE
o ESnet is working to meet the networking requirements of DOE mission science through several new initiatives and a new architecture
o ESnet's planning, business approach, and goals are very different today than they were in the past