1 ESnet: DOE's Science Network. GNEW, March 2004. William E. Johnston, ESnet Manager and Senior Scientist; Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads; and the ESnet Team. Lawrence Berkeley National Laboratory.

2 ESnet Provides
High bandwidth backbone and connections for Office of Science Labs and programs
High bandwidth peering with the US, European, and Japanese Research and Education networks
SecureNet (DOE classified R&D) as an overlay network
Science services – Grid and collaboration services
User support: ESnet "owns" all network trouble tickets (even from end users) until they are resolved
 o one-stop shopping for user network problems
 o 7x24 coverage
 o both network and science services problems

3 ESnet Connects DOE Facilities and Collaborators
[Map: the ESnet backbone is an optical ring with hubs at Seattle, Sunnyvale, El Paso, Albuquerque, Chicago, New York, Washington DC, and Atlanta, connecting 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), laboratory sponsored (6), and other sponsored (NSF LIGO, NOAA). Site links range from T1 and T3 (45 Mb/s) through OC12 ATM (622 Mb/s) and Gigabit Ethernet (1 Gb/s) to OC48 (2.5 Gb/s) and OC192 (10 Gb/s). Peering points (MAE-E, MAE-W, PAIX-E/W, Starlight, Chicago NAP, Fix-W, NY-NAP, Equinix, PNWG) and international connections (GEANT – Germany, France, Italy, UK, etc.; Sinet and KDDI, Japan; Russia (BINP); CA*net4; CERN; MREN; Netherlands; StarTap; Taiwan (ASCC, TANet2); Singaren; Australia) connect ESnet to its collaborators, along with a high-speed peering with Abilene.]

4 ESnet is Driven by the Needs of DOE Science
August 13-15, 2002 workshop, organized by the Office of Science. Focused on the science requirements that drive:
 o Advanced Network Infrastructure
 o Middleware Research
 o Network Research
 o Network Governance Model
Workshop organizers: Mary Anne Scott (Chair), Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White.
Workshop panel chairs: Ray Bair and Deb Agarwal; Bill Johnston and Mike Wilde; Rick Stevens; Ian Foster and Dennis Gannon; Linda Winkler and Brian Tierney; Sandy Merola and Charlie Catlett.
Report available at es.net/#research.

5 Eight Major DOE Science Areas Analyzed at the August '02 Workshop
For each discipline the workshop captured a vision for the future process of science, the characteristics that motivate high speed networks, and the resulting networking and middleware requirements. Climate is shown here as an example:
Climate (near term)
 o Vision: analysis of model data by selected communities that have high speed networking (e.g. NCAR and NERSC)
 o Characteristics: a few data repositories, many distributed computing sites; NCAR – 20 TBy, NERSC – 40 TBy, ORNL – 40 TBy
 o Networking: authenticated data streams for easier site access through firewalls
 o Middleware: server side data processing (computing and cache embedded in the net); information servers for global data catalogues
Climate (5 yr)
 o Vision: enable the analysis of model data by all of the collaborating community; add many simulation elements/components as understanding increases
 o Characteristics: 100 TBy / 100 yr generated simulation data, 1-5 PBy / yr (just at NCAR); distribute large chunks of data to major users for post-simulation analysis
 o Networking: robust access to large quantities of data
 o Middleware: reliable data/file transfer (across system / network failures)
Climate (5+ yr)
 o Vision: integrated climate simulation that includes all high-impact factors; add many diverse simulation elements/components, including from other disciplines – this must be done with distributed, multidisciplinary simulation
 o Characteristics: 5-10 PBy/yr (at NCAR); virtualized data to reduce storage load
 o Networking: robust networks supporting distributed simulation – adequate bandwidth and latency for remote analysis and visualization of massive datasets; quality of service guarantees for distributed simulations
 o Middleware: virtual data catalogues and work planners for reconstituting the data on demand

6 Evolving Qualitative Requirements for Network Infrastructure
In the near term (1-3 yrs) applications need high bandwidth: 1-40 Gb/s, end-to-end.
2-4 yr requirement is for high bandwidth and QoS.
3-5 yr requirement is for high bandwidth and QoS and network resident cache and compute elements.
4-7 yr requirement is for high bandwidth and QoS and network resident cache and compute elements, and robust bandwidth (multiple paths), with guaranteed bandwidth paths end-to-end.
[Diagram: instruments (I), compute (C), storage (S), and cache & compute (C&C) elements become progressively embedded in the network across the 1-3, 2-4, 3-5, and 4-7 yr timeframes.]

7 Evolving Quantitative Science Requirements for Networks (end-to-end throughput: today / 5 years / 5-10 years, with remarks)
High Energy Physics: 0.5 Gb/s / 100 Gb/s / 1000 Gb/s – high bulk throughput
Climate (Data & Computation): 0.5 Gb/s / Gb/s / N x 1000 Gb/s – high bulk throughput
SNS NanoScience: not yet started / 1 Gb/s / 1000 Gb/s + QoS for control channel – remote control and time critical throughput
Fusion Energy: 0.066 Gb/s (500 MB/s burst) / Gb/s (500 MB / 20 sec. burst) / N x 1000 Gb/s – time critical throughput
Astrophysics: 0.013 Gb/s (1 TBy/week) / N*N multicast / 1000 Gb/s – computational steering and collaborations
Genomics Data & Computation: Gb/s (1 TBy/day) / 100s of users / 1000 Gb/s + QoS for control channel – high throughput and steering
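The throughput figures above can be sanity-checked by converting the quoted data volumes to average rates. Below is a minimal Python sketch of that arithmetic, assuming decimal units (1 TBy = 10^12 bytes); it reproduces the Astrophysics "today" figure and gives rough values for the entries whose numbers were lost in this transcript.

```python
# Back-of-the-envelope conversion of the data volumes quoted above into
# average throughput, assuming decimal units (1 TBy = 1e12 bytes).

def gbps(total_bytes: float, seconds: float) -> float:
    """Average rate in Gb/s needed to move total_bytes in the given time."""
    return total_bytes * 8 / seconds / 1e9

DAY = 86_400
WEEK = 7 * DAY

print(f"1 TBy/week           -> {gbps(1e12, WEEK):.3f} Gb/s")  # ~0.013 Gb/s (Astrophysics, today)
print(f"1 TBy/day            -> {gbps(1e12, DAY):.3f} Gb/s")   # ~0.093 Gb/s (Genomics, today)
print(f"500 MB burst in 20 s -> {gbps(500e6, 20):.1f} Gb/s")   # 0.2 Gb/s averaged over the burst
print(f"500 MB/s burst       -> {gbps(500e6, 1):.1f} Gb/s")    # 4.0 Gb/s instantaneous (Fusion burst)
```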

8 New Strategic Directions to Address Needs of DOE Science
June 3-5, 2003 workshop, organized by the ESSC. Focused on what was needed to achieve the science driven network requirements of the previous workshop.
Workshop chair: Roy Whitney, JLAB. Report editors: Roy Whitney, JLAB; Larry Price, ANL.
Workshop panel chairs: Wu-chun Feng, LANL; William Johnston, LBNL; Nagi Rao, ORNL; David Schissel, GA; Vicky White, FNAL; Dean Williams, LLNL.
Both workshop reports are available at es.net/#research.

9 ESnet Strategic Directions
Developing a 5 yr. strategic plan for how to provide the required capabilities identified by the workshops
 o Between DOE Labs and their major collaborators in the University community we must address
  - scalable bandwidth
  - reliability
  - quality of service
 o Must address an appropriate set of Grid and human collaboration supporting middleware services

10 ESnet Connects DOE Facilities and Collaborators
[Map repeated from slide 3: the ESnet backbone optical ring and hubs, the 42 end user sites, and the peering and international connections.]

11 While ESnet Has One Backbone Provider, there are Many Local Loop Providers to Get to the Sites
[Map: the Qwest-provided backbone ring reaches the sites over local loops that are variously Qwest owned or contracted, MCI contracted/owned, Touch America (bankrupt), FTS2000 contracted/owned, SBC (PacBell) contracted/owned, Sprint contracted/owned, Level3, or site contracted/owned (e.g. LBNL/CalRen2, SDSC/CENIC).]

12 ESnet Logical Infrastructure Connects the DOE Community With its Collaborators
ESnet provides complete access to the Internet by managing the full complement of global Internet routes (about 150,000) at 10 general/commercial peering points, plus high-speed peerings with Abilene and the international research and education networks.

13 ESnet Traffic
Annual traffic growth over the past five years has increased from 1.7x per year to just over 2.0x per year.
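For a rough sense of what these growth rates imply, a short sketch (illustrative arithmetic only, not an ESnet projection):

```python
# How total traffic multiplies relative to today under sustained annual growth
# of 1.7x (historical) versus 2.0x (current), over one to five years.

for factor in (1.7, 2.0):
    growth = ", ".join(f"{years} yr: {factor ** years:.0f}x" for years in (1, 3, 5))
    print(f"{factor}x per year -> {growth}")
# 1.7x per year -> 1 yr: 2x, 3 yr: 5x, 5 yr: 14x
# 2.0x per year -> 1 yr: 2x, 3 yr: 8x, 5 yr: 32x
```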

14 Who Generates Traffic, and Where Does it Go?
ESnet Appropriate Use Policy (AUP): all ESnet traffic must originate and/or terminate at an ESnet site (no transit traffic is allowed). E.g. a commercial site cannot exchange traffic with an international site across ESnet. This is effected via routing restrictions.
DOE is a net supplier of data because DOE facilities are used by university and commercial researchers as well as by DOE researchers.
[Figure: ESnet Inter-Sector Traffic Summary, Jan 2003 – ingress (green) and egress (blue) traffic between the DOE sites, the peering points, and the commercial, R&E, and international sectors, expressed as a percentage of total ingress or egress traffic; DOE collaborator traffic, including data, dominates the flows.]
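The AUP's no-transit rule is simple to state precisely; here is a hypothetical sketch in Python (sector labels and the helper function are invented for illustration; the actual enforcement is done with BGP routing restrictions, not application code):

```python
# Hypothetical illustration of the AUP rule: traffic must originate and/or
# terminate at an ESnet site, so pure transit (e.g. commercial <->
# international) is not carried. Sector names here are invented labels.

ESNET_SITES = {"doe-site"}   # stand-in for "one of the 42 ESnet end user sites"

def aup_allows(src_sector: str, dst_sector: str) -> bool:
    """True if at least one endpoint is an ESnet site (no transit traffic)."""
    return src_sector in ESNET_SITES or dst_sector in ESNET_SITES

assert aup_allows("doe-site", "commercial")           # Lab <-> commercial: allowed
assert not aup_allows("commercial", "international")  # transit: not allowed
```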

15 ESnet Site Architecture
[Diagram: the backbone is an optical fiber ring with hubs (backbone routers and local loop connection points) at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP). A local loop runs from a hub to each site; the ESnet border router and DMZ are ESnet's responsibility, and the site gateway router and site LAN are the site's responsibility. The hubs have lots of connections (42 in all).]

16 SecureNet
SecureNet connects 10 NNSA (Defense Programs) Labs. It is essentially a VPN with special encrypters:
 o The NNSA sites exchange encrypted ATM traffic
 o The data is unclassified when ESnet gets it because it is encrypted, with an NSA certified encrypter, before it leaves the NNSA sites
SecureNet runs over the ESnet core backbone as a layer 2 overlay – that is, the SecureNet encrypted ATM is transported over ESnet's Packet-over-SONET infrastructure by encapsulating the ATM in MPLS using Juniper CCC.
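As a purely conceptual sketch of that layering (not a router configuration; the class and field names are invented), the already-encrypted ATM payload is simply wrapped in an MPLS label for transport across the Packet-over-SONET core:

```python
# Conceptual model of the SecureNet overlay: ESnet carries already-encrypted
# ATM cells inside MPLS (Juniper CCC) over its Packet-over-SONET core, so the
# payload is never in the clear on ESnet. Names here are illustrative only.

from dataclasses import dataclass

@dataclass
class EncryptedAtmCell:
    vpi_vci: str        # ATM circuit identifier used between NNSA sites
    ciphertext: bytes   # encrypted at the site with an NSA certified encrypter

@dataclass
class MplsFrame:
    label: int                 # label of the CCC cross-connect for this circuit
    inner: EncryptedAtmCell    # the overlay payload, carried transparently

def encapsulate(cell: EncryptedAtmCell, ccc_label: int) -> MplsFrame:
    """Wrap an encrypted ATM cell for transport across the POS backbone."""
    return MplsFrame(label=ccc_label, inner=cell)

frame = encapsulate(EncryptedAtmCell("0/100", b"<ciphertext>"), ccc_label=100042)
print(frame.label, len(frame.inner.ciphertext))
```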

17 SecureNet – Mid 2003
[Map: primary and backup SecureNet paths connect the NNSA sites (SNLL, LLNL, LANL, SNLA, DOE-AL, ORNL, KCP, Pantex, SRS, GTN) via the SNV, ELP, CHI, AOA, DC, and ATL hubs.]
SecureNet encapsulates payload encrypted ATM in MPLS using the Juniper router Circuit Cross Connect (CCC) feature.

18 IPv6-ESnet Backbone
IPv6 is the next generation Internet protocol, and ESnet is working on addressing deployment issues
 - one big improvement is that while IPv4 has 32 bit – about 4x10^9 – addresses (which we are running short of), IPv6 has 128 bit – about 3.4x10^38 – addresses (which we are not ever likely to run short of)
 - another big improvement is native support for encryption of data
[Diagram: the IPv6-ESnet backbone of 7206 routers at Sunnyvale, Albuquerque, El Paso, Atlanta, New York, Chicago, and DC, connecting SLAC, LBL/LBNL, BNL, ANL, FNAL, and ESnet itself, with IPv6 peerings (6BONE, distributed 6TAP, PAIX, StarLight, TWC, Abilene) and a mix of IPv6-only, IPv4/IPv6, and IPv4-only links.]
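The address-space comparison in the first bullet is easy to reproduce:

```python
# Address-space arithmetic behind the IPv4 vs. IPv6 comparison above.
ipv4_addresses = 2 ** 32     # 32-bit addresses
ipv6_addresses = 2 ** 128    # 128-bit addresses
print(f"IPv4: {ipv4_addresses:.3e} addresses")   # ~4.295e+09
print(f"IPv6: {ipv6_addresses:.3e} addresses")   # ~3.403e+38
```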

19 Operating Science Mission Critical Infrastructure
ESnet is a visible and critical piece of DOE science infrastructure
 o if ESnet fails, 10s of thousands of DOE and University users know it within minutes if not seconds
This requires high reliability and high operational security in the ESnet operational services – the systems that are integral to the operation and management of the network
 o Secure and redundant mail and Web systems are central to the operation and security of ESnet
  - trouble tickets are by email
  - engineering communication is by email
  - the engineering database interface is via Web
 o Secure network access to Hub equipment
 o Backup secure telephony access to Hub equipment
 o 24x7 help desk (joint with NERSC)
 o 24x7 on-call network engineer

20 Disaster Recovery and Stability
The network operational services must be kept available even if, e.g., the West coast is disabled by a massive earthquake
 o ESnet engineers in four locations across the country
 o Full and partial engineering databases and network operational service replicas in three locations
 o Telephone modem backup access to all hub equipment
All core network hubs are located in commercial telecommunication facilities with high physical security and backup power

21 Disaster Recovery and Stability
The ESnet backbone operated without interruption through
 o the N. Calif. power blackout of 2000
 o the 9/11 attacks
 o the Sept. 2003 NE States power blackout
[Diagram: the primary infrastructure – engineers, the 24x7 NOC, and generator backed power, together with Spectrum (net mgmt system), DNS (name – IP address translation), the engineering, load, and config databases, public and private Web (server and archive), the PKI cert. repository and revocation lists, and the collaboratory authorization service – is complemented by remote engineers and partial duplicate infrastructure (eng/load/config servers and DNS) at sites including TWC, PPPL, BNL, and AMES; duplicate infrastructure (currently deploying full replication of the NOC databases and servers and the Science Services databases) is located at SDSC.]

22 Maintaining Science Mission Critical Infrastructure in the Face of Cyberattack
A phased security architecture is being implemented to protect the network and the sites. The phased response ranges from blocking certain site traffic to a complete isolation of the network, which allows the sites to continue communicating among themselves in the face of the most virulent attacks
 o Separates ESnet core routing functionality from external Internet connections by means of a "peering" router that can have a policy different from the core routers
 o Provides a rate limited path to the external Internet that will ensure site-to-site communication during an external denial of service attack
 o Provides "lifeline" connectivity for downloading of patches, exchange of email, and viewing web pages (i.e. email, dns, http, https, ssh, etc.) with the external Internet prior to full isolation of the network

23 Phased Response to Cyberattack
 o Lab first response – filter incoming traffic at their ESnet gateway router
 o ESnet first response – filters to assist a site
 o ESnet second response – filter traffic from outside of ESnet
 o ESnet third response – shut down the main peering path and provide only a limited bandwidth path for specific "lifeline" services
Example: the Sapphire/Slammer worm infection created almost a Gb/s traffic spike on the ESnet backbone until filters were put in place (both into and out of sites) to damp it out.
[Diagram: the ESnet peering router, ESnet border router, and a Lab's gateway router, showing attack traffic being blocked at each successive point.]
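A hypothetical sketch of the escalation logic (service names and the phase numbering are illustrative; in practice this is implemented as filters and rate limits on the peering and gateway routers, not application code):

```python
# Illustrative model of the phased response: what external traffic is still
# carried at each escalation level. Not ESnet's actual filter configuration.

LIFELINE_SERVICES = {"email", "dns", "http", "https", "ssh"}
KNOWN_ATTACK_TRAFFIC = {"slammer-udp-1434"}   # e.g. the Sapphire/Slammer worm

def external_traffic_carried(phase: int, service: str) -> bool:
    if phase == 0:                                  # normal operation
        return True
    if phase in (1, 2):                             # site and ESnet filters applied
        return service not in KNOWN_ATTACK_TRAFFIC
    # phase 3: main peering path shut down; only a rate-limited "lifeline"
    # path to the external Internet remains for specific services
    return service in LIFELINE_SERVICES

for svc in ("https", "slammer-udp-1434", "gridftp"):
    print(svc, [external_traffic_carried(p, svc) for p in range(4)])
```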

24 Future Directions – the 5 yr Network Strategy
Elements:
 o University connectivity
 o Scalable and reliable site connectivity
 o Provisioned circuits for hi-impact science bandwidth
 o Close collaboration with the network R&D community
 o Services supporting science (Grid middleware, collaboration services, etc.)

25 5 yr Strategy – Near Term Goal 1
Connectivity between any DOE Lab and any major university should be as good as ESnet connectivity between DOE Labs and Abilene connectivity between universities
 o Partnership with I2/Abilene
 o Multiple high-speed peering points
 o Routing tailored to take advantage of this
 o Latency and bandwidth from DOE Lab to university should be comparable to intra-ESnet or intra-Abilene
 o Continuous monitoring infrastructure to verify

26 5 yr Strategy – Near Term Goal 2
Connectivity between ESnet and R&D nets – a critical issue from the Roadmap
 o UltraScienceNet and NLR for starters
 o Reliable, high bandwidth cross-connects
  1) IWire ring between the Qwest – ESnet Chicago hub and Starlight
   - this is also critical for DOE lab connectivity to the DOE funded LHCNet 10 Gb/s link to CERN
   - both LHC tier 1 sites in the US – Atlas and CMS – are at DOE Labs
  2) ESnet ring between the Qwest – ESnet Sunnyvale hub and the Level 3 Sunnyvale hub that houses the West Coast POP for NLR and UltraScienceNet

27 5 yr Strategy – Near-Medium Term Goal
Scalable and reliable site connectivity
 o Fiber / lambda ring based Metropolitan Area Networks
 o Preliminary engineering study completed for the San Francisco Bay Area and the Chicago area
  - proposal submitted
  - at least one of these is very likely to be funded this year
Hi-impact science bandwidth – provisioned circuits

28 ESnet Future Architecture
Migrate site local loops to ring structured Metropolitan Area Networks and regional nets in some areas
 o Goal is local rings, like the backbone, that provide multiple paths
Dynamic provisioning of private "circuits" in the MAN and through the backbone to provide "high impact science" connections
 o This should allow high bandwidth circuits to go around site firewalls to connect specific systems (e.g. HPSS to HPSS). The circuits are secure and end-to-end, so if the sites trust each other, they should allow direct connections if they have compatible security policies.
Partnership with DOE UltraNet, Internet2 HOPI, and National Lambda Rail
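As a rough sketch of what such a provisioned "high impact science" circuit might be described by (all field names are hypothetical illustrations, not an ESnet interface definition):

```python
# Hypothetical data model for an end-to-end provisioned circuit request in the
# architecture described above; field names are invented for illustration.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class CircuitRequest:
    src_host: str            # specific system, e.g. an HPSS mover at one lab
    dst_host: str            # peer system at the collaborating lab
    bandwidth_gbps: float    # guaranteed bandwidth for the circuit
    start: datetime
    end: datetime
    transport: str = "mpls"  # initially MPLS paths, eventually lambda paths

req = CircuitRequest("hpss.lab-a.example", "hpss.lab-b.example",
                     10.0, datetime(2005, 6, 1, 8), datetime(2005, 6, 1, 20))
print(req)
```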

29 ESnet Future Architecture – Metropolitan Area Networks
[Diagram: a MAN core ring built on one optical fiber pair, with DWDM providing point-to-point, unprotected circuits. Optical channel (λ) management equipment, layer 2 management equipment (e.g. a 10 Gigabit Ethernet switch), and layer 3 (IP) management equipment (a router) sit at each site. The sites' production IP traffic and the provisioned circuits are carried over lambdas in the MAN; the provisioned circuits are carried as tunnels through the ESnet IP backbone – initially via MPLS paths, eventually via lambda paths.]

30 ESnet MAN Architecture – Example
[Diagram: ANL and FNAL site equipment and site gateway routers connect to a MAN ring terminating at the Qwest hub (ESnet core, T320 router) and at a vendor neutral facility (StarLight), which also carries the DOE funded CERN link and other international peerings. The ring is implemented via 2 VLANs – one in each direction around the ring – with an Ethernet switch and DMZ VLANs at each site; current DMZs are back-hauled to the core router. ESnet provides the production IP service and managed λ / circuit services tunneled through the IP backbone via MPLS, plus management and monitoring of the provisioned circuits – partly to compensate for there being no site router.]

31 Future ESnet Architecture
[Diagram: each site connects to a MAN optical fiber ring through its site gateway router, site LAN, and DMZ; an ESnet border / circuit cross connect joins the MAN to the ESnet backbone (hubs at New York (AOA), Atlanta (ATL), Washington, El Paso (ELP), etc.). A private "circuit" can run from one Lab to another, connecting a specific host, instrument, etc. under a common security policy.]

32 Long-Term ESnet Connectivity Goal
MANs for scalable bandwidth and redundant site access to the backbone
Connecting the MANs with two backbones to ensure against hub failure (for example, NLR is shown as the second backbone below)
[Diagram: the Qwest and NLR backbones, the MANs and local loops, high-speed cross connects with Internet2/Abilene, links to Japan, CERN/Europe, and Europe, and the major DOE Office of Science sites.]

33 Long-Term ESnet Bandwidth Goal
Harvey Newman: "And what about increasing the bandwidth in the backbone?"
Answer: technology progress
 o By 2008 (the next generation ESnet backbone) DWDM technology will be 40 Gb/s per lambda
 o And the backbone will be multiple lambdas
Issues
 o End-to-End, end-to-end, and end-to-end

34 Science Services Strategy
ESnet is in a natural position to be the provider of choice for a number of middleware services that support collaboration, collaboratories, Grids, etc.
The characteristics of ESnet that make it a natural middleware provider are that ESnet
 o is the only computing related organization that serves all of the Office of Science
 o is trusted and well respected in the OSC community
 o has the 7x24 infrastructure required to support critical services, and is a long-term stable organization
The characteristics of the services for which ESnet is the natural provider are those that
 o require long-term persistence of the service or the data associated with the service
 o require high availability and a high degree of integrity on the part of the provider
 o are situated at the root of a hierarchy, so that the service scales in the number of people that it serves by adding nodes that are managed by local organizations (so that ESnet does not have a large and constantly growing direct user base)

35 Science Services Strategy
The DOE Grids CA, which provides X.509 identity certificates to support Grid authentication, is an example of this model
 o the service requires a highly trusted provider and a high degree of availability
 o it provides a centralized agent for negotiating trust relationships with, e.g., European CAs
 o it scales by adding site based or Virtual Organization based Registration Agents that interact directly with the users

36 Science Services: Public Key Infrastructure
Public Key Infrastructure supports cross-site, cross-organization, and international trust relationships that permit sharing computing and data resources and other Grid services
Digital identity certificates for people, hosts and services are an essential core service for Grid middleware
 o provides formal and verified trust management – an essential service for widely distributed heterogeneous collaboration, e.g. in the international High Energy Physics community
 o DOE Grids CA: have recently added a second CA with a policy that permits bulk issuing of certificates with central private key management
  - important for secondary issuers: NERSC will auto issue certs when accounts are set up – this constitutes an acceptable identity verification
  - may also be needed for security domain gateways such as Kerberos – X.509, e.g. KX509
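As a small sketch of what a relying party or Registration Agent checks on such an identity certificate – issuer, subject, and validity window – using the third-party Python "cryptography" package (the file path is illustrative; this is not an ESnet or DOE Grids tool):

```python
# Minimal inspection of an X.509 identity certificate: who issued it, who it
# names, and whether it is currently within its validity period.

from datetime import datetime, timezone
from cryptography import x509
from cryptography.hazmat.backends import default_backend

def describe_cert(pem_path: str) -> None:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read(), default_backend())
    now = datetime.now(timezone.utc).replace(tzinfo=None)   # naive UTC, like the cert fields
    print("subject:", cert.subject.rfc4514_string())
    print("issuer: ", cert.issuer.rfc4514_string())          # e.g. the issuing CA
    print("valid:  ", cert.not_valid_before <= now <= cert.not_valid_after)

# describe_cert("usercert.pem")   # path is illustrative
```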

37 Science Services: Public Key Infrastructure
The Policy Management Authority negotiates and manages the formal trust instrument (the Certificate Policy – CP)
 o Sets and interprets procedures that are carried out by ESnet
 o Currently facing an important oversight situation involving potential compromise of user X.509 cert private keys
  - Boys-from-Brazil style exploit => keyboard sniffer on several systems that housed Grid certs
  - Is there sufficient forensic information to say that the private keys were not compromised?
  - Is any amount of forensic information sufficient to guarantee this, or should the certs be revoked?
  - Policy refinement by experience
Registration Agents (RAs) validate users against the CP and authorize the CA to issue digital identity certs
This service was the basis of the first routine sharing of HEP computing resources between the US and Europe

38 Science Services: Public Key Infrastructure The rapidly expanding customer base of this service will soon make it ESnet’s largest collaboration service by customer count

39 Voice, Video, and Data Collaboration Service
The other highly successful ESnet Science Service is the audio, video, and data teleconferencing service that supports human collaboration
Seamless voice, video, and data teleconferencing is important for geographically dispersed collaborators
 o ESnet currently provides voice conferencing, videoconferencing (H.320/ISDN scheduled, H.323/IP ad-hoc), and data collaboration services to more than a thousand DOE researchers worldwide

40 Voice, Video, and Data Collaboration Service
 o Heavily used services, averaging around
  - port hours per month for H.320 videoconferences
  - port hours per month for audio conferences
  - port hours per month for H.323
  - approximately 200 port hours per month for data conferencing
Web-based registration and scheduling for all of these services
 o authorizes users efficiently
 o lets them schedule meetings
Such an automated approach is essential for a scalable service – ESnet staff could never handle all of the reservations manually

41 Science Services Strategy
The Roadmap Workshop identified twelve high priority middleware services, and several of these fit the criteria for ESnet support. These include, for example
 o long-term PKI key and proxy credential management (e.g. an adaptation of the NSF's MyProxy service)
 o directory services that virtual organizations (VOs) can use to manage organization membership, member attributes and privileges
 o perhaps some form of authorization service
 o in the future, some knowledge management services that have the characteristics of an ESnet service are also likely to be important
ESnet is seeking the additional funding necessary to develop, deploy, and support these types of middleware services.

42 Conclusions
ESnet is an infrastructure that is critical to DOE's science mission; it serves all of DOE but is focused on the Office of Science Labs.
ESnet is evolving its architecture and services strategy to meet the stated requirements for bandwidth, reliability, QoS, and Grid and collaboration supporting services.