Services to the US Tier-1 Sites. LHCOPN, April 4th, 2006. Joe Metzger, ESnet Engineering Group, Lawrence Berkeley National Laboratory.

1 Services to the US Tier-1 Sites, LHCOPN, April 4th, 2006. Joe Metzger, ESnet Engineering Group, Lawrence Berkeley National Laboratory

2 Outline
Next Generation ESnet
  – Requirements
  – Architecture
  – Studying Architectural Alternatives
  – Reliability
  – Connectivity
  – 2010 Bandwidth and Footprint Goal
ESnet Circuit Services
  – OSCARS
  – LHCOPN Circuits
      BNL
      FERMI

3 Next Generation ESnet
Current IP backbone contract expires at the end of 2007
  – Backbone circuits
  – Hub colocation space
  – Some site access circuits
Acquisition
  – Background research in progress
Implementation
  – Major changes may happen in 2007
No negative LHC impact
  – Should not change the primary LHCOPN paths
  – May change/improve some US Tier 1 to US Tier 2 paths

4 Next Generation ESnet Requirements
Greater reliability
  – Multiple connectivity at several levels:
      Two backbones: production IP and Science Data Network (SDN)
      Redundant site access links
      Redundant, high-bandwidth US and international R&E connections
  – Continuous, end-to-end monitoring to anticipate problems and assist in debugging distributed applications
Connectivity
  – Footprint to reach major collaborators in the US, Europe, and Asia
  – Connections to all major R&E peering points
  – Initial build-out that satisfies near-term LHC connectivity requirements
More bandwidth
  – Multiple-lambda-based network (SDN)
  – Scalable bandwidth
  – Initial build-out that satisfies near-term LHC bandwidth requirements

5 Next Generation ESnet Architecture
Main architectural elements and the rationale for each element:
1) A high-reliability IP core (e.g. the current ESnet core) to address
  – General science requirements
  – Lab operational requirements
  – Backup for the SDN core
  – A vehicle for science services
  – Full-service IP routers
2) Metropolitan Area Network (MAN) rings to provide
  – Dual site connectivity for reliability
  – Much higher site-to-core bandwidth
  – Support for both production IP and circuit-based traffic
  – Multiple interconnections between the SDN and IP cores
2a) Loops off the backbone rings to provide
  – Dual site connections where MANs are not practical
3) A Science Data Network (SDN) core for
  – Provisioned, guaranteed-bandwidth circuits to support large, high-speed science data flows
  – Very high total bandwidth
  – Multiple interconnections of the MAN rings for protection against hub failure
  – An alternate path for production IP traffic
  – Less expensive routers/switches
  – An initial configuration targeted at the LHC, which is also the first step toward the general configuration that will address all SC requirements
  – Other, as yet unknown, bandwidth requirements, met by adding lambdas
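One way to see how the "dual site connectivity" and "multiply connected cores" elements buy reliability is to model hubs and sites as a graph and check that no single hub failure disconnects a dual-homed site. The sketch below is purely illustrative: the hub names echo the slides, but the links are a toy stand-in, not the actual ESnet topology.

```python
from collections import defaultdict, deque

# Toy topology (NOT the real ESnet build-out): an IP core ring, an SDN core
# ring, the two cores multiply connected, and a MAN ring that dual-homes a
# site onto two different hubs.
links = [
    ("ip-sunnyvale", "ip-denver"), ("ip-denver", "ip-chicago"),
    ("ip-chicago", "ip-newyork"), ("ip-newyork", "ip-sunnyvale"),
    ("sdn-sunnyvale", "sdn-denver"), ("sdn-denver", "sdn-chicago"),
    ("sdn-chicago", "sdn-newyork"), ("sdn-newyork", "sdn-sunnyvale"),
    ("ip-sunnyvale", "sdn-sunnyvale"), ("ip-chicago", "sdn-chicago"),
    ("site-A", "ip-chicago"), ("site-A", "sdn-chicago"),
]

graph = defaultdict(set)
for a, b in links:
    graph[a].add(b)
    graph[b].add(a)

def reachable(src, dst, failed=frozenset()):
    """Breadth-first reachability check that skips failed nodes."""
    if src in failed or dst in failed:
        return False
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph[node] - seen:
            if nxt not in failed:
                seen.add(nxt)
                queue.append(nxt)
    return False

# No single hub failure should cut the dual-homed site off from a distant hub.
for hub in sorted(graph):
    if hub.startswith("site") or hub == "ip-sunnyvale":
        continue
    ok = reachable("site-A", "ip-sunnyvale", failed=frozenset({hub}))
    print(f"fail {hub:15s} -> site-A still reaches ip-sunnyvale: {ok}")
```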

6 ESnet Target Architecture: High-reliability IP Core
[Map: IP core ring over Seattle, Sunnyvale, LA, San Diego, Albuquerque, Denver, Chicago, Cleveland, New York, Washington DC, and Atlanta. Legend: primary DOE Labs, possible hubs, SDN hubs, IP core hubs.]

7 ESnet Target Architecture: Metropolitan Area Rings
[Map: metropolitan area rings added at the hub cities; same legend as the previous slide.]

8 ESnet Target Architecture: Loops Off the IP Core
[Map: loops off the backbone for sites not served by MAN rings, plus the CERN connection; same hub cities and legend as the previous slides.]

9 ESnet Target Architecture: Science Data Network
[Map: Science Data Network core ring over the same hub cities; same legend as the previous slides.]

10 ESnet Target Architecture: IP Core + Science Data Network Core + Metro Area Rings
[Map: combined view of the IP core, SDN core, metropolitan area rings, loops off the backbone, and international connections. Legend: Gbps circuits, production IP core, Science Data Network core, Metropolitan Area Networks, international connections, primary DOE Labs, possible hubs, SDN hubs, IP core hubs.]

11 Studying Architectural Alternatives
ESnet has considered a number of technical variations that could result from the acquisition process.
Dual-carrier model
  – One carrier provides the IP circuits and a second provides the SDN circuits
  – Physically diverse hubs, fiber, and conduit
  – Diverse fiber routes in some areas
Single-carrier model
  – One carrier provides both the SDN and IP circuits
  – Use multiple smaller rings to improve reliability in the face of partition risks: in the event of a dual cut, fewer sites are isolated because of the richer cross-connections
  – Multiple lambdas also provide some level of protection
  – May require additional engineering effort, colo space, and equipment to meet the reliability requirements
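The dual-cut argument can be made concrete with a small simulation: fail every pair of links in a single large ring and in a pair of smaller, cross-connected rings, then count how many nodes end up cut off from a chosen hub. The topologies below are toy examples, not the actual candidate designs.

```python
from collections import defaultdict, deque
from itertools import combinations

def worst_dual_cut(links, anchor):
    """Worst-case number of nodes cut off from `anchor` over all
    simultaneous failures of two links."""
    nodes = {n for link in links for n in link}
    worst = 0
    for cut in combinations(links, 2):
        graph = defaultdict(set)
        for a, b in links:
            if (a, b) not in cut:
                graph[a].add(b)
                graph[b].add(a)
        seen, queue = {anchor}, deque([anchor])
        while queue:
            for nxt in graph[queue.popleft()] - seen:
                seen.add(nxt)
                queue.append(nxt)
        worst = max(worst, len(nodes - seen))
    return worst

# One big 8-node ring vs. two 5-node rings that share a node and have an
# extra cross-connect between them.
big_ring = [(i, (i + 1) % 8) for i in range(8)]
two_rings = (
    [(i, (i + 1) % 5) for i in range(5)]            # west ring: nodes 0-4
    + [(i, i + 1) for i in range(4, 8)] + [(8, 4)]  # east ring: nodes 4-8
    + [(0, 8)]                                      # cross-connect
)

print("single large ring, worst dual cut isolates:", worst_dual_cut(big_ring, 0))
print("cross-connected smaller rings, worst dual cut isolates:", worst_dual_cut(two_rings, 0))
```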

12 Dual Carrier Model
[Map: the production IP core and the SDN core carried as separate carrier rings over the hub cities. Legend: primary DOE Labs, IP core hubs, possible hubs, SDN hubs, Gbps circuits, production IP core, Science Data Network core, Metropolitan Area Networks.]

13 Single Carrier Model
[Map: SDN and IP carried as different lambdas on the same fiber, arranged as multiple smaller rings through Seattle, Boise, Sunnyvale, Denver, Kansas City, Chicago, Cleveland, New York, Washington DC, Atlanta, Jacksonville, San Antonio, Albuquerque, and San Diego. Legend distinguishes router+switch sites, switch sites, sites on MAN rings, MAN connections, the lambda used for the IP core, and the lambdas used for the SDN core.]

14 Reliability
Reliability within ESnet
  – Robust architecture with redundant equipment to reduce or eliminate the risk of single or multiple failures
End-to-end reliability
  – Close planning collaboration with national and international partners
  – Multiple distributed connections with important national and international R&E networks
  – Support for end-to-end measurement and monitoring across multiple domains (perfSONAR)
      Collaboration between ESnet, GÉANT, Internet2, and the European NRENs
      Builds a measurement infrastructure for use by other monitoring and measurement tools
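perfSONAR provides the multi-domain framework for publishing and exchanging such measurements; the snippet below is only a stand-in showing the kind of lightweight end-to-end check a site might run on its own (a TCP connect-time probe toward a far-end host). The host names and port are placeholders, and this is not the perfSONAR API.

```python
import socket
import time

def connect_time(host, port=80, timeout=5.0):
    """Measure TCP connect time as a rough end-to-end health/RTT indicator."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None  # unreachable or timed out

# Placeholder endpoints; replace with real measurement hosts in the far domain.
targets = ["perf-host.example.net", "perf-host.example.org"]

for host in targets:
    rtt = connect_time(host)
    if rtt is None:
        print(f"{host}: unreachable")
    else:
        print(f"{host}: TCP connect in {rtt * 1000:.1f} ms")
```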

15 Connectivity
[Map of ESnet connectivity: the IP and SDN cores, metropolitan area networks, and international connections to GEANT (Europe), CERN, Asia-Pacific, GLORIAD, Australia, CANARIE, and AMPATH, plus high-speed cross-connects with Abilene gigapops and international peers. Legend: primary DOE Labs, IP core hubs, SDN hubs, Gbps circuits, production IP core, Science Data Network core, Metropolitan Area Networks, international connections.]

16 ESnet 2007 SDN+MANs Upgrade Increment
[Map of the planned 2007 SDN and MAN additions. Legend: ESnet IP core hubs and sub-hubs, new hubs, ESnet SDN/NLR switch/router hubs, ESnet SDN/NLR switch hubs, NLR PoPs, ESnet Science Data Network core (10G/link), CERN/DOE-supplied links (10G/link), international IP connections (10G/link). Hubs include Seattle, Portland, Boise, Ogden, Sunnyvale, LA, San Diego, Phoenix, Albuquerque, El Paso - Las Cruces, Denver, Dallas, San Antonio, Houston, Baton Rouge, Pensacola, Jacksonville, Atlanta, Raleigh, Tulsa, KC, Chicago, Cleveland, Pittsburgh, NYC, and Washington DC, with CERN-1/2/3 and GÉANT-1/2 connections.]

17 ESnet 2008 SDN+MANs Upgrade Increment
[Map of the planned 2008 additions; same legend and hubs as the 2007 map, with PPPL, GA, and ORNL-ATL added.]

18 ESnet 2009 SDN+MANs Upgrade Increment
[Map of the planned 2009 additions; same legend and hubs as the 2008 map.]

19 ESnet 2010 SDN+MANs Upgrade Increment (Up to Nine Rings Can Be Supported with the Hub Implementation)
[Map of the planned 2010 additions, highlighting SDN links added since the last presentation to DOE; same legend and hubs as the earlier upgrade maps.]

20 Bandwidth and Footprint Goal, 2010
[Map of the 2010 target: a 30 Gbps Science Data Network core, a 10 Gbps IP core, and 20+ Gbps metropolitan area rings (rising further in 2011 with an equipment upgrade), with international connections to CERN (30 Gbps), Europe (GEANT), Canada (CANARIE), Asia-Pacific, GLORIAD, Australia, and South America (AMPATH), plus high-speed cross-connects with I2/Abilene. Legend: primary DOE Labs, IP core hubs, possible new hubs, SDN hubs, production IP core, SDN core, MANs, international connections.]

21 OSCARS: Guaranteed Bandwidth Virtual Circuit Service
ESnet On-demand Secure Circuits and Advance Reservation System (OSCARS)
To ensure compatibility, the design and implementation are being done in collaboration with the other major science R&E networks and end sites:
  – Internet2: Bandwidth Reservation for User Work (BRUW); development of a common code base
  – GÉANT: Bandwidth on Demand (GN2-JRA3), Performance and Allocated Capacity for End-users (SA3-PACE), and Advance Multi-domain Provisioning System (AMPS); extends to the NRENs
  – BNL: TeraPaths, a QoS-enabled collaborative data sharing infrastructure for petascale computing research
  – GA: Network Quality of Service for Magnetic Fusion Research
  – SLAC: Internet End-to-end Performance Monitoring (IEPM)
  – USN: Experimental Ultra-Scale Network Testbed for Large-Scale Science
Its current phase is a research project funded by the Office of Science, Mathematical, Information, and Computational Sciences (MICS) Network R&D Program.
A prototype service has been deployed as a proof of concept:
  – To date, more than 20 accounts have been created for beta users, collaborators, and developers
  – More than 100 reservation requests have been processed
  – BRUW interoperability tests were successful
  – DRAGON interoperability tests are planned
  – GÉANT (AMPS) interoperability tests are planned
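To illustrate what a guaranteed-bandwidth reservation request carries, the sketch below shows a hypothetical client assembling the parameters such a service needs (endpoints, bandwidth, and a start/end time) and submitting them over HTTP. The URL, field names, and endpoints are invented for the example; this is not the actual OSCARS interface.

```python
import json
import urllib.request
from datetime import datetime, timedelta, timezone

# Hypothetical reservation request; field names and URL are illustrative only.
start = datetime.now(timezone.utc) + timedelta(hours=1)
reservation = {
    "source":         "bnl-lhcopn-edge.example.gov",   # placeholder endpoint
    "destination":    "cern-lhcopn-edge.example.ch",   # placeholder endpoint
    "bandwidth_mbps": 1000,
    "start_time":     start.isoformat(),
    "end_time":       (start + timedelta(hours=4)).isoformat(),
    "description":    "LHCOPN test transfer",
}

request = urllib.request.Request(
    "https://oscars.example.net/reservations",          # placeholder URL
    data=json.dumps(reservation).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

try:
    with urllib.request.urlopen(request, timeout=10) as response:
        print("reservation accepted:", response.read().decode())
except OSError as err:
    print("reservation request failed:", err)
```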

22 ESnet Virtual Circuit Service Roadmap
[Timeline: progression from dedicated virtual circuits to dynamic virtual circuit allocation, through dynamic provisioning of Multi-Protocol Label Switching (MPLS) circuits (Layer 3), interoperability between VLANs and MPLS circuits (Layers 2 and 3), Generalized MPLS (GMPLS), and interoperability between GMPLS circuits, VLANs, and MPLS circuits (Layers 1-3), leading from an initial production service to a full production service.]

23 ESnet Portions of LHCOPN Circuits
Endpoints are VLANs on a trunk
  – BNL and FERMI will each see 3 Ethernet VLANs from ESnet
  – CERN will see 3 VLANs on both interfaces from USLHCnet
These will be dynamic Layer 2 circuits using AToM
  – Virtual interfaces on the ends will be tied to VRFs
  – The VRFs for each circuit will be tied together using an MPLS LSP or LDP
  – Manually configured for now; dynamic provisioning of circuits with these capabilities is on the OSCARS roadmap for 2008
The USLHCnet portion will be static initially
  – USLHCnet may explore using per-VLAN spanning tree
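To make the VLAN-to-VRF-to-LSP structure concrete, the sketch below models the static setup as data: each circuit is one VLAN on the Tier-1-facing trunk, bound to a VRF on the ESnet edge router at each end and stitched across the core by an MPLS LSP. The VLAN IDs, router names, and VRF names are placeholders invented for illustration, not the real circuit identifiers.

```python
from dataclasses import dataclass

@dataclass
class LHCOPNCircuit:
    site: str         # Tier-1 site (BNL or FERMI)
    vlan: int         # VLAN ID on the site-facing trunk (placeholder value)
    site_pe: str      # ESnet edge router at the site end (placeholder name)
    uslhcnet_pe: str  # ESnet edge router facing USLHCnet (placeholder name)
    vrf: str          # VRF the virtual interface is bound to at each end

    def lsp_name(self) -> str:
        # The VRFs at the two ends are tied together across the core by an LSP.
        return f"lsp-{self.site_pe}-to-{self.uslhcnet_pe}-{self.vrf}"

# Three VLANs per Tier-1 site, as described on the slide (all values made up).
circuits = [
    LHCOPNCircuit("BNL",   3001, "pe-bnl-man",  "pe-uslhcnet", "lhcopn-bnl-1"),
    LHCOPNCircuit("BNL",   3002, "pe-bnl-man",  "pe-uslhcnet", "lhcopn-bnl-2"),
    LHCOPNCircuit("BNL",   3003, "pe-bnl-man",  "pe-uslhcnet", "lhcopn-bnl-3"),
    LHCOPNCircuit("FERMI", 3011, "pe-fnal-man", "pe-uslhcnet", "lhcopn-fnal-1"),
    LHCOPNCircuit("FERMI", 3012, "pe-fnal-man", "pe-uslhcnet", "lhcopn-fnal-2"),
    LHCOPNCircuit("FERMI", 3013, "pe-fnal-man", "pe-uslhcnet", "lhcopn-fnal-3"),
]

for c in circuits:
    print(f"{c.site}: VLAN {c.vlan} -> VRF {c.vrf} -> {c.lsp_name()}")
```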

24 Physical Connections

25 BNL LHCOPN Circuits

26 FERMI LHCOPN Circuits

27 Outstanding Issues
Is a single point of failure at the Tier 1 edges a reasonable long-term design?
Bandwidth guarantees in outage scenarios
  – How do the networks signal to the applications that something has failed?
  – How do sites sharing a link during a failure coordinate bandwidth utilization?
What expectations should be set for fail-over times?
  – Should BGP timers be tuned?
We need to monitor the backup paths' ability to transfer packets end-to-end to ensure they will work when needed.
  – How are we going to do that?
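On the fail-over-time question, one concrete input is the BGP hold timer: when a failure is not visible as a link-down event, the session is only declared dead after the hold time expires, and only then does traffic shift to the backup path. The numbers below use a common vendor default (60 s keepalive / 180 s hold) against an example of tuned timers; the tuned values and the convergence allowance are illustrative, not a recommendation.

```python
def worst_case_failover(keepalive_s, hold_s, convergence_s=30):
    """Rough upper bound on fail-over time when the failure is not detected
    as link-down: wait out the BGP hold timer, then allow routing to
    reconverge (convergence_s is an illustrative allowance, not a measurement)."""
    assert hold_s >= 3 * keepalive_s, "hold time is conventionally >= 3x keepalive"
    return hold_s + convergence_s

# Common vendor defaults vs. an example of tuned timers.
for label, keepalive, hold in [("default 60/180", 60, 180), ("tuned 10/30", 10, 30)]:
    print(f"{label}: up to ~{worst_case_failover(keepalive, hold)} s "
          "before traffic shifts to the backup path")
```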