Energy Sciences Network Enabling Virtual Science June 9, 2009


Energy Sciences Network: Enabling Virtual Science. June 9, 2009. Steve Cotter (steve@es.net), Dept. Head, Energy Sciences Network, Lawrence Berkeley National Lab

The Energy Sciences Network. The Department of Energy's Office of Science is one of the largest supporters of basic research in the physical sciences in the U.S. It directly supports the research of some 15,000 scientists, postdocs, and graduate students at DOE laboratories, universities, other Federal agencies, and industry worldwide, and it operates major scientific facilities at DOE laboratories in which the US and international research and education (R&E) community participates. Established in 1985, ESnet is the Department of Energy's science networking program, responsible for providing the network infrastructure that supports the missions of the Office of Science, enabling a new era in scientific discovery as we tackle global issues like climate change, alternative energy/fuels, and understanding the origins of the universe.

ESnet: Driven by Science. The networking needs of researchers are far different from those of commercial users. ESnet therefore regularly explores the plans and processes of major stakeholders to understand: the extreme data characteristics of instruments and facilities (how much data will be generated by instruments coming online over the next 5-10 years?), and the future process of science (how and where will the new data be analyzed and used, that is, how will the process of doing science change over 5-10 years?). SC Networking Requirements Workshops: two workshops a year, rotating through the BES, BER, FES, NP, HEP, and ASCR communities; workshop reports are at http://www.es.net/hypertext/requirements.html. ESnet also observes current and historical network traffic trends: what do the trends in traffic patterns predict for future network needs?

Science: Driven by Data. Scientific data sets are growing exponentially; the ability to generate data is exceeding our ability to store and analyze it. Simulation systems and some observational devices grow in capability with Moore's Law. Petabyte (PB) data sets will soon be common: climate modeling estimates put the next IPCC data set in the tens of petabytes; in genomics, JGI alone will have 0.5 petabytes of data this year, doubling each year; in particle physics, the LHC is projected to produce 16 petabytes of data per year; in astrophysics, LSST and others will produce 5 petabytes per year.

ESnet Challenges. ESnet faces several challenges. It needs to scale economically to meet the demands of exponential growth in scientific data and network traffic: an average of 72% year-over-year growth since 1990, with growth accelerating rather than moderating. Scientific collaborations are becoming more international, e.g. CERN, IPCC, the EAST Tokamak, etc. There has also been a fundamental change in traffic patterns. Pre-2005, traffic looked similar to commercial traffic: millions of small flows from email, web browsing, and video conferencing. Post-2005, large scientific instruments like the Large Hadron Collider at CERN and supercomputers with petaflops of computing power are generating a smaller number of very large data flows on the scale of tens of gigabytes per second. Today the top 1000 flows are responsible for 90% of network traffic.
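
As a rough illustration of what sustained 72% year-over-year growth implies (a back-of-the-envelope sketch; the starting volume and horizon are assumptions, not ESnet figures), the following compounds a traffic volume over a ten-year horizon:

```python
# Back-of-the-envelope projection of traffic growth at ~72% per year.
# The starting volume (PB/month) is an illustrative assumption, not an ESnet figure.
growth_rate = 0.72        # average year-over-year growth cited above
start_volume_pb = 4.0     # assumed accepted traffic in petabytes/month at year 0

for year in range(0, 11):
    volume = start_volume_pb * (1 + growth_rate) ** year
    print(f"year {year:2d}: ~{volume:8.1f} PB/month")

# At 72%/year, traffic grows roughly 225x over a decade ((1.72)**10 is about 227),
# which is why incremental upgrades to a fixed architecture cannot keep up.
```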

Solution: ESnet4. ESnet4 is a unique hybrid packet- and circuit-switched network infrastructure specifically designed to handle massive amounts of data. It combines the flexibility and resiliency of IP routed networks with the deterministic, high-speed capability of a circuit-switched infrastructure, using commercially available technologies to create two logically separate networks over which traffic seamlessly switches. The IP Network carries IP traffic over a single 10 Gbps circuit, and over it ESnet provides audio/videoconferencing and data collaboration tools. The Science Data Network is a circuit-switched core network consisting of multiple 10 Gbps circuits, connecting directly with other high-speed R&E networks and utilizing Layer 2/3 switches.

ESnet4 – June 2009: network map showing the ESnet4 IP and Science Data Network (SDN) topology; international peerings including SINet (Japan), Russia (BINP), CERN/LHCOPN (USLHCnet: DOE+CERN funded), GÉANT (France, Germany, Italy, UK, etc., plus GÉANT in Vienna via the USLHCNet circuit), CA*net4, GLORIAD (Russia, China), Korea (Kreonet2), Taiwan (TANet2, ASCGNet), AMPATH, CLARA and CUDI (S. America), Australia (AARNet), SingAREN, TransPAC2, KAREN/REANNZ, ODN Japan Telecom America, NLR-PacketNet, and Internet2; DOE laboratory and site connections (LBL, SLAC, JLAB, PPPL, Ames, ANL, StarLight, MAN LAN at 32 Avenue of the Americas, PNNL, BNL, ORNL, FNAL, LLNL, LANL, GA, Yucca, Bechtel-NV, IARC, INL, NSTec, Pantex, SNLA, DOE-ALB, Allied Signal KCP, SRS, NREL, DOE NETL, NNSA, ARM, ORAU, OSTI, NOAA); and a legend distinguishing lab links, MANs, NLR 10G, SDN 10G, 20G SDN, IP, and peering links, with IP routers, SDN routers, and optical nodes marked.

Solution: ESnet4. Key components tying the two networks together: ESnet developed and implemented the On-Demand Secure Circuits and Advance Reservation System (OSCARS), which spans both networks and allows scientists to request dedicated bandwidth to move large amounts of data (up to terabytes at a time) across multiple network domains. ESnet is also an active participant in the perfSONAR consortium, which is developing an open, modular infrastructure of services and applications that enables the gathering and sharing of network performance information and facilitates troubleshooting of problems across network domains.

OSCARS: Multi-Domain Virtual Circuit Service. OSCARS provides: guaranteed bandwidth with resiliency (user-specified bandwidth for primary and backup paths, requested and managed in a Web Services framework); traffic isolation (allowing high-performance, non-standard transport mechanisms that cannot co-exist with commodity TCP-based transport); traffic engineering for ESnet operations (enabling explicit paths to meet specific requirements, e.g. bypassing congested links or using higher-bandwidth, lower-latency paths); secure connections (circuits are "secure" to the edges of the network, i.e. the site boundary, because they are managed by the network control plane, which is highly secure and isolated from general traffic); and end-to-end, cross-domain connections between Labs and collaborating institutions.
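
As a minimal sketch of how a client might describe such a reservation (the field names and helper below are illustrative assumptions; the actual OSCARS 0.5 interface is a SOAP/Web Services API whose schema is not reproduced in these slides):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical representation of an OSCARS-style circuit reservation request.
# Field names are illustrative, not the actual OSCARS WSDL schema.
@dataclass
class CircuitRequest:
    source: str           # source endpoint (e.g. an ESnet-facing router interface)
    destination: str      # destination endpoint, possibly in another domain
    start_time: datetime  # when the guaranteed-bandwidth path should come up
    end_time: datetime    # when the reservation expires
    bandwidth_mbps: int   # requested guaranteed bandwidth
    description: str      # free-text tag for the science workflow

def submit_reservation(req: CircuitRequest) -> str:
    """Stand-in for the Web Services call a user agent would make to OSCARS.
    A real client would authenticate, send the request over SOAP/HTTPS, and
    receive back a reservation identifier it can later query or cancel."""
    print(f"Requesting {req.bandwidth_mbps} Mb/s {req.source} -> {req.destination}")
    return "es.net-1234"  # placeholder reservation id

if __name__ == "__main__":
    now = datetime.utcnow()
    rid = submit_reservation(CircuitRequest(
        source="fnal-mr1", destination="bnl-mr1",
        start_time=now, end_time=now + timedelta(hours=12),
        bandwidth_mbps=2000, description="LHC T1-T1 bulk transfer (example)"))
    print("reservation id:", rid)
```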

OSCARS 0.5 Architecture (1Q09): block diagram showing a Tomcat-hosted Web Browser User Interface and Web Service Interface in front of three modules connected by RMI: the OSCARS Core (Resource Scheduler, Path Computation Engine, and Path Setup modules, backed by the BSS DB), the Notification Broker (Manage Subscriptions, Forward Notifications, backed by the Notify DB), and AAA (Authentication, Authorization, Auditing, backed by the AAA DB).

OSCARS 0.6 Architecture (Target 3Q09): block diagram with the same Tomcat-hosted user and Web Service interfaces, Notification Broker, AAA, and Core Resource Scheduler, but with path computation moved out of the core into one or more pluggable PCE modules (Constrained Path Computations) and a separate Path Setup module with a Network Element Interface, all communicating over RMI and backed by the BSS, AAA, and Notify databases.

Modular PCE Function: diagram showing how a reservation R(S, D, Ts, Te, B, U) is handed to an ordered list of path computation engines PCE(1, 2, ..., n). The core passes the reservation, the PCE list, and the full topology graph G0(Ts, Te) to PCE1; each PCEi applies its constraints and passes a pruned topology Gi(Ts, Te) to PCEi+1 over RMI, so that PCEn receives Gn-1(Ts, Te) and produces the final constrained topology Gn(Ts, Te).
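
A minimal sketch of that chaining idea, under the assumption that each PCE is simply a function from (reservation, topology) to a pruned topology (the real PCEs are RMI services operating on a full network model; the PCE names below are invented for illustration):

```python
from datetime import datetime
from typing import Callable, Dict, List, Set, Tuple

# Topology modeled as adjacency: node -> set of (neighbor, available_bandwidth_mbps).
Topology = Dict[str, Set[Tuple[str, int]]]
# A reservation R(S, D, Ts, Te, B, U): source, destination, start, end, bandwidth, user.
Reservation = Tuple[str, str, datetime, datetime, int, str]
# A PCE maps (reservation, G_{i-1}) -> G_i, pruning what it cannot support.
PCE = Callable[[Reservation, Topology], Topology]

def bandwidth_pce(resv: Reservation, g: Topology) -> Topology:
    """Illustrative PCE: drop links that lack the requested bandwidth B."""
    _, _, _, _, bw, _ = resv
    return {n: {(m, cap) for (m, cap) in links if cap >= bw} for n, links in g.items()}

def policy_pce(resv: Reservation, g: Topology) -> Topology:
    """Illustrative PCE: a real policy PCE would consult AAA to remove links
    the requesting user U may not use; here every remaining link is allowed."""
    return g

def run_pce_chain(resv: Reservation, g0: Topology, pces: List[PCE]) -> Topology:
    """Apply PCE1..PCEn in order, refining the topology G0 -> G1 -> ... -> Gn."""
    g = g0
    for pce in pces:
        g = pce(resv, g)
    return g

if __name__ == "__main__":
    g0: Topology = {"A": {("B", 10000), ("C", 1000)}, "B": {("C", 10000)}, "C": set()}
    resv: Reservation = ("A", "C", datetime(2009, 6, 9), datetime(2009, 6, 10), 5000, "alice")
    print(run_pce_chain(resv, g0, [bandwidth_pce, policy_pce]))
    # Only the 10 Gb/s links survive; the final path is then computed on Gn.
```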

ESnet's Future Plans. ESnet has recently been designated to receive ~$70M in ARRA funds for an Advanced Networking Initiative to: build a prototype wide area network that addresses our growing data needs while accelerating the development of 100 Gbps networking technologies; build a network testbed facility for researchers and industry; and fund $5M in network research with the goal of near-term technology transfer to the production network.

100 Gbps Prototype Network & Testbed: map showing the proposed prototype topology, including the San Francisco Bay Area MAN (LBNL, SLAC, JGI, LLNL, SNLL, NERSC, SUNN, SNV1), the West Chicago MAN (FNAL, 600 W. Chicago, Starlight, ANL, USLHCNet), the Atlanta MAN (180 Peachtree, 56 Marietta, ORNL, SOX), and wide-area segments toward Washington DC, Nashville, Houston, and Chicago, with additional lab connections (LBL, SLAC, JLAB, PPPL, Ames, ANL, StarLight, MAN LAN at 32 Avenue of the Americas, PNNL, BNL, ORNL, FNAL) and a legend for lab links, MANs, NLR 10G, SDN 10/20G, and 10G IP links, with IP routers, SDN routers, and optical nodes marked.

Experimental Optical Testbed. The testbed will consist of advanced network devices and components assembled to give network and middleware researchers the capability to prototype ESnet services anticipated in the next decade. It is intended as a community network R&D resource: the experimental facility will be open to researchers and industry to conduct research activities. It will support multi-layer dynamic network technologies that can provide advanced services such as secure end-to-end on-demand bandwidth and circuits over Ethernet, SONET, and optical transport network technologies, along with the ability to test the automatic classification of large bulk data flows and move them onto a dedicated virtual circuit. It will also support network-aware application testing, providing opportunities for network researchers and developers of applications such as Grid-based middleware and cyber security services to exploit advanced network capabilities in order to enhance end-to-end performance and security. Finally, it targets technology transfer to production networks: ESnet, as host of the facility, will develop strategies to move mature technologies from testing mode to production service.
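
As a hedged illustration of the bulk-flow classification idea mentioned above (the thresholds, field names, and the notion of "placing" a flow on a circuit are assumptions for the sketch, not the testbed's actual mechanism):

```python
from dataclasses import dataclass

# Illustrative flow record, e.g. as might be derived from router flow exports.
@dataclass
class Flow:
    src: str
    dst: str
    rate_gbps: float   # observed transfer rate
    bytes_moved: int   # bytes seen so far

# Assumed thresholds: flows above these are treated as "bulk" science flows.
BULK_RATE_GBPS = 1.0
BULK_BYTES = 100 * 10**9  # 100 GB

def classify(flow: Flow) -> str:
    """Return 'bulk' for candidate large data flows, 'default' otherwise."""
    if flow.rate_gbps >= BULK_RATE_GBPS or flow.bytes_moved >= BULK_BYTES:
        return "bulk"
    return "default"

def place_flow(flow: Flow) -> str:
    """Sketch of the policy: bulk flows go to a dedicated virtual circuit,
    everything else stays on the routed IP network."""
    return "science-data-network-vc" if classify(flow) == "bulk" else "ip-network"

if __name__ == "__main__":
    f = Flow(src="fnal", dst="bnl", rate_gbps=4.2, bytes_moved=3 * 10**12)
    print(place_flow(f))  # -> science-data-network-vc
```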

In Summary. The deployment of 100 Gbps technologies is causing us to rethink our 'two network' strategy and the role of OSCARS, but we still believe that advance reservations and end-to-end quality-of-service guarantees have a role. With these next-generation networks, two opportunities exist: the ability to carry out distributed science at an unprecedented scale and with worldwide participation, and unforeseen commercial applications that will develop on an innovative and reliable infrastructure allowing people around the world, across multiple disciplines, to exchange large datasets and analyses efficiently.

Extra Slide - OSCARS Status. The community approach to supporting end-to-end virtual circuits in the R&E environment is coordinated by the DICE (DANTE, Internet2, CANARIE, ESnet) working group. Each organization potentially has its own InterDomain Controller approach (though the ESnet/Internet2 OSCARS code base is used by several organizations, flagged OSCARS/DCN below). The DICE group has developed a standardized InterDomain Control Protocol (IDCP) for specifying the setup of segments of end-to-end VCs; while there are several very different InterDomain Controller implementations, they all speak IDCP and support compatible data plane connections. The following organizations have implemented/deployed systems compatible with the DICE IDCP:
- Internet2 Dynamic Circuit Network (OSCARS/DCN)
- ESnet Science Data Network (OSCARS/SDN)
- GÉANT2 AutoBAHN system
- Nortel (via a wrapper on top of their commercial DRAC system)
- SURFnet (via use of the above Nortel solution)
- University of Amsterdam (OSCARS/DCN)
- LHCNet (OSCARS/DCN)
- LEARN (Texas RON) (OSCARS/DCN)
- LONI (OSCARS/DCN)
- Northrop Grumman (OSCARS/DCN)
- NYSERNet (New York RON) (OSCARS/DCN)
- DRAGON (U. Maryland/MAX) network
The following "higher level service applications" have adapted their existing systems to communicate via the user request side of the IDCP:
- LambdaStation (FermiLab)
- TeraPaths (Brookhaven)
- Phoebus (UMd)
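
The slides do not describe the IDCP mechanics, so the following is only a conceptual sketch of the per-domain segment setup idea, with invented class and method names; the real protocol is a Web Services exchange between InterDomain Controllers:

```python
from dataclasses import dataclass
from typing import Dict, List

# Conceptual sketch only: the names below are invented for clarity, not IDCP messages.
@dataclass
class VCRequest:
    src: str                # end-to-end source endpoint
    dst: str                # end-to-end destination endpoint
    bandwidth_mbps: int
    domain_path: List[str]  # ordered list of domains the VC must cross

class DomainIDC:
    """Stand-in for one domain's InterDomain Controller."""
    def __init__(self, name: str):
        self.name = name

    def setup_segment(self, req: VCRequest) -> None:
        """Provision this domain's segment of the end-to-end VC (stub)."""
        print(f"[{self.name}] reserving {req.bandwidth_mbps} Mb/s segment")

def setup_end_to_end(req: VCRequest, idcs: Dict[str, DomainIDC]) -> None:
    """Each domain sets up its own segment; compatible data planes are then
    stitched at the domain boundaries to form the end-to-end circuit."""
    for domain in req.domain_path:
        idcs[domain].setup_segment(req)

if __name__ == "__main__":
    req = VCRequest(src="bnl", dst="cern", bandwidth_mbps=1000,
                    domain_path=["esnet", "internet2", "geant2"])
    setup_end_to_end(req, {d: DomainIDC(d) for d in req.domain_path})
```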

Extra Slide - Production OSCARS. Modifications required by FNAL and BNL: the reservation workflow was changed, a notification callback system was added, and some parameters were added to the OSCARS API to improve interoperability with automated provisioning agents such as LambdaStation, TeraPaths, and Phoebus. Operational VC support: as of 12/2/08 there were 16 long-term production VCs instantiated, all of which support HEP; 4 VCs terminate at BNL, 2 of which support LHC T0-T1 (primary and backup), and 12 VCs terminate at FNAL. For the BNL and FNAL LHC T0-T1 VCs, apart from the ESnet PE routers at BNL (bnl-mr1.es.net) and FNAL (fnal-mr1.es.net), there are no common nodes (routers), ports (interfaces), or links between the primary and backup VCs. Short-term dynamic VCs: between 1/1/08 and 12/2/08 there were roughly 3650 successful HEP-centric VC reservations, with 1950 initiated by BNL using TeraPaths and 1700 initiated by FNAL using LambdaStation.