
Supporting Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics

The ARRA ANI Network Testbed Project
September 29, 2009
Brian L. Tierney

Overview
Project Start Date: September 2009
Funded by ARRA for 3 years
–Hopefully will continue beyond ARRA
Designed, built, and operated by ESnet staff
1 of 3 ARRA “Advanced Network Initiative” (ANI) projects in the DOE
–ANI 100G Prototype
–ANI Testbed
–4 ANI research projects

ARRA/ANI Testbed Goals
Enable network, middleware, and application end-to-end R&D at 100G
Configurable
Breakable
–Isolated from the production network
Reservable
Easy to reset to a known state
A community network R&D resource
–Open to researchers and industry to conduct experiments

Multi-Layer Architecture Capability Planes
(architecture diagram)

Testbed Requirement: Support Network Research
Support research on:
–Data Plane
–Control Plane
–Management Plane
–AA Plane
–Service Plane
Ability to do multi-domain hybrid networking
–Support R&D on multi-layer and multi-domain control and signaling at layers 1-3
Ability to reserve and manage each component
Support protection and recovery research
Support interoperability testing of multi-vendor 100G network components
Provide access to all available monitoring data

Testbed Requirement: Support Middleware/Application Research
Ability to support end-to-end experiments at 100 Gbps
–Connect to Magellan systems at NERSC and ALCF
Ability to reserve and manage components (see the sketch below)
–End hosts / storage as well as network
Use virtual machine technology to support protocol and middleware research
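
The slide calls for reserving end hosts and storage along with network capacity. The minimal Python sketch below shows one way such a combined reservation request could be represented; the class, field names, and host labels are assumptions for illustration, not an actual testbed API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestbedReservation:
    """Illustrative reservation covering both network and end-host resources."""
    start: datetime
    end: datetime
    bandwidth_gbps: int                  # e.g. 10 now, 100 once hardware is available
    endpoints: tuple                     # e.g. ("NERSC-Magellan", "ALCF-Magellan")
    hosts: list = field(default_factory=list)   # VM-based test hosts to allocate
    reset_to_known_state: bool = True    # testbed goal: easy to reset between experiments

# Example request: an end-to-end run between the two Magellan sites (hypothetical names)
request = TestbedReservation(
    start=datetime(2010, 7, 1, 8, 0),
    end=datetime(2010, 7, 1, 20, 0),
    bandwidth_gbps=10,
    endpoints=("NERSC-Magellan", "ALCF-Magellan"),
    hosts=["test-host-1", "test-host-2"],
)
print(request)
```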

ANI 100G Prototype Network
The “100G Prototype” network will connect NERSC, ALCF, OLCF, and MANLAN at 100 Gbps
–Not for production ESnet traffic
Testbed will use 100G waves from the prototype network
Testbed will use 100G routers from the prototype network

Nationwide 100G Prototype Network
(map: 100G links among Sunnyvale, Chicago, Nashville, and NYC, connecting NERSC Magellan, ALCF / ANL, and OLCF / ORNL)

ANI Testbed “Nodes”
Each testbed node will consist of (see the data-model sketch below):
–DWDM device (layers 0-1): e.g. Infinera, Ciena, Adva, etc.; GMPLS-enabled
–Ethernet services router (layers 2-3): e.g. OSCARS-compatible Juniper, Cisco, etc.; MPLS-enabled
–Test and measurement hosts: virtual machine based test environment; 10G test hosts initially, eventually 100G from the Acadia 100G NIC project
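
As a small illustration of the node composition above, here is a sketch of how the per-node equipment could be modeled in software; the class and field names are hypothetical, and the vendor strings are simply the examples listed on the slide.

```python
from dataclasses import dataclass, field

@dataclass
class TestbedNode:
    """One ANI testbed node, modeled layer by layer (names are illustrative)."""
    site: str
    dwdm_device: str       # layers 0-1, GMPLS-enabled (e.g. Infinera, Ciena, Adva)
    services_router: str   # layers 2-3, MPLS-enabled, OSCARS-compatible (e.g. Juniper, Cisco)
    test_hosts: list = field(default_factory=list)  # VM-based, 10G NICs initially

node = TestbedNode(
    site="LBL",
    dwdm_device="Infinera (GMPLS-enabled)",
    services_router="Juniper (MPLS-enabled)",
    test_hosts=["vm-test-1", "vm-test-2"],
)
print(node)
```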

Phased Approach
Phase 1:
–4 nodes in a lab at LBL
–Multiple 10G waves between each node (> 40): a combination of Ethernet and SONET
–Full mesh connectivity
–Ability to simulate multiple domains
–MPLS-enabled routers
–Secure access via a gateway node from anywhere
–Scheduled via a simple calendar (a minimal conflict-check sketch follows below)
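
The “simple calendar” scheduling mentioned above could be as basic as a shared list of bookings plus an overlap check. The sketch below is a minimal illustration under that assumption; the function names and entries are hypothetical, not the actual scheduling software.

```python
from datetime import datetime

# Existing bookings: (researcher, start, end), kept in a simple shared calendar
calendar = [
    ("climate100", datetime(2010, 8, 2, 9), datetime(2010, 8, 2, 17)),
    ("100g-ftp",   datetime(2010, 8, 3, 9), datetime(2010, 8, 3, 17)),
]

def is_free(start: datetime, end: datetime) -> bool:
    """Whole-testbed booking: free only if the slot overlaps no existing entry."""
    return all(end <= s or start >= e for _, s, e in calendar)

def book(who: str, start: datetime, end: datetime) -> bool:
    """Add a booking on a first-come-first-served basis."""
    if is_free(start, end):
        calendar.append((who, start, end))
        return True
    return False

# A new request that collides with the climate100 slot is rejected
print(book("vnc-project", datetime(2010, 8, 2, 12), datetime(2010, 8, 2, 18)))  # False
```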

Phase 1: Tabletop Testbed in Lab
(diagram: four nodes N1, N2, N3, N4)
Notes:
–Can be configured in multiple ways, including the example above with 3 possible paths from N1 to N4 (enumerated in the sketch below)
–Can be used to simulate multiple domains
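
To make the “3 possible paths from N1 to N4” concrete, the sketch below enumerates loop-free paths in one assumed wiring of the four tabletop nodes (a ring plus a direct N1-N4 link, chosen so exactly three paths exist); the actual lab cabling may differ.

```python
# Assumed tabletop wiring: a ring N1-N2-N4-N3-N1 plus a direct N1-N4 link.
links = {
    "N1": ["N2", "N3", "N4"],
    "N2": ["N1", "N4"],
    "N3": ["N1", "N4"],
    "N4": ["N1", "N2", "N3"],
}

def simple_paths(src, dst, seen=()):
    """Depth-first enumeration of loop-free paths from src to dst."""
    if src == dst:
        yield (*seen, dst)
        return
    for nxt in links[src]:
        if nxt not in seen and nxt != src:
            yield from simple_paths(nxt, dst, (*seen, src))

for path in simple_paths("N1", "N4"):
    print(" -> ".join(path))
# N1 -> N2 -> N4, N1 -> N3 -> N4, N1 -> N4  (three possible paths)
```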

Phased Approach
Phase 2:
–Nodes are deployed on the WAN
–1 or more 10/100G Ethernet circuits between each node: segments will be upgraded to 100G as hardware becomes available
–Partial physical mesh connectivity: logical full mesh possible via VLANs (sketched below)
–Ability to simulate multiple domains
–MPLS-enabled routers
–Scheduled using an OSCARS-like system
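
The “logical full mesh via VLANs” idea can be illustrated by assigning one VLAN ID per site pair, so every pair gets its own point-to-point Ethernet segment even though the physical mesh is only partial. The site list and VLAN base below are assumptions for illustration.

```python
from itertools import combinations

sites = ["Sunnyvale", "Chicago", "Nashville", "NYC"]   # assumed Phase 2 node sites
BASE_VLAN = 3500                                       # arbitrary illustrative base ID

# One VLAN per unordered site pair -> a logical full mesh over the partial physical mesh
vlan_map = {pair: BASE_VLAN + i for i, pair in enumerate(combinations(sites, 2))}

for (a, b), vlan in vlan_map.items():
    print(f"{a} <-> {b}: VLAN {vlan}")
```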

Phase 2 Deployment Assumptions
The architecture below assumes the following comes out of the ESnet ANI RFPs:
–“Dim fiber” between Sunnyvale and NERSC
–“Dim fiber” between Chicago and ANL
–A single 100G service between Sunnyvale, Chicago, Nashville, and New York
–Dark fiber ring in Long Island connecting AofA to BNL
Note that the ANI architecture presented here will change if these assumptions are not true

ANI Prototype Network – current plan
(network diagram)

Testbed Access
Testbed will be accessible to anyone:
–DOE researchers
–Other government agencies
–Industry
Must submit a short proposal to the testbed review committee
–Committee will be made up of members from the R&E community and industry

Timeline: Phase 1
(milestone chart, Oct 2009 – Jan 2011)
–Project start
–Hardware selection
–Initial hardware ordered
–Testbed installed and configured
–Simple management software developed
–Initial ‘table top’ node hardware setup at LBL
–Review committee established
–Testbed available to DOE-funded researchers
–Hold Testbed Workshop
–Start accepting Testbed proposals

Timeline: Phase 2
(milestone chart, Oct 2010 – Jan 2012)
–WAN deployment planning based on ANI Prototype awards
–Connect Testbed to Magellan
–100G WAN Testbed available to researchers
–Testbed available to 1st group of non-DOE researchers
–Accept 2nd round of Testbed proposals
–Refine Testbed management software
–2nd round Testbed users get access
–OSCARS-like interface to schedule resources deployed

Testbed to support DOE ARRA-funded Research Projects
Climate 100 (LLNL, LBL, ANL)
–Project goal: scaling the ESG to 100G
–Testbed role: provide interconnect between “Magellan project” resources at ANL and NERSC
Advanced Network and Distributed Storage Laboratory (OSG: U Wisc, FNAL, etc.)
–Project goal: enhance Virtual Data Toolkit (VDT) data management tools to effectively utilize 100G networks
–Testbed role: provide interconnect between “Magellan project” resources at ANL and NERSC
100G NIC (Acadia and Univ New Mexico)
–Project goal: produce a host NIC capable of 100 Gbps
–Testbed role: provide a test environment for this device
100G FTP (BNL / Stony Brook University)
–Project goal: design 100G transport protocols and tools to enable 100G FTP server-to-client data transfer
–Testbed role: provide a 100G test environment

Testbed to support DOE Network Research Projects
“Resource Optimization in Hybrid Core Networks with 100G Links” (Univ Virginia)
–Project goal: develop methods for automatic identification and tagging of flows that should be moved from the IP network to a dynamic virtual circuit
–Testbed role: provide a test environment for validating these methods
“Integrating Storage Resource Management with Dynamic Network Provisioning for Automated Data Transfer” (BNL, LBL)
–Project goal: integration of dynamic virtual circuits into BeStMan (Berkeley Storage Manager / SRM v2.2)
–Testbed role: provide a 100G test environment for verifying this work
“Provisioning Terascale Science Apps using 100G Systems” (UC Davis)
–Project goal: advanced path computation algorithms for efficiency and resiliency (a toy sketch follows below)
–Testbed role: provide a control plane test environment for these algorithms using OSCARS
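
As a toy illustration of path computation with resiliency (the UC Davis topic above), the sketch below finds a cheapest working path and then a link-disjoint backup in a small assumed topology; it is not the project's actual algorithm, and the node names and weights are made up.

```python
import heapq

# Assumed link-weighted topology (weights could represent latency or load)
graph = {
    "SUNN": {"CHIC": 1, "NASH": 2},
    "CHIC": {"SUNN": 1, "NASH": 1, "NYC": 1},
    "NASH": {"SUNN": 2, "CHIC": 1, "NYC": 2},
    "NYC":  {"CHIC": 1, "NASH": 2},
}

def shortest_path(g, src, dst):
    """Plain Dijkstra; returns the cheapest path as a node list, or None."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        for v, w in g.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

primary = shortest_path(graph, "SUNN", "NYC")

# Remove the primary path's links (both directions), then recompute to get a
# link-disjoint backup path -- a simple form of resiliency-aware computation.
pruned = {u: dict(nbrs) for u, nbrs in graph.items()}
for a, b in zip(primary, primary[1:]):
    pruned[a].pop(b, None)
    pruned[b].pop(a, None)
backup = shortest_path(pruned, "SUNN", "NYC")

print("primary:", primary)   # ['SUNN', 'CHIC', 'NYC']
print("backup: ", backup)    # ['SUNN', 'NASH', 'NYC']
```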

Testbed to support DOE Network Research Projects (cont.)
Virtualized Network Control (ISI, ESnet, Univ New Mexico)
–Project goal: multi-layer, multi-technology dynamic network virtualization
–Testbed role: provide a control plane test environment for experiments using hybrid networking
“Sampling Approaches for Multi-domain Internet Performance Measurement Infrastructures to Better Serve Network Control and Management” (Ohio State Univ)
–Project goal: develop new network measurement sampling techniques and policies (a toy sketch follows below)
–Testbed role: provide a network environment for initial deployment
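
As a toy illustration of measurement sampling (the Ohio State topic above), the sketch below probes a random subset of host pairs each interval instead of the full mesh; the host names and sampling fraction are assumptions, not the project's actual technique.

```python
import random

# Hypothetical measurement points on the testbed
hosts = ["bnl-pt1", "lbl-pt1", "anl-pt1", "ornl-pt1", "fnal-pt1"]
all_pairs = [(a, b) for i, a in enumerate(hosts) for b in hosts[i + 1:]]

def sample_pairs(pairs, fraction=0.3, seed=None):
    """Pick a random subset of source/destination pairs to probe this interval."""
    rng = random.Random(seed)
    k = max(1, int(len(pairs) * fraction))
    return rng.sample(pairs, k)

# Each measurement interval probes ~30% of pairs instead of the full n*(n-1)/2 mesh
for src, dst in sample_pairs(all_pairs, fraction=0.3, seed=42):
    print(f"schedule throughput/latency test: {src} -> {dst}")
```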

For More Information Contact:

EXTRA SLIDES

Objectives
Provide an easily re-configurable, high-performance network research environment to enable researchers to accelerate the development and deployment of 100G networking through prototyping, testing, and validation of advanced networking concepts.
Provide experimental network environments for vendors, ISPs, and carriers to perform the interoperability tests necessary to implement end-to-end heterogeneous 100G networks.
Provide support for prototyping middleware and software stacks to enable the development and testing of 100G science applications.

Relation to the Magellan Project
High-speed storage resources for the ANI Testbed will be provided by the ARRA-funded Magellan Project
–DOE computing project funded at $33M
–Goal is 100G end-to-end
Magellan is:
–A research and development effort to establish a nationwide scientific mid-range distributed computing and data analysis testbed
–Consists of two sites: NERSC / LBNL and ALCF / ANL
–Will provide multiple tens of teraflops and multiple petabytes of storage, as well as appropriate cloud software tuned for moderate concurrency