Strawman LHCONE Point to Point Experiment Plan LHCONE meeting Paris, June 17-18, 2013.


2 This Strawman
This strawman is intended not so much as the experiment plan itself, but as a framework for developing that plan, though some suggestions are included.

3 Objectives
– Determine the readiness of NSI implementations to serve in production environments
– Fine-tune the user interface
– Determine if virtual circuits can contribute to "fixing" the site WAN–LAN problem:
   – by providing a direct connection to a (presumably) trusted collaborator
   – by drawing attention to the fact that there is a problem at the boundary
   – etc.
– Engage in mutual education with the sites
– Demonstrate a robust and workable multi-domain circuit setup capability
– Define and adapt/integrate/deploy the science applications (CMS PhEDEx, ATLAS PanDA) with circuit services

4 Approach
Phase I: WAN – WAN
– Implement and test the circuit service within the WAN and (maybe) aggregator environment
Phase II: Site boundary – Regional – WAN – Regional – Site boundary
– Terminate the circuits at the site boundary
– Both ATLAS and CMS expressed interest in being part of the initial proof-of-concept demo
Phase III: End-to-end circuits
– Use circuits to connect systems/clusters at one site directly to systems/clusters at another site
Phase IV: Full mesh of circuits
– Build a full mesh of static circuits whose bandwidth can be set by the site
– Need to be careful here that the approach will scale

5 Phase I: WAN – WAN
Objective
– Verify basic circuit functionality in the WANs, aggregators, regionals, and exchange points
Test plan
– TBD
Milestones
1. Draw up a test plan for the R&E domains to verify interoperating NSI implementations
2. Produce a test suite that WAN, regional, and exchange operators can use to verify their NSI implementations
   – Step one: define the functionality needed for the service to be useful
3. Identify systems within the domains of the WAN providers that can be used for testing
4. Conduct cross-domain circuit setup testing in the WAN provider environment
   – Agree on and document success criteria
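A test suite for milestone 2 could start from a state-machine check like the sketch below. The MockNSIClient and the five-step lifecycle are illustrative simplifications loosely modeled on the NSI Connection Service operations (reserve, commit, provision, release, terminate); they are not the real NSI API, which is a multi-state SOAP/web-services protocol.

```python
from dataclasses import dataclass

# Hypothetical, simplified NSI-style connection lifecycle used only to
# exercise a cross-domain test harness; real NSI has more states.
LIFECYCLE = ["reserve", "commit", "provision", "release", "terminate"]

@dataclass
class MockConnection:
    src: str
    dst: str
    bandwidth_mbps: int
    state: str = "initial"

class MockNSIClient:
    """Stand-in for a per-domain NSI agent (illustrative, not a real client)."""
    def __init__(self, domain):
        self.domain = domain
        self.connections = {}

    def request(self, conn_id, operation, **params):
        conn = self.connections.get(conn_id)
        if operation == "reserve":
            self.connections[conn_id] = MockConnection(state="reserved", **params)
        elif operation == "commit" and conn and conn.state == "reserved":
            conn.state = "committed"
        elif operation == "provision" and conn and conn.state == "committed":
            conn.state = "provisioned"   # data plane active
        elif operation == "release" and conn and conn.state == "provisioned":
            conn.state = "released"
        elif operation == "terminate" and conn:
            conn.state = "terminated"
        else:
            raise RuntimeError(f"{self.domain}: illegal {operation} in state "
                               f"{conn.state if conn else 'none'}")
        return self.connections[conn_id].state

def cross_domain_setup_test(domains, conn_id, src, dst, bandwidth_mbps):
    """Drive the full lifecycle through every domain on the path; any
    out-of-order transition raises, which is what the test suite should catch."""
    agents = [MockNSIClient(d) for d in domains]
    results = {}
    for op in LIFECYCLE:
        for agent in agents:
            if op == "reserve":
                results[agent.domain] = agent.request(
                    conn_id, op, src=src, dst=dst, bandwidth_mbps=bandwidth_mbps)
            else:
                results[agent.domain] = agent.request(conn_id, op)
    return results
```

Success criteria (milestone 4) could then be expressed as assertions on this harness, e.g. that every domain on the path ends in the terminated state and that illegal transitions are rejected.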

6 Phase II: Site boundary – Regional – WAN – Regional – Site boundary
Objective
– Use circuits to connect site infrastructure, but do not assume that automated circuit setup will extend into the site
Test plan
– Identify a set of sites and intervening networks that are willing and able to implement the P2P service
– The bandwidth used could be a portion of the bandwidth used for the VRF infrastructure today
– What to do with the circuit?
   – Connect it to a statically defined site VLAN that could be used to connect an end system
   – Connect it to a statically defined site VLAN that connects to an internal router interface, which could then route some of the hosts using the current VRF over the circuit
      – Routing issues vis-à-vis the LHCONE VRFs?

7 Phase II: Site boundary – … – Site boundary
Milestones
1. Michael Ernst and Tony Wildish will liaise with the experiments to expose the idea and solicit site participation, as well as on integrating an appropriate interface into the applications
2. Document for the site LAN engineers what is required to provide this capability on both the external- and internal-facing site border router interfaces
3. Determine a baseline of performance prior to implementing the circuit service experiment and prior to sites preparing for it; this may already exist in the LHC analysis systems' performance stats
4. Define experiments and metrics that will demonstrate the capability, ideally between a number of diverse sites: Tier 2, Tier 3, and geographically remote (on different continents)
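The second option on slide 6 (routing some VRF-served hosts over the circuit) amounts to a per-destination forwarding choice at the internal router. A minimal sketch of that decision logic, with hypothetical prefixes and interface names:

```python
import ipaddress

# Illustrative only: the remote prefix served by the circuit and the
# interface names are made up, not real LHCONE configuration.
CIRCUIT_PREFIXES = [ipaddress.ip_network("192.0.2.0/24")]

def next_hop(dst_ip, circuit_up=True):
    """Prefer the circuit VLAN for destinations it serves; otherwise
    (or if the circuit is down) fall back to the general LHCONE VRF."""
    addr = ipaddress.ip_address(dst_ip)
    if circuit_up and any(addr in net for net in CIRCUIT_PREFIXES):
        return "vlan-circuit"
    return "vrf-lhcone"
```

The fallback branch is the crux of the routing question on the slide: when the circuit goes away, traffic must revert cleanly to the VRF without breaking existing flows.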

8 Phase III: End-to-end circuits
Objective
– Demonstrate circuits that connect end systems within sites that have a virtual circuit infrastructure and inter-domain circuit set-up capability
– Determine the right level of API abstraction
– Try to address concerns that circuits are complex to deploy and debug
Test plan
– For API determination, continue the joint application and networking experts meetings at CERN
   – ALICE especially is interested in doing clever optimizations with information from the network and in co-scheduling resources

9 Phase III: End-to-end circuits
Milestones
1. Solicit experience from the entities that already have experience using circuit services: ANSE, DYNES, ESnet, Internet2, GÉANT, NRENs, etc.
2. Document for the site LAN engineers and the LHC resource administrators what is required to provide this capability in the interior of the site and at the end systems
3. Deploy and verify site hardware, software, and engineering capability
4. Define experiments and metrics that will demonstrate the capability, ideally between a number of diverse sites: Tier 2, Tier 3, and geographically remote (on different continents)
   – This might be done with the existing LHC analysis systems' performance monitors
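One candidate answer to the API-abstraction question in slide 8 is to have the application (a PhEDEx- or PanDA-like workflow) state only the transfer size and deadline, and let the service decide whether a circuit pays off. The function below is a hypothetical sketch of that abstraction level; the name, parameters, and thresholds are illustrative, not an agreed LHCONE interface:

```python
# Hypothetical application-facing helper: given what the workflow knows
# (data volume and deadline), decide whether to ask for a circuit and at
# what bandwidth. Setup overhead and minimum-useful-bandwidth values are
# illustrative assumptions.
def plan_transfer(size_gb, deadline_s, circuit_setup_s=60, min_circuit_gbps=1.0):
    """Return the bandwidth (Gbps) a circuit would need to meet the
    deadline, or None if the transfer should just use best-effort."""
    usable = deadline_s - circuit_setup_s
    if usable <= 0:
        return None                      # no time left after setup overhead
    gbps = size_gb * 8 / usable
    if gbps < min_circuit_gbps:
        return None                      # too small for a circuit to be worthwhile
    return round(gbps, 2)
```

Keeping the interface at this level hides circuit setup and teardown from the experiment software entirely, which addresses the "circuits are complex to deploy and debug" concern from the application side.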

10 Phase IV: Full mesh of circuits
Objective
– Build a full mesh of static circuits whose bandwidth can be increased or decreased based on application/workflow needs
Test plan
– TBD
Milestones
– Develop the test plan
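The scaling caveat from the Approach slide can be made concrete: a full mesh of N sites needs N(N-1)/2 circuits, and each site's N-1 static circuits must share its finite WAN uplink. A small sanity-check sketch (all numbers illustrative):

```python
# Back-of-the-envelope check on full-mesh scaling; uplink sizes here are
# hypothetical, not actual site capacities.
def mesh_circuit_count(n_sites):
    """Number of circuits in a full mesh of n_sites endpoints."""
    return n_sites * (n_sites - 1) // 2

def max_uniform_bandwidth_gbps(n_sites, site_uplink_gbps):
    """Largest equal per-circuit bandwidth if a site carves its n-1
    circuits out of a single uplink."""
    return site_uplink_gbps / (n_sites - 1)
```

With on the order of 50 participating sites the mesh is over a thousand circuits, and each site's equal per-circuit share shrinks as 1/(N-1), which is why the approach needs a careful scaling review before committing to it.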

11 Resources required
– Sites with sufficient WAN network capacity that meaningful VC experiments can be done (given that VCs will reduce the capacity available for best-effort traffic, or whatever else the physical circuit is used for)
– Sites with hardware and engineering resources sufficient to deploy an NSI domain, and willingness to devote the engineering effort needed to set up and conduct the experiments
– HEP workflow software environments willing to integrate an API that enables them to communicate their bandwidth requirements to the network, so that the bandwidth of the circuits can be determined

12 Evaluation and Results
– Were the objectives achieved in a way that is seen as a net gain?