Trial of the Infinera PXM
Guy Roberts, Mian Usman

LHC Workshop Recap
- Rather than maintaining distinct networks, the LHC community should aim to unify its network infrastructure
- Traffic should be aggregated onto a few high-capacity links
Concerns
- If the T1s and T2s upgrade to 100G, then the global infrastructure needs to follow
LHCONE Evolution
- LHCONE currently exists side by side with the general R&E infrastructure
- Traffic is segregated, but what is the real benefit?

LHCOPN
- NRENs provide LHC with point-to-point Layer 2 circuits
- The LHC centres have built a virtual routed network out of these circuits
- LHC centres provide network services to each other:
  - CERN provides unrestricted transit
  - Some centres provide limited transit
  - Some LHC centres peer with each other
- NRENs support the operation and management of the individual links
- LHC sites are responsible for network management (Layer 3 configuration), including operations, monitoring, troubleshooting, capacity planning, security management, AUP enforcement, etc.

LHCONE – as it exists today
- LHCONE is currently set up as an overlay VRF on existing NREN infrastructures, which are interconnected via regional networks and Open Exchanges
- NRENs provide the network, including core links and routers, as a virtual overlay on their regular infrastructure
- NRENs have peering or transit relationships with each other
- LHC centres are strictly users of the service
- Restrictions apply to the advertised IP space
- The LHCONE infrastructure includes dedicated access links, transatlantic links and some backbone links
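The restriction on advertised IP space can be sketched in a few lines: an LHCONE-style VRF only accepts route announcements that fall within prefixes a site has registered for the overlay, keeping general R&E traffic out. All prefixes below are hypothetical documentation ranges, not real LHCONE registrations.

```python
# Illustrative sketch of LHCONE-style VRF prefix filtering.
# The prefixes and their labels are invented examples (RFC 5737 ranges);
# the real registered prefix list is maintained by the LHCONE community.
import ipaddress

# Prefixes a site has registered for announcement into the LHCONE VRF
LHCONE_ALLOWED = [
    ipaddress.ip_network("192.0.2.0/24"),     # hypothetical Tier-1 storage subnet
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical Tier-2 compute subnet
]

def accept_into_vrf(prefix: str) -> bool:
    """Accept an announcement only if it falls within a registered range."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(allowed) for allowed in LHCONE_ALLOWED)

print(accept_into_vrf("192.0.2.128/25"))  # registered science subnet -> True
print(accept_into_vrf("203.0.113.0/24"))  # unregistered / general R&E -> False
```

In practice this policy is expressed as BGP prefix filters on the VRF peerings rather than in application code; the sketch only shows the containment check involved.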

Use case – LHCOPN/LHCONE
- LHCOPN: high-capacity TDM, but not flexible
- LHCONE: the flexibility of an L3 VPN, but no reserved bandwidth
- Can we give users the best of both solutions?

PXM – available 1Q 2015 (DTN-X family)
- Packet Switching Module
- 16x10GE or 16x1GE
- 1x100GE
- Built-in 200G packet switch
- Enables QoS
- Packet classification
- VLAN and QinQ
- MPLS-TP on the network side
- MEF services:
  - Point-to-point services: EP-LINE, EVP-LINE
  - Multipoint services: EP/EVP-LAN, EP/EVP-TREE
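The MEF service types listed above split along two axes: point-to-point versus multipoint topology, and port-based (EP) versus VLAN-multiplexed (EVP) handoff. A small lookup table makes the taxonomy explicit; the attribute names are illustrative, not MEF terminology.

```python
# Sketch of the MEF Carrier Ethernet service types named on the slide.
# "EP" services dedicate a whole port; "EVP" services multiplex several
# virtual connections on one port by VLAN. Attribute names are our own.
MEF_SERVICES = {
    "EP-LINE":  {"topology": "point-to-point",    "vlan_multiplexed": False},
    "EVP-LINE": {"topology": "point-to-point",    "vlan_multiplexed": True},
    "EP-LAN":   {"topology": "multipoint",        "vlan_multiplexed": False},
    "EVP-LAN":  {"topology": "multipoint",        "vlan_multiplexed": True},
    "EP-TREE":  {"topology": "rooted-multipoint", "vlan_multiplexed": False},
    "EVP-TREE": {"topology": "rooted-multipoint", "vlan_multiplexed": True},
}

def multipoint_services():
    """Names of all services that connect more than two endpoints."""
    return sorted(name for name, svc in MEF_SERVICES.items()
                  if svc["topology"] != "point-to-point")

print(multipoint_services())
```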

Open Transport Switch
- Open Transport Switch (OTS) provides an OpenFlow interface to the DTN-X
- OTS adds transport extensions to the OpenFlow interface
- Initially a REST-based interface
- On the Infinera roadmap for Q (R12); however, early versions are available now for evaluation
- Using OTS just to control OTN transport circuits is not that useful
- OTS would, however, make a useful API to allow big-science users to request EVP-LINE and EVP-LAN services via the PXM
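An experiment's application could drive such a REST interface directly. The sketch below assembles a provisioning request for an EVP-LINE service; the endpoint URL, port names and JSON fields are all hypothetical, since the actual OTS schema is not given in the slides.

```python
# Hypothetical sketch of requesting an EVP-LINE service through a
# REST API of the kind OTS exposes. The JSON fields and port naming
# are invented for illustration only.
import json

def build_evpline_request(src_port: str, dst_port: str, vlan: int, mbps: int) -> str:
    """Assemble the JSON body for a point-to-point EVP-LINE provisioning call."""
    body = {
        "service_type": "EVP-LINE",
        "endpoints": [
            {"port": src_port, "vlan": vlan},
            {"port": dst_port, "vlan": vlan},
        ],
        "bandwidth_mbps": mbps,
    }
    return json.dumps(body)

# An application would POST this body to a (hypothetical) OTS endpoint
# such as https://ots.example.net/api/v1/services with any HTTP client.
req = build_evpline_request("PXM-1/1", "PXM-2/3", vlan=100, mbps=10000)
print(req)
```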

[Diagram: packet-optimized EPL & EVPL services over a Layer 1 VPN across the GÉANT 500G fibre cloud, connecting NRENs, a Tier 1 site, the Wigner Data Centre, a Tier 2 campus and CERN; user service requests enter via an OF/REST interface]
- MEF-type EVP-LAN or EVP-Line services
- A pool of provisionable OTN bandwidth
- An OTS REST/OF API to allow experimenters' applications to manage connectivity

[Diagram: packet-optimized EPL & EVPL services over a Layer 1 VPN connecting Tier 1 sites through their NRENs and the GÉANT 500G fibre cloud]

[Diagram: packet-optimized EPL & EVPL services over a Layer 1 VPN connecting Tier 1 sites and Tier 2 campuses across the GÉANT 500G fibre cloud]

[Diagram: packet-optimized EPL & EVPL services over a Layer 1 VPN across the GÉANT 500G fibre cloud, connecting DE-KIT (via DFN), RAL (via JANET) and Tier 2 campuses to the LHCONE instances of NORDUnet, ESnet, Internet2 and CANARIE]
Sites reached via these LHCONE instances:
- NL-T1 (via SURFnet)
- NDGF (via NORDUnet)
- TRIUMF (via CANARIE/ANA)
- BNL (via ESnet)
- FERMI (via ESnet)

PXM evaluation and trial
- Field trial in Q
- Pre-release PXM cards could be trialled
Evaluation and service development
- Field-trial integration of the PXM and OTS
- Evaluate OTS and the PXM together

Questions??

GÉANT IP Backbone

GÉANT Dark Fibre Footprint