Status and Trends in Networking at LHC Tier1 Facilities
Fermi National Accelerator Laboratory, U.S.A.; Brookhaven National Laboratory, U.S.A.; Karlsruhe Institute of Technology, Germany
CHEP 2012, New York, U.S.A., May 21-25, 2012
Presented by Andrey Bobyshev (Fermilab)

Motivations
Over a decade of preparations for the LHC, working on all aspects of LHC computing, and almost 3 years of operation.
Good cooperation between LHC centers on wide-area networking (LHCOPN, USLHCNET, ESnet, Internet2, GEANT) to support data movement; not so much on LAN issues.
Each site might have its own specifics, but can we identify any commonalities?
Would it be useful to exchange our experiences, ideas, and expectations? Are we on the same track with regard to general data center networking?

Authors:
US-BNL-T1 (ATLAS), Brookhaven National Laboratory, U.S.A.: John Bigrow
DE-KIT Tier1 (multi-LHC), Karlsruhe Institute of Technology, Germany: Bruno Hoeft, Aurelie Reymund
USCMS-Tier1, Fermi National Accelerator Laboratory, U.S.A.: Andrey Bobyshev, Phil DeMar, Vyto Grigaliunas

Our objectives:
Review the status of network architectures and access solutions
Analyze trends in:
10G end systems, 40/100G inter-switch aggregation
Network virtualization / sharing of resources
Unified fabrics, Ethernet fabrics, new architectures
Software-Defined Networking
IPv6
In our analysis we tried to be generic rather than specific to any particular institution.
Initially we planned to involve more Tier1 facilities; due to daily routine we had to lower our ambitions to a smaller team of volunteers.

Transition of Network Architecture
Jayshree Ullal (CEO of Arista Networks): "... we are witnessing a shift from multi-tier enterprises with north-south traffic patterns using tens/hundreds of servers, to two-tier cloud networks with east-west traffic patterns scaling across thousands of servers ..."

Typical architecture of an LHC Tier1

Typical bandwidth provisioned between elements of the Tier1 data model:
Very low oversubscription (3:1 or 2:1) towards servers; no oversubscription at the aggregation layer
QoS to ensure special treatment of selected traffic (dCache, tapes); a toy model of this follows below
Preferred access solution: "big" switches (C6509, BigIron, MLX) rather than ToR switches
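
The QoS point above can be made concrete with a toy model. The following is a minimal Python sketch of strict-priority queuing (the class name and traffic tags are our own illustration, not any site's actual configuration): tagged storage traffic such as dCache and tape streams is always dequeued ahead of best-effort traffic.

    import heapq
    from itertools import count

    # Lower number = higher priority; the tags are illustrative only.
    PRIORITY = {"tape": 0, "dcache": 1, "best-effort": 2}

    class StrictPriorityQueue:
        """Toy model of strict-priority egress scheduling on a switch port."""
        def __init__(self):
            self._heap = []
            self._seq = count()  # preserves FIFO order within a traffic class

        def enqueue(self, traffic_class, packet):
            prio = PRIORITY.get(traffic_class, PRIORITY["best-effort"])
            heapq.heappush(self._heap, (prio, next(self._seq), packet))

        def dequeue(self):
            if not self._heap:
                return None
            _, _, packet = heapq.heappop(self._heap)
            return packet

    q = StrictPriorityQueue()
    q.enqueue("best-effort", "web reply")
    q.enqueue("dcache", "dCache block")
    q.enqueue("tape", "tape migration record")
    print(q.dequeue())  # -> 'tape migration record' (highest priority first)

In a real deployment this behavior is expressed in switch QoS configuration rather than host code; the sketch only shows the scheduling semantics being asked for.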

Typical utilization of a LAN channel during data movement [charts: Channel 1, hourly; Channel 2, weekly]

Typical utilization of a LAN channel during data movement [chart]

Connecting 10G servers
Different scenarios:
Small deployments: 10G ToR switches
Large deployments: aggregation switches with high-capacity switching fabrics (220+ Gbps/slot)
The cost of connecting 10G servers is going down; a good bandwidth-provisioning situation now could change quickly (see the sketch below)
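
As a back-of-the-envelope illustration of that last point, the helper below (Python; the 220 Gbps/slot figure is taken from this slide, the server counts are assumed) shows how quickly a line card's oversubscription budget is consumed as 10G servers are added.

    def oversubscription(n_servers, server_gbps=10, slot_capacity_gbps=220):
        """Ratio of offered server load to fabric capacity for one line card slot."""
        offered = n_servers * server_gbps
        return offered / slot_capacity_gbps

    # 24 x 10G servers on a 220 Gbps/slot card:
    print(f"{oversubscription(24):.2f}:1")  # ~1.09:1, comfortably low
    # 48 x 10G servers on the same card:
    print(f"{oversubscription(48):.2f}:1")  # ~2.18:1, already at the 2:1..3:1 budget
    # A move to 40G NICs (server_gbps=40) would break the budget immediately.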

Converged/Unified fabrics [diagrams: storage in a generic DC vs. disk storage at LHC Tier1s]

Network Virtualization Technologies
VLANs and VPNs (long established)
Virtual Device Context (VDC)
Virtual Routing and Forwarding (VRF)
Ethernet fabrics (a new L2 technology; not virtualization as such, but enabling new virtualization capabilities, e.g. multiple L2 topologies)
A minimal VRF model is sketched below.
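
To illustrate the VRF entry in the list: each virtual routing instance keeps its own independent forwarding table, so the same destination can take a different path, or be unreachable, depending on the instance. A minimal Python model, with VRF names, prefixes, and interfaces invented for illustration:

    import ipaddress

    # One independent routing table per VRF; names and routes are illustrative.
    vrfs = {
        "lhcopn":  {ipaddress.ip_network("10.1.0.0/16"): "eth1"},
        "general": {ipaddress.ip_network("10.1.0.0/16"): "eth2",
                    ipaddress.ip_network("0.0.0.0/0"):   "eth0"},
    }

    def lookup(vrf, dst):
        """Longest-prefix match within a single VRF's table only."""
        addr = ipaddress.ip_address(dst)
        candidates = [net for net in vrfs[vrf] if addr in net]
        if not candidates:
            return None  # no route in this VRF: traffic stays isolated
        best = max(candidates, key=lambda net: net.prefixlen)
        return vrfs[vrf][best]

    print(lookup("lhcopn", "10.1.2.3"))   # eth1
    print(lookup("general", "10.1.2.3"))  # eth2 -- same prefix, different plane
    print(lookup("lhcopn", "8.8.8.8"))    # None -- no default route in this VRF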

Network Virtualization

Why are Tier1s interested in virtualization?
Apply security, routing, and other policies per logical network, globally, rather than per interface or device
Segmentation and management of resources
Creating different levels of service
Traffic isolation per application, user group, or service

Software-Defined Networking
LHC Tier1 facilities participate in early deployments and R&D
Usually at the perimeter, to access WAN circuits via a web GUI (ESnet OSCARS, GEANT AutoBAHN, Internet2 ION)
Not yet mature enough for deployment in a production LAN
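
The WAN circuit services above are driven through a web GUI, but they also expose programmatic interfaces; the exact APIs differ per service and are not reproduced here. The following is a purely hypothetical Python sketch (URL, endpoint, and field names are invented, NOT the real OSCARS/AutoBAHN/ION API) of what an automated bandwidth-reservation request from a Tier1 script might look like.

    import json
    import urllib.request

    # Hypothetical reservation endpoint -- for illustration only.
    RESERVATION_URL = "https://circuits.example.org/api/reservations"

    request_body = {
        "src": "fnal-t1-border",       # illustrative endpoint names
        "dst": "bnl-t1-border",
        "bandwidth_mbps": 10000,
        "start": "2012-05-21T00:00:00Z",
        "end":   "2012-05-22T00:00:00Z",
    }

    req = urllib.request.Request(
        RESERVATION_URL,
        data=json.dumps(request_body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))  # e.g. a reservation ID and circuit status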

IPv6
The experiments do not express any strong requirements for IPv6 yet
In each organization there is an IPv6 deployment effort
In the US, the OMB mandates native IPv6 support for all public-facing services by the end of FY2012; scientific computing is not within scope
We anticipate the appearance of IPv6-only Tiers in 2-3 years, which means IPv6 deployment needs to be accelerated
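
A practical first step toward the mandate above is verifying which public-facing services already resolve and answer over IPv6. A small sketch using only the Python standard library (the host name is a placeholder):

    import socket

    def ipv6_reachable(host, port=443, timeout=5.0):
        """True if host publishes an AAAA record and accepts a TCP connection over IPv6."""
        try:
            infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        except socket.gaierror:
            return False  # no AAAA record published
        for family, socktype, proto, _, sockaddr in infos:
            try:
                with socket.socket(family, socktype, proto) as s:
                    s.settimeout(timeout)
                    s.connect(sockaddr)
                    return True
            except OSError:
                continue  # try the next address, if any
        return False

    print(ipv6_reachable("www.example.org"))  # placeholder host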

Site News
BNL completed an upgrade of the Perimeter Defense Network to a pair of Nexus 7010s, providing a path to 100G from the perimeter inward
DE-KIT has deployed two Nexus 7010s at the core; a border router upgrade is planned to provision 100G capability for the external link to LHCOPN
DE-KIT has deployed 50 additional 10G fileservers to match the experiments' expectations of storage accessibility (internal/external)
Fermilab has reached an agreement with ESnet to establish a dedicated 100G link, along with several 10G waves, to the new 100G ESnet5 network; this project is planned to be completed this summer
Fermilab deployed one hundred ten 10G servers, primarily dCache nodes and tape movers

To conclude:
We would like to be more proactive in planning LAN network resources. For this we need to understand where we are at any given time and what to anticipate in terms of requirements, technology progress, and so on.
Exchange of information, ideas, and solutions between LHC centers might be useful.
If folks from other Tier1/2/3 sites are interested in joining this informal forum, feel free to contact any person listed at the beginning of this presentation.

END

Access solutions at LHC Tier1s [diagrams: a generic Data Center vs. an LHC Data Center]

Multi-plane Virtualized Architecture