Zeuthen, 23.04.2013 Computing Seminar Volker Gülzow, Kars Ohrenberg


1 Zeuthen, 23.04.2013 Computing Seminar Volker Gülzow, Kars Ohrenberg

2 Network Topology Hamburg

3 Topology

4 Zeuthen inbound

5 Zeuthen outbound


7 Gülzow/Ohrenberg, Zeuthen | Seite 7 Bandwidth DFN
>DFN is upgrading the optical platform of the X-WiN; the contract was awarded to ECI Telecom and migration work is currently underway
>High bandwidth capabilities: 88 wavelengths per fiber, up to 100 Gbps per wavelength, thus 8.8 Tbps per fiber! 1 Tbps switching fabric (aggregation of 10 Gbps lines onto a single 100 Gbps line)
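The per-fiber figure quoted above is just the product of the two capacities; a quick sanity check (Python):

```python
# Per-fiber capacity of the upgraded X-WiN optical platform,
# using the figures from the slide (88 wavelengths, 100 Gbps each).
wavelengths_per_fiber = 88
gbps_per_wavelength = 100

fiber_capacity_tbps = wavelengths_per_fiber * gbps_per_wavelength / 1000
print(fiber_capacity_tbps)  # 8.8 Tbps per fiber, as stated on the slide
```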

8 Growing WiN Capacities

9 Bandwidth DFN
>Significantly cheaper components for 1 Gbps and 10 Gbps -> reduced cost for VPN connections, new DFN pricing
>New DFN conditions starting: DESY's contract of 2 x 2 Gbps will go to 2 x 5 Gbps without additional costs
>New cost model for point-to-point VPNs:
1) Initial installation payment: 10 Gbps ~ , 40 Gbps ~ , 100 Gbps ~
2) Annual fee now depends on the distance:
Hamburg <> Berlin at ~ 20% of the current costs (for 10 Gbps)
Hamburg <> Berlin at ~ 80% of the current costs (for 40 Gbps)
Hamburg <> Karlsruhe at ~ 45% of the current costs (for 10 Gbps)
Hamburg <> Karlsruhe at ~ 150% of the current costs (for 40 Gbps)

10

11 Geant3 topology

12

13 Networking for LHC

14 LHC Computing Infrastructure >WLCG in brief: 1 Tier-0, 11 Tier-1s, ~140 Tier-2s, O(300) Tier-3s worldwide

15 The LHC Optical Private Network
>The LHCOPN definition: The LHCOPN is the private IP network that connects the Tier0 and the Tier1 sites of the LCG. The LHCOPN consists of any T0-T1 or T1-T1 link which is dedicated to the transport of WLCG traffic and whose use is restricted to the Tier0 and the Tier1s. Any other T0-T1 or T1-T1 link not dedicated to WLCG traffic may be part of the LHCOPN, provided the exception is communicated to and agreed by the LHCOPN community
>Very closed and restricted access policy
>No gateways

16 LHCOPN Network Map

17 Data transfers
>Global transfer rates are always significant ( Gb/s) – permanent on-going workloads
>CERN export rates driven (mostly) by LHC data export
>Plots by Ian Bird, CRRB, April 16, 2013 (panels: CERN, Tier 1s, Global transfers)

18 Resource usage: Tier 0/1 (by Ian Bird, April 16, 2013)

19 Resource use vs pledge (CCRC F2F 10/01; panels: CERN, Tier 1s)

20 Resource vs pledges: Tier 2 (by Ian Bird, April 16, 2013)

21 Connectivity (100 Gb/s): latency measured; no problems anticipated (by Ian Bird, April 16, 2013)

22 Computing Models Evolution
>The original MONARC model was strictly hierarchical
>Changes have been introduced gradually since 2010
>Main evolutions: meshed data flows (any site can use any other site as a source of data); dynamic data caching (analysis sites pull datasets from other sites on demand, including from Tier-2s in other regions); remote data access
>Variations by experiment
>LHCOPN only connects T0 and T1

23 LHC Open Network Environment
>With the successful operation of the LHC accelerator and the start of data analysis has come a re-evaluation of the computing and data models of the experiments
>The goal of LHCONE (LHC Open Network Environment) is to ensure better access to the most important datasets for the worldwide HEP community
>Traffic patterns have altered to the extent that substantial data transfers between major sites are regularly observed on the General Purpose Networks (GPN)
>The main principle is to separate LHC traffic from GPN traffic, thus avoiding degraded performance
>The objective of LHCONE is to provide entry points into a network that is private to the LHC T1/T2/T3 sites
>LHCONE is not intended to replace LHCOPN but rather to complement it

24 LHCONE Architecture

25 LHCONE VRF Map (from Bill Johnston, ESnet)

26 LHCONE: A Global Infrastructure

27 LHCONE Activities
>With the above in mind, LHCONE has defined the following activities:
>VRF-based multipoint service: a quick fix to provide multipoint LHCONE connectivity, with logical separation from the R&E GPN
>Layer 2 multipath: evaluate the use of emerging standards such as TRILL (IETF) or Shortest Path Bridging (SPB, IEEE 802.1aq) in WAN environments
>OpenFlow: there was wide agreement that SDN is the most probable candidate technology for LHCONE in the long term (but it needs more investigation)
>Point-to-point dynamic circuit pilots
>Diagnostic infrastructure: each site to have the ability to perform E2E performance tests with all other LHCONE sites

28 Software-Defined Networking (SDN)
>A form of network virtualization in which the control plane is separated from the data plane and implemented in a software application
>This architecture gives network administrators programmable, central control of network traffic without requiring physical access to the network's hardware devices
>SDN requires some method for the control plane to communicate with the data plane; one such mechanism is OpenFlow, a standard interface for controlling network switches
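The control/data-plane split can be pictured with a toy flow table: the controller installs match/action rules, and the switch only performs lookups. This is a conceptual sketch in plain Python, not the real OpenFlow protocol; the rule format and action strings are invented for illustration.

```python
# Illustrative SDN sketch: control plane installs rules, data plane matches.
flow_table = []

def install_rule(match, action, priority=0):
    """Control plane: push a match/action rule into the switch's flow table."""
    flow_table.append((priority, match, action))
    flow_table.sort(key=lambda rule: -rule[0])  # highest priority first

def forward(packet):
    """Data plane: return the action of the first matching rule."""
    for _, match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "send-to-controller"  # table miss: ask the control plane what to do

install_rule({"dst_ip": "192.0.2.10"}, "output:port2", priority=10)
print(forward({"dst_ip": "192.0.2.10"}))   # matches the installed rule
print(forward({"dst_ip": "198.51.100.5"}))  # table miss
```

The point of the sketch is that `forward` never decides policy itself; all decisions live in the rules the controller installed, which is the separation the slide describes.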

29 LHCONE VRF
>Implementation of multiple logical router instances inside one physical device (virtualized Layer 3)
>Logical control-plane separation between multiple clients
>VRF in LHCONE: regional networks implement VRF domains to logically separate LHCONE from other flows
>BGP peerings are used between domains and to end-sites
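The "multiple logical router instances" idea amounts to keeping one routing table per VRF on the same box, so a lookup in the LHCONE instance can never return a general-purpose route. A conceptual sketch (the prefixes and next-hop names are hypothetical, not real LHCONE data):

```python
import ipaddress

# One routing table per VRF instance on the same physical router.
tables = {
    "lhcone":  {ipaddress.ip_network("192.0.2.0/24"): "to-lhcone-peer"},   # hypothetical prefix
    "general": {ipaddress.ip_network("0.0.0.0/0"):    "to-gpn-upstream"},  # default route
}

def lookup(vrf, dst):
    """Longest-prefix match restricted to a single VRF's table."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in tables[vrf] if addr in net]
    if not matches:
        return None
    return tables[vrf][max(matches, key=lambda net: net.prefixlen)]

print(lookup("lhcone", "192.0.2.7"))    # found in the LHCONE instance
print(lookup("general", "192.0.2.7"))   # general instance only sees its default
print(lookup("lhcone", "203.0.113.1"))  # None: not announced in this VRF
```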

30 Multipath in LHCONE
>Multipath problem: how to use the many (transatlantic) paths at Layer 2 among the many partners, e.g. USLHCNet, GEANT, SURFnet, NORDUnet, ...
>Layer 3 (VRF) can use some BGP techniques (MED, AS padding, local preference, restricted announcements); this works in a reasonably small configuration, but it is not clear it will scale up to O(100) end-sites
>Some approaches to Layer 2 multipath: IETF: TRILL (TRansparent Interconnect of Lots of Links); IEEE: 802.1aq (Shortest Path Bridging)
>None of these L2 protocols is designed for the WAN! R&D needed
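The BGP techniques named above all work by steering the best-path decision: local preference ranks first, then AS-path length (which AS padding inflates), then MED. A simplified sketch of that tie-break order; real BGP has further steps, and the network names and AS numbers here are purely illustrative:

```python
# Simplified BGP best-path selection: prefer highest local preference,
# then shortest AS path, then lowest MED (real BGP has more tie-breakers).
def best_path(paths):
    return min(
        paths,
        key=lambda p: (-p["local_pref"], len(p["as_path"]), p["med"]),
    )

# Two hypothetical transatlantic paths to the same prefix.
paths = [
    {"via": "GEANT",    "local_pref": 100, "as_path": [64500, 64496], "med": 0},
    {"via": "USLHCNet", "local_pref": 200, "as_path": [64501, 64496], "med": 0},
]
print(best_path(paths)["via"])  # higher local preference wins
```

Padding the GEANT path's AS list would only matter if the local preferences were equal, which is why operators usually reach for local preference first.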

31 LHCONE Routing Policies
>Only the networks which are announced to LHCONE are allowed to reach LHCONE
>Networks announced by DESY: /24, /21, /24 (Tier-2 Hamburg); /21, /24 (Tier-2 Zeuthen); /22, /24, /24, /24 (NAF Hamburg); /23, /24, /24, /24 (NAF Zeuthen)
>e.g. networks announced by CERN: /16 but not
>Only these networks will be reachable via LHCONE
>Other traffic uses the public, general purpose networks
>Asymmetric routing should be avoided, as it will cause problems for traffic passing (public) firewalls
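The policy above is a simple membership test: traffic goes via LHCONE only if the destination falls inside a prefix announced to LHCONE, otherwise it takes the general purpose network. A sketch with Python's `ipaddress` module; the actual prefixes are elided in the transcript, so the ones below are hypothetical (RFC 5737 documentation ranges):

```python
import ipaddress

# Hypothetical stand-ins for prefixes announced to LHCONE.
announced = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def via_lhcone(dst):
    """True if the destination is inside an announced LHCONE prefix."""
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in announced)

print(via_lhcone("192.0.2.42"))   # inside an announced prefix -> LHCONE
print(via_lhcone("203.0.113.9"))  # not announced -> general purpose network
```

The asymmetric-routing warning follows directly: if the forward direction passes this test but the return path does not, the reply crosses a public firewall that never saw the outbound flow.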

32 LHCONE - the current status
>Currently ~100 network prefixes
>German sites currently participating in LHCONE: DESY, KIT, GSI, RWTH Aachen, Uni Wuppertal
>Europe: CERN, SARA, GRIF (LAL + LPNHE), INFN, FZU, PIC, ...
>US: AGLT2 (MSU + UM), MWT2 (UC), BNL, ...
>Canada: TRIUMF, Toronto, ...
>Asia: ASGC, ICEPP, ...
>Detailed monitoring via perfSONAR

33 LHCONE Monitoring

34 R&D Network Trends
>Increased multiplicity of 10 Gbps links in the major R&E networks: GEANT, Internet2, ESnet, various NRENs, ...
>100 Gbps backbones in place and transition now underway: GEANT, DFN, ...; CERN - Budapest 2 x 100G for the LHC remote Tier-0 center
>OpenFlow (software-defined switching and routing) taken up by much of the network industry and the R&E networks

35 WAN + LHCONE Infrastructure at DESY

36 Summary
>The LHC computing and data models continue to evolve towards more dynamic, less structured, on-demand data movement, thus requiring different network structures
>LHCOPN and LHCONE may merge in the future
>With the evolution of the new optical platforms, bandwidth will become more affordable
