Slide 1 – Title
KIT – University of the State of Baden-Wuerttemberg and National Research Center of the Helmholtz Association
Steinbuch Centre for Computing (SCC)
100G
Bruno Hoeft / KIT, Andreas Petzold / KIT

Slide 2 – Agenda
- History
  - 100G between KIT and Jülich (2010)
  - 100G between KIT and TU Dresden
  - 100G between KIT and CERN
  - SC13 – 100G from KIT to SC13 (Caltech booth)
- (DE-)KIT Motivation
- DE-KIT Deployment Steps
- Upgrade Sketch
- Components Involved in the Upgrade
- Splitting into Parts

Slide 3 – History
- 2010 – KIT – Jülich (DFN / Huawei / Cisco / GasLINE): 440 km, max. 96.5 Gbps
[Network map: Huawei 100GE DWDM path between KIT Karlsruhe (ms-kit1/ti-kit1) and FZ Jülich (ms-fzj1/hades-fzj1) via Heidelberg, Mannheim, Darmstadt, Gernsheim, Mainz, Wiesbaden, Naurod, Frankfurt, Koblenz, Dernbach, Bonn, Köln, Porz and Düsseldorf; segment lengths 17 to 96 km.]

Slide 4 – History
- 2010 – KIT – Jülich (DFN / Huawei / Cisco / GasLINE)
- 2012 – Cisco Nexus 7000 M2 linecard EFT
  - 2 × 100G between two VDCs
  - 10 km distance between the two VDCs (fiber drum)
  - 42 × 10G host interfaces to fill the 200G between the VDCs (memory-to-memory transfers with iperf; a sketch of such a run follows below)
  - Full internet routing table applicable
- 2013 – KIT – TU Dresden: 97.63 Gbps
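Filling 200G with 42 × 10G host interfaces means orchestrating many memory-to-memory tests in parallel. A minimal sketch of how such a run could be driven, assuming iperf3 is installed everywhere, passwordless SSH works, and an iperf3 server is already listening on every target; all host names are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Drive parallel memory-to-memory iperf3 tests across many host pairs.

Sketch under stated assumptions: iperf3 on all hosts, passwordless SSH,
iperf3 servers already listening on the targets. Host names are
hypothetical placeholders, not the testbed's real names.
"""
import json
import subprocess

SOURCES = [f"vdc1-host{i:02d}" for i in range(42)]  # 42 x 10G senders
TARGETS = [f"vdc2-host{i:02d}" for i in range(42)]  # matching receivers
DURATION = 60                                       # seconds per test

# Start all client runs concurrently; -P 4 uses four TCP streams per
# pair, -J requests machine-readable JSON output.
procs = [
    (src, dst, subprocess.Popen(
        ["ssh", src, "iperf3", "-c", dst, "-t", str(DURATION), "-P", "4", "-J"],
        stdout=subprocess.PIPE))
    for src, dst in zip(SOURCES, TARGETS)
]

total = 0.0
for src, dst, proc in procs:
    out, _ = proc.communicate()
    gbps = json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e9
    print(f"{src} -> {dst}: {gbps:5.2f} Gbps")
    total += gbps
print(f"aggregate: {total:.1f} Gbps")               # target: ~200 Gbps
```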

Slide 5 – History
- 2010 – KIT – Jülich (DFN / Huawei / Cisco / GasLINE)
- 2012 – Cisco Nexus 7000 M2 linecard EFT
- 2013 – KIT – TU Dresden
- 2013 – KIT – CERN
- 2013 – KIT – SC13 Caltech booth

Slide 6 – Caltech SC13 WAN Demo
- Storage-to-storage file transfer
- File transfer via FastDataTransfer (FDT); an invocation sketch follows below
- 20 file servers of different capacities, 10G each
- At the booth: 3 file servers with 1 or 2 × 40G NICs, SSDs and hard disks
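FDT runs as a Java server on the receiving end and a client on the sending end. A minimal sketch of driving one transfer from Python; the host and paths are hypothetical, and the flag names (-c for the target, -d for the destination directory, -P for parallel streams) follow the FDT documentation as I recall it, so verify them against your FDT version:

```python
#!/usr/bin/env python3
"""Drive a storage-to-storage FDT transfer from Python.

Sketch only: assumes fdt.jar is present on both ends and an FDT server
has already been started on the receiver with `java -jar fdt.jar`.
Host name and file paths are hypothetical.
"""
import subprocess

RECEIVER = "sc13-booth-server"       # hypothetical receiving host
FILES = ["/data/lhc/file1.root",     # hypothetical source files
         "/data/lhc/file2.root"]

cmd = ["java", "-jar", "fdt.jar",
       "-c", RECEIVER,               # connect to the receiving FDT server
       "-P", "8",                    # 8 parallel TCP streams
       "-d", "/storage/incoming",    # destination directory on the receiver
       *FILES]
subprocess.run(cmd, check=True)
```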

Slide 7 – WAN: KIT → SC13
- At the Caltech SC13 booth: echostream3 (40G), sc3 and sc5 (40G)
- DFN contributed a 100G link Karlsruhe – Amsterdam
- KIT: Nexus 7010 with 100G linecard (LR-4); ECI 100G to the DFN PoP Karlsruhe
- DFN PoP Karlsruhe: peering to GÉANT, on to the GÉANT PoP Frankfurt
- GÉANT: peering with SURFnet via an MX960
- ANA-100G transatlantic link to MAN LAN (Internet2), then via ESnet to SC13 in Denver

Slide 8 – WAN: KIT → CERN, 100G
[Diagram: 100G WAN path from KIT to CERN; at the Caltech SC13 booth echostream3 (40G) and sc3/sc5 (40G).]

Slide 9 – WAN: KIT → CERN, Throughput
- Measured with MonALISA: ~70 Gbps sustained, with peaks up to 75 Gbps (a minimal counter-based rate sketch follows below)
[Diagram as on slide 8: echostream3, sc3 and sc5 with 40G NICs at the Caltech SC13 booth.]
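MonALISA aggregates such rates from host and interface counters. As a minimal stand-in, throughput can be estimated by sampling byte counters and dividing the delta by the interval; this sketch reads Linux /proc/net/dev, with the interface name as an assumption for illustration:

```python
#!/usr/bin/env python3
"""Estimate NIC throughput by sampling /proc/net/dev byte counters.

A minimal stand-in for what a monitoring system like MonALISA reports;
the interface name below is an assumption for illustration.
"""
import time

def rx_tx_bytes(iface: str) -> tuple[int, int]:
    """Return (rx_bytes, tx_bytes) for one interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":")[1].split()
                return int(fields[0]), int(fields[8])  # rx_bytes, tx_bytes
    raise ValueError(f"interface {iface} not found")

IFACE, INTERVAL = "eth0", 1.0
rx0, tx0 = rx_tx_bytes(IFACE)
while True:
    time.sleep(INTERVAL)
    rx1, tx1 = rx_tx_bytes(IFACE)
    print(f"rx {(rx1 - rx0) * 8 / INTERVAL / 1e9:6.2f} Gbps   "
          f"tx {(tx1 - tx0) * 8 / INTERVAL / 1e9:6.2f} Gbps")
    rx0, tx0 = rx1, tx1
```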

Slide 10 – Agenda
- History
  - 100G between KIT and Jülich (2010)
  - Testbed Topology Overview
  - SC13 – 100G from KIT to SC13 (Caltech booth)
- (DE-)KIT Motivation
- DE-KIT Deployment Steps
- Upgrade Sketch
- Components Involved in the Upgrade
- Splitting into Parts

Slide 11 – DE-KIT / GridKa: German LHC Tier-1 Status
[Topology diagram: GridKa (Tier-1) behind the KIT provider edge routers r-ir-cn-1-1 and r-ir-cn-1-2, connected over DFN DWDM to CERN (T0), FR-CCIN2P3 (T1), IT-INFN-CNAF (T1), NL-T1 (T1), Poland (T2) and FZU Prague (T2).]
- Legend: 10G Ethernet; 10G end-to-end VPN
- LHCOPN: T0 – T1; LHCONE: T1 – T[123]

Slide 12 – Upgrade Motivation
- Proposed upgraded data rate after LS1: expected 500 MB/s ≈ 5 Gbps, plus additional experiment data → 2 × 10GE LHCOPN (sizing arithmetic sketched below)
- Increased data rate for Tier-1 – Tier-1 communication
- Increased data rate between Tier-1 and Tier-[23]
- LHCOPN utilization ≈ 10 Gbps; the LHCOPN link was saturated even during LS1
- LHCONE utilization: max. usage ≈ 15 Gbps (DE-KIT to Tier-[123])
- General-purpose internet traffic: max. usage 10 Gbps
- (cf. Andrey Bobyshev (FNAL), "Status and Trends in Networking at LHC Tier1 Facilities", CHEP 2012)

  Link                          Current    Used   Proposed
  GridKa → LHCOPN (T0)          1 × 10GE   10GE   20GE
  GridKa / DE-KIT → T1          3 × 10GE   15GE   50GE
  GridKa → LHCONE (T[123])      2 × 10GE   15GE   20GE
  + internet connection (KIT)              10GE   … GE
  Total                                    40GE
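The sizing behind the first bullet is simple unit arithmetic; a worked sketch, where the headroom factor is my own assumption, used only to show why the slide rounds 4 Gbps up to ≈ 5 Gbps and provisions 2 × 10GE:

```python
import math

# 500 MB/s expected T0 -> T1 rate after LS1 (figure from the slide).
raw_gbps = 500 * 8 / 1000            # 500 MB/s = 4 Gbit/s on the wire
headroom = 1.25                      # assumed protocol/burst headroom
needed_gbps = raw_gbps * headroom    # ~5 Gbps, the figure on the slide
# One 10GE link covers that; a second is provisioned for the
# "additional experiment data" mentioned on the slide.
links = math.ceil(needed_gbps / 10) + 1
print(f"{raw_gbps:.0f} Gbps raw -> ~{needed_gbps:.0f} Gbps with headroom "
      f"-> {links} x 10GE LHCOPN")
```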

Slide 13 – Current GridKa Deployment: Migration to LHCONE
- Direct link DE-KIT – Poznań (1G) → moved to LHCONE (100G)
- Direct link DE-KIT – IT-INFN-CNAF (10G) → moved to LHCONE (100G)
- Direct link DE-KIT – FR-CCIN2P3 (10G) → moved to LHCONE (100G)
- Direct link DE-KIT – NL-T1 (10G) → moved to LHCONE (100G)
- Direct link DE-KIT – FZU (Prague) (10G) → moved to LHCONE (100G)
- Direct link DE-KIT – CERN (2 × 10G) remains on the LHCOPN
[Topology diagram: GridKa (Tier-1) behind the provider edge routers r-ir-cn-1-1/r-ir-cn-1-2 and DFN DWDM, peering with CERN (T0), FR-CCIN2P3, IT-INFN-CNAF, NL-T1 (T1), Poland and FZU Prague (T2).]

Slide 14 – GridKa Deployment after the KIT 100G Upgrade
- Second deployment phase (2015): 1 × 100G to Frankfurt, 1 × 100G to Erlangen
- Traffic shaping to 50 Gbps on each 100G link (a token-bucket sketch of the idea follows below)
- The KIT border routers receive the 100G uplinks
- Projects get WAN access through the KIT provider edge routers
- The 2 × 10G LHCOPN link DE-KIT – CERN stays a separate VPN
[Diagram: LHCONE connecting CERN (T0), FR-CCIN2P3, IT-INFN-CNAF, NL-T1 (T1), Poland and FZU Prague (T2).]
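The slide does not show the router configuration, so this is only a generic illustration of the shaping idea: a token bucket that admits traffic at 50 Gbps regardless of the 100G line rate. The burst size and traffic pattern are arbitrary assumptions:

```python
#!/usr/bin/env python3
"""Token-bucket traffic shaping: the idea behind capping a 100G link
at 50 Gbps. Generic illustration only; the real shaping happens in
router hardware, and the burst size here is an arbitrary assumption."""

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # refill rate = the shaped rate
        self.capacity = burst_bits    # largest burst the bucket allows
        self.tokens = burst_bits
        self.last = 0.0               # timestamp of the previous update

    def allow(self, packet_bits: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False                  # packet would be queued or dropped

# Shape to 50 Gbps with an assumed 50 Mbit burst allowance; offer
# 1500-byte frames every 0.1 us (i.e. 120 Gbps) for 10 ms.
bucket = TokenBucket(rate_bps=50e9, burst_bits=50e6)
sent = sum(bucket.allow(1500 * 8, now=i * 1e-7) for i in range(100_000))
print(f"{sent} of 100000 full-size frames admitted in 10 ms")
```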

Slide 15 – KIT Upgrade Perimeter
Areas to be included in the 100G upgrade:
- 2 border routers (one VSS, provider edge) (2 × KIT-CN): 6 × 100GE interfaces
- Uplink: DFN

Slide 16 – KIT Upgrade Perimeter (cont.)
Areas to be included in the 100G upgrade:
- 2 border routers (one VSS, provider edge) (2 × KIT-CN): 6 × 100GE interfaces
- 2 border routers (one VSS, provider edge) (2 × KIT-CS): 14 × 100GE interfaces
- DWDM upgrade between KIT Campus North and Campus South: 4 × 100GE transponders
  (distance 10 km by air, 28 to 35 km by fiber)
- Uplinks: DFN, BelWü

Slide 17 – KIT Upgrade Perimeter (cont.)
Areas to be included in the 100G upgrade:
- 2 border routers (one VSS, provider edge) (2 × KIT-CN): 6 × 100GE interfaces
- 2 border routers (one VSS, provider edge) (2 × KIT-CS): 14 × 100GE interfaces, 4 × 100GE transponders
- Firewall: 22 × 100GE interfaces, 6 × 100GE transponders
- 4 backbone routers (2 × VSS, KIT core routers): 16 × 100GE interfaces, 4 × 100GE transponders

Slide 18 – KIT Upgrade Perimeter (cont.)
Areas to be included in the 100G upgrade:
- 2 border routers (one VSS, provider edge) (2 × KIT-CN): 6 × 100GE interfaces
- 2 border routers (one VSS, provider edge) (2 × KIT-CS): 14 × 100GE interfaces, 4 × 100GE transponders
- Firewall: 22 × 100GE interfaces, 6 × 100GE transponders
- 4 backbone routers (2 × VSS, KIT core routers): 16 × 100GE interfaces, 4 × 100GE transponders
- Routers of the individual projects (per project): 5 × 100GE interfaces

Slide 19 – Splitting the 100G Upgrade into Affordable Parts
- 2013: Project LSDF – 100G link to BioQuant Heidelberg via BelWü
- 2014: Project GridKa – 100G link to LHCONE via DFN
- 2015/16: KIT-CN – 2 × provider edge routers
- 2016: KIT-CS – 2 × provider edge routers, DWDM?, firewall
- 2016/17: KIT core routers (4 × routers), DWDM?

Slide 20 – Upgrade Requires a New Border Router → Market Survey
Options:
- Cisco ISP routers (CRS-3 / ASR 9000)
- Catalyst 6800 → too late, no 100G available yet
- Nexus N7K/7700 → is the Nexus WAN-capable?
- Alcatel-Lucent / Arista / Brocade / Juniper / …
Limitations of the different router/switch platforms (see the stream arithmetic below):
- Single stream 92 Gbps (history)
- 100G interface bandwidth limitation → single stream 12.75 Gbps
- Transceiver support
- Routing table: >1M IPv4 and IPv6 entries
- SNMP support: read + write
- sFlow/NetFlow: full/aggregated
- …
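A quick worked consequence of the per-stream caps named above; the stream counts are plain ceiling arithmetic:

```python
import math

# Consequence of per-stream bandwidth caps when filling a 100G link.
link_gbps = 100
for per_stream in (92, 12.75):   # per-stream limits from the survey above
    streams = math.ceil(link_gbps / per_stream)
    print(f"{per_stream:>6} Gbps per stream -> at least {streams} parallel "
          f"streams to fill {link_gbps}G")
# A 12.75 Gbps per-stream cap means >= 8 parallel flows are needed, so a
# single-flow application can never saturate the 100G interface.
```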

Slide 22 – SC13 (Denver): LHC Data Transfer KIT → Caltech SC13 Booth
KIT side:
- 20 file servers with 10GE NICs; on some the storage was InfiniBand-attached, on the others locally attached (6 × SATA-600 HDD, RAID 0)
Booth side (echostream high-performance hosts):
- sc5: one 40 Gbps NIC + 2 × LSI RAID-0 controllers, each with 8 × Intel 200 GB SSDs
- sc3: one 40 Gbps NIC + 2 × LSI RAID-0 controllers, each with 8 × OCZ Vertex4 256 GB SSDs
- echo3: 2 × 40 Gbps NICs + 48 SSDs on 3 RAID-0 controllers
(A back-of-the-envelope throughput estimate follows below.)
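Whether such a disk subsystem can feed its NIC is simple arithmetic; the per-device sequential rates below are typical figures I am assuming for illustration, not measurements from the demo:

```python
# Back-of-the-envelope check: can the storage feed the NIC?
# Per-device sequential rates are assumed typical values, not
# measurements from the SC13 demo.
def storage_gbps(n_devices: int, mb_per_s_each: float) -> float:
    """Aggregate sequential throughput of a RAID-0 set, in Gbit/s."""
    return n_devices * mb_per_s_each * 8 / 1000

sc5 = 2 * storage_gbps(8, 450)      # 2 controllers x 8 SATA SSDs @ ~450 MB/s
hdd = storage_gbps(6, 150)          # 6 SATA-600 HDDs @ ~150 MB/s
print(f"sc5 SSD arrays: ~{sc5:.0f} Gbps vs its 40G NIC")   # ~58 Gbps: NIC-limited
print(f"HDD fileserver: ~{hdd:.0f} Gbps vs its 10G NIC")   # ~7 Gbps: disk-limited
```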