
1 100G Deployment@(DE-KIT)
Bruno Hoeft / KIT
Andreas Petzold / KIT
KIT – University of the State of Baden-Wuerttemberg and National Research Center of the Helmholtz Association
STEINBUCH CENTRE FOR COMPUTING - SCC
www.kit.edu

2 Agenda
History
- 100G between KIT and Jülich (2010)
- 100G between KIT and TU Dresden
- 100G between KIT and CERN
- SC13 – 100G from KIT to SC13 (Caltech booth)
100G @ (DE-)KIT
- Motivation
- DE-KIT deployment steps
- Upgrade sketch
- Parts involved in the upgrade
- Splitting into parts
100G Deployment@(DE-KIT) / CHEP 2015, 2015-04-13, Bruno Hoeft / Andreas Petzold (KIT)

3 History
2010 – KIT – Jülich (DFN/Huawei/Cisco/Gasline)
[Map: Huawei 100GE DWDM path over 440 km from KIT Karlsruhe via Heidelberg, Mannheim, Darmstadt, Gernsheim, Frankfurt, Mainz, Wiesbaden, Naurod, Koblenz, Dernbach, Bonn, Köln and Porz to FZ Jülich; max. 96.5 Gbps]

4 History
2010 – KIT – Jülich (DFN/Huawei/Cisco/Gasline)
2012 – Cisco Nexus 7000 M2 linecard EFT
- 2* 100G between two VDCs
- 10 km distance between the two VDCs (fiber drum)
- 42 10G host interfaces to fill the 200G between the VDCs (memory-to-memory transfers with iperf)
- Full Internet routing table applicable
2013 – KIT – TU Dresden: 97.63 Gbps
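As a back-of-the-envelope check (a sketch, not part of the slides), the 42 hosts with 10G NICs deliberately oversubscribe the 200G inter-VDC capacity, which is what a saturation test needs, since a single 10G TCP stream rarely holds line rate:

```python
# Sizing a memory-to-memory saturation test like the 2012 Nexus EFT setup.
# Figures from the slide: 2 x 100G between the VDCs, 42 hosts with 10G NICs.
import math

link_capacity_gbps = 2 * 100   # two 100G links between the VDCs
nic_speed_gbps = 10
hosts = 42

# Minimum hosts needed if every NIC ran at full line rate:
min_hosts = math.ceil(link_capacity_gbps / nic_speed_gbps)       # 20

# With 42 hosts the offered load is 420 Gbps, a 2.1x oversubscription,
# leaving headroom for streams that fall short of 10 Gbps:
oversubscription = hosts * nic_speed_gbps / link_capacity_gbps   # 2.1

print(min_hosts, oversubscription)
```
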

5 History
2010 – KIT – Jülich (DFN/Huawei/Cisco/Gasline)
2013 – Cisco Nexus 7000 M2 linecard EFT
2013 – KIT – TU Dresden
2013 – KIT – CERN
2013 – KIT – SC13 Caltech booth

6 KIT @ Caltech SC13 WAN Demo
Storage-to-storage file transfer via the FastDataTransfer (FDT) application
@KIT: 20 fileservers of different capacities, 10G NICs
@Caltech booth: 3 fileservers with 1 or 2 40G NICs each, SSD and hard disk storage

7 WAN – KIT → SC13
- DFN contributes a 100G link Karlsruhe – Amsterdam
- DFN PoP Karlsruhe: ECI 100G to DFN PoP Frankfurt
- @Frankfurt: peering to Géant
- Géant PoP Frankfurt to Géant PoP Amsterdam
- @Amsterdam: Géant peering via MX960 with SURFnet T4000
- @KIT: Nexus 7010 with 100G linecard (LR-4)
- ANA-100G (transatlantic link)
- @New York: MANLAN (Internet2), then via ESnet to SC13 Denver
[Diagram: Caltech SC13 booth with echostream3 (40G) and sc3/sc5 (40G)]

8 WAN – KIT → CERN
[Diagram: 100G WAN path; Caltech SC13 booth with echostream3 (40G) and sc3/sc5 (40G)]

9 WAN – KIT → CERN
Throughput: 70 Gbps measured at MonALISA, with peaks up to 75 Gbps
[Diagram: Caltech SC13 booth with echostream3 (40G) and sc3/sc5 (40G)]

10 Agenda
History
- 100G between KIT and Jülich (2010)
- Testbed topology overview
- SC13 – 100G from KIT to SC13 (Caltech booth)
100G @ (DE-)KIT
- Motivation
- DE-KIT deployment steps
- Upgrade sketch
- Parts involved in the upgrade
- Splitting into parts

11 DE-KIT / GridKa – German LHC Tier-1 status
[Topology: CERN (T0), FR-CCIN2P3 (T1), IT-INFN-CNAF (T1), NL-T1 (T1), Poland (T2) and FZU (Prague) (T2) connected to GridKa (Tier-1) via the KIT provider edge routers r-ir-cn-1-1 and r-ir-cn-1-2 and DFN DWDM]
Legend: 10G Ethernet; 10G VPN end-to-end
LHCOPN: T0 – T1
LHCONE: T1 – T[123]

12 Upgrade Motivation
Proposed upgraded data rate after LS1:
- Expected 500 MB/s ≈ 5 Gbps + additional experiment data → 2* 10GE LHCOPN
- Increased data rate of Tier-1 – Tier-1 communication
- Increased data rate between Tier-1 and Tier-[23]
Observed utilization:
- LHCOPN utilization ≈ 10 Gbps; the LHCOPN link was saturated even during LS1
- LHCONE utilization ≈ 15 Gbps (max. usage, DE-KIT to Tier-[123])
- General purpose Internet: max. usage 10 Gbps
(cf. "Status and Trends in Networking at LHC Tier1 Facilities", Andrey Bobyshev (FNAL), CHEP 2012)
Current links: 1* 10 GE GridKa → LHCOPN (T0) (10 GE); 3* 10 GE GridKa → DE-KIT to T1 (15 GE); 2* 10 GE GridKa → LHCONE (T[123]) (15 GE); Internet connection KIT (10 GE); 40 GE in total
Proposed upgrade: 20 GE + 50 GE + 20 GE → 70+20 GE
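The bandwidth arithmetic on this slide can be sanity-checked with a short sketch (not part of the talk; the grouping of the proposed figures follows the slide's "70+20 GE" total):

```python
# Sanity check of the slide's bandwidth figures.

# 500 MB/s expressed in Gbps: 500 * 8 bit/byte / 1000 = 4 Gbps,
# which the slide rounds up to "approx. 5 Gbps" for headroom.
expected_gbps = 500 * 8 / 1000        # 4.0

# Current LHC-related utilization listed on the slide (GE):
required_lhc_ge = 10 + 15 + 15        # 40: LHCOPN + T1 + LHCONE

# Proposed upgrade: 20 GE + 50 GE for LHC traffic, plus 20 GE general
# purpose Internet -- the slide's "70+20 GE".
proposed_lhc_ge = 20 + 50             # 70

print(expected_gbps, required_lhc_ge, proposed_lhc_ge)
```
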

13 Current GridKa deployment – migration to LHCONE
Former direct links, migrated to LHCONE (100G):
- DE-KIT to Poznan (1G)
- DE-KIT to IT-INFN-CNAF (10G)
- DE-KIT to FR-CCIN2P3 (10G)
- DE-KIT to NL-T1 (10G)
- DE-KIT to FZU (Prague) (10G)
Remaining: direct link DE-KIT to CERN (2* 10G) on the LHCOPN
[Topology: CERN (T0), FR-CCIN2P3 (T1), IT-INFN-CNAF (T1), NL-T1 (T1), Poland (T2), FZU (Prague) (T2), GridKa (Tier-1), KIT provider edge routers r-ir-cn-1-1/-2, DFN DWDM]

14 GridKa deployment after KIT 100G upgrade
Second deployment phase (2015):
- 1* 100G to Frankfurt
- 1* 100G to Erlangen
- "Traffic shaping" to 50 Gbps at each 100G link
- Border router of KIT receives the 100G uplinks
- Projects get WAN access through the KIT provider edge router
- The 2* 10G LHCOPN link will stay a separate VPN DE-KIT -- CERN
[Topology: CERN (T0), FR-CCIN2P3 (T1), IT-INFN-CNAF (T1), NL-T1 (T1), Poland (T2), FZU (Prague) (T2) via LHCONE]
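The slides do not say how the 50 Gbps shaping is configured; as an illustration only, rate limiting of this kind is commonly implemented as a token bucket. The parameters below (bucket depth, timings) are hypothetical:

```python
# Illustrative token-bucket rate limiter -- the mechanism typically behind
# "traffic shaping to 50 Gbps" on a router. Parameters are hypothetical;
# the slides do not describe the actual configuration.

class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps      # refill rate, bits per second
        self.burst = burst_bits   # bucket depth, bits
        self.tokens = burst_bits
        self.last = 0.0           # timestamp of last update, seconds

    def allow(self, packet_bits, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True   # conforms to the shaped rate: forward
        return False      # exceeds the shaped rate: queue or drop

# A 50 Gbps shaper with a 500 Mbit burst allowance:
shaper = TokenBucket(rate_bps=50e9, burst_bits=500e6)
assert shaper.allow(500e6, now=0.0)        # first burst drains the bucket
assert not shaper.allow(500e6, now=0.0)    # immediate second burst is held back
assert shaper.allow(500e6, now=0.010)      # after 10 ms the bucket has refilled
```

Real routers implement this in hardware per interface or per class, but the conformance logic is the same.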

15 100G @ KIT – upgrade perimeter
Areas to be included in the 100G upgrade:
- 2 border routers (one VSS) (provider edge) (2* KIT-CN): 6* 100GE interfaces
(WAN provider: DFN)

16 100G @ KIT – upgrade perimeter
Areas to be included in the 100G upgrade:
- 2 border routers (one VSS) (provider edge) (2* KIT-CN): 6* 100GE interfaces
- 2 border routers (one VSS) (provider edge) (2* KIT-CS): 14* 100GE interfaces
- DWDM upgrade: 4* 100GE transponders
Distance KIT Campus North – KIT Campus South: 10 km by air / 28 to 35 km by fiber
(WAN providers: DFN, Belwue)

17 100G @ KIT – upgrade perimeter
Areas to be included in the 100G upgrade:
- 2 border routers (one VSS) (provider edge) (2* KIT-CN): 6* 100GE interfaces
- 2 border routers (one VSS) (provider edge) (2* KIT-CS): 14* 100GE interfaces, 4* 100GE transponders
- Firewall: 22* 100GE interfaces, 6* 100GE transponders
- 4 backbone routers (2* VSS) (KIT core routers): 16* 100GE interfaces, 4* 100GE transponders

18 100G @ KIT – upgrade perimeter
Areas to be included in the 100G upgrade:
- 2 border routers (one VSS) (provider edge) (2* KIT-CN): 6* 100GE interfaces
- 2 border routers (one VSS) (provider edge) (2* KIT-CS): 14* 100GE interfaces, 4* 100GE transponders
- Firewall: 22* 100GE interfaces, 6* 100GE transponders
- 4 backbone routers (2* VSS) (KIT core routers): 16* 100GE interfaces, 4* 100GE transponders
- Routers of different projects (per project): 5* 100GE interfaces

19 Splitting the 100G upgrade into affordable parts
2013: Project LSDF – 100G link to BioQuant Heidelberg via Belwue
2014: Project GridKa – 100G link to LHCONE via DFN
2015/6: KIT-CN – 2* provider edge routers
2016: KIT-CS – 2* provider edge routers, DWDM?, firewall
2016/7: KIT core routers (4* routers), DWDM?

20 100G-WAN@KIT
Upgrade requires a new border router → market survey
Options:
- Cisco ISP router (CRS-3/ASR9000)
- Catalyst 6800 → too late, no 100G available yet
- Nexus N7K/7700 → is the Nexus WAN capable?
- Alcatel-Lucent / Arista / Brocade / Juniper / …
Limitations of the different router/switch platforms:
- Single stream 92 Gbps (history)
- 100G interface bandwidth limitation → single stream 12.75 Gbps
- Transceiver support
- Routing table: >1M IPv4 and IPv6 entries
- SNMP support → read + write
- sFlow/NetFlow → full/aggregated
- …
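A per-stream cap like the 12.75 Gbps quoted above is why bulk transfer tools run parallel streams over a 100G interface. A quick sizing sketch (the cap value is from the slide; the rest is illustration):

```python
# Some 100G linecards forward a single flow through one internal lane,
# capping it well below line rate (the slide quotes 12.75 Gbps). To fill
# the link, a transfer must run several streams in parallel.
import math

link_gbps = 100.0
per_stream_cap_gbps = 12.75   # platform limitation quoted on the slide

streams_needed = math.ceil(link_gbps / per_stream_cap_gbps)
print(streams_needed)   # 8 parallel streams to reach line rate under this cap
```
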

21 100G Deployment@(DE-KIT)
Bruno Hoeft / KIT
Andreas Petzold / KIT

22 Hosts @ KIT and SC13
LHC data transfer: KIT to Caltech booth @ SC13
Hosts @ KIT:
- 20 fileservers with 10GE NICs installed
- @8 fileservers the storage was InfiniBand attached
- @4 fileservers the storage was locally attached (6* SATA-600 HDD, RAID0)
Hosts @ SC13 (Denver):
- sc5: one 40 Gbps NIC + 2* LSI RAID0 with 8* Intel 200 GB SSDs each
- sc3: one 40 Gbps NIC + 2* LSI RAID0 with 8* OCZ Vertex4 256 GB SSDs each
- echo3: echostream high-performance host (http://www.echostreams.com/flachesan2.html), 2* 40 Gbps NICs + 48 SSDs connected to 3 RAID0 controllers
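A rough check that a host like sc5 can feed its 40G NIC from disk (the ~500 MB/s per-SSD sequential rate is an assumed figure, not from the slides):

```python
# Can sc5's storage feed its 40G NIC? Rough estimate.
# The per-SSD sequential throughput is an assumption, not a slide figure.
ssd_mb_s = 500        # assumed sequential read rate per SATA SSD
ssds_per_raid0 = 8    # per LSI RAID0 controller (from the slide)
raids = 2             # two RAID0 sets per host (from the slide)

storage_gbps = raids * ssds_per_raid0 * ssd_mb_s * 8 / 1000   # 64.0 Gbps
nic_gbps = 40

print(storage_gbps >= nic_gbps)   # the arrays comfortably outpace the 40G NIC
```
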

