
1 Swiss GRID Activities. Christoph Grab, Lausanne, March 2009

2 Global Grid Community: the global GRID community …

3 Grids in Switzerland (just some). Here we concentrate on HEP…

4 WLCG Hierarchy Model: Swiss Tier-2 at CSCS, with LHCb Tier-3, ATLAS Tier-3 and CMS Tier-3 below it.

5 Status of the Swiss Tier-2 Regional Centre

6 Swiss Tier-2: Facts and Figures (1)
The Swiss Tier-2 is operated by a collaboration of CHIPP and CSCS (the Swiss Centre of Scientific Computing of ETHZ), located in Manno (TI).
Properties:
- One Tier-2 for all three experiments (CMS, ATLAS and LHCb); it provides:
  - simulation for the experiments' communities (supplying the WLCG pledges)
  - end-user analysis for the Swiss community
  - support (operation and data supply) for the Swiss Tier-3 centres
- Standard Linux compute cluster "PHOENIX" (similar to other Tier-2s).
- Hardware setup increased incrementally in phases; technology choice so far: SUN blade centres with quad-core Opterons.
- Final size to be reached by ~2009/2010; NO tapes.

7 Swiss Tier-2: Cluster Evolution
Growth corresponds to the Swiss commitment of compute resources supplied to the experiments, according to the signed MoU with WLCG.
- In operation now: 960 cores, about 1600 kSI2k; 520 TB total storage.
- Last phase planned for Q4/09: about 2500 kSI2k and ~1 PB storage.
[Chart: cluster growth through Phase 0, Phase A, Phase B (operational) and Phase C (planned Q4/09)]
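From the figures above one can infer, as a back-of-the-envelope estimate not stated on the slide, the approximate per-core rating and the remaining growth:

\[ \frac{1600\ \mathrm{kSI2k}}{960\ \mathrm{cores}} \approx 1.7\ \mathrm{kSI2k/core}, \qquad \frac{2500\ \mathrm{kSI2k}}{1600\ \mathrm{kSI2k}} \approx 1.6, \qquad \frac{1\ \mathrm{PB}}{520\ \mathrm{TB}} \approx 1.9 \]

i.e. the planned final phase adds roughly another 60% of CPU capacity and nearly doubles the storage.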

8 Swiss LHC Tier-2 cluster "PHOENIX"
System Phase B operational since Nov 2008.
- CPUs: total of 1600 kSI2k; SUN SB8000P blade centres with AMD Opteron 2.6 GHz quad-core CPUs.
- Storage: 27 X4500 systems, net capacity of 510 TB.

9 Swiss Tier-2 usage (6/05-2/09)
Incremental CPU usage over the last 4 years: 4.5 x 10^6 hours = 512 CPU-years.
The Tier-2 is up and has been in stable operation for ~4 years, with continuous contributions of resources to the experiments.
Our Tier-2 size is in line with other Tier-2s (e.g. London T2).
[Plot: incremental CPU usage at CSCS-LCG2 by VO (CMS, ATLAS, LHCb), Phase A and Phase B marked]
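As a rough cross-check of the quoted total, assuming 24-hour days over a full year:

\[ \frac{4.5 \times 10^{6}\ \mathrm{CPU\text{-}hours}}{24 \times 365\ \mathrm{hours/year}} \approx 514\ \mathrm{CPU\text{-}years}, \]

in agreement with the ~512 CPU-years stated on the slide.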

10 Normalised CPU time per month (3/07-2/09)
Shares between VOs vary over time (production, challenges, …).
Spare cycles are given to other VOs (e.g. H1, theory (CERN), …).
[Plot: normalised CPU time per month at CSCS-LCG2 by VO: CMS, ATLAS, LHCb, H1, others]

11 Shares of normalised CPU per VO (3/05-2/09)
Shares between VOs are overall reasonably balanced.
[Plots: CPU share per VO (CMS, ATLAS, LHCb, H1, others) and reliability of CSCS-LCG2]

12 Swiss Tier-2: Facts and Figures (2)
Manpower for the Tier-2:
- Operation at CSCS: ~2.5 FTEs (IT experts, about 5 persons).
- Support of experiment specifics by scientists of the experiments; one contact person per experiment, in total ~2 FTE.
Financing:
- Hardware financed mainly through SNF/FORCE (~90%), with some contributions by the Universities, ETH and PSI.
- Operations and infrastructure provided by CSCS (ETHZ); additional support physicists provided by the institutes.
Network traffic:
- Routing via SWITCH: two redundant lines to CERN and Europe.
- Transfer rates of up to 10 TB/day reached from FZK (and CERN); FZK (Karlsruhe) is the associated Tier-1 for Switzerland.
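For scale, a sustained 10 TB/day corresponds to an average throughput of roughly 1 Gbit/s (taking 1 TB = 10^12 bytes):

\[ \frac{10 \times 10^{12}\ \mathrm{bytes} \times 8\ \mathrm{bit/byte}}{86\,400\ \mathrm{s}} \approx 0.93\ \mathrm{Gbit/s}, \]

comfortably within the 10 Gbps SWITCHlan links shown on the network-topology slides below.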

13 Swiss Tier-2: Facts and Figures (3)
Financing (hardware and service, no manpower):
- R&D financed by the institutes in 2004.
- Total financial contributions (2004-2008) for the incremental setup of the Tier-2 cluster, hardware only:
  - by Universities, ETH, EPFL and PSI: ~200 kCHF
  - by federal funding (FORCE/SNF): ~2.4 MCHF
- Planned investments:
  - in 2009: last phase, ~1.3 MCHF
  - from 2010 onwards: rolling replacements, ~700 kCHF/year
- Total investment up to Q1/2009 of ~2.6 MCHF; annual recurring costs expected (>2009) of ~700 kCHF.
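The quoted total follows directly from the two contributions listed above:

\[ 0.2\ \mathrm{MCHF} + 2.4\ \mathrm{MCHF} = 2.6\ \mathrm{MCHF}\quad\text{(hardware investment up to Q1/2009)}. \]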

14 Swiss Tier-2: Facts and Figures (2)
Manpower for the Tier-2:
- Operation at CSCS: ~2.5 FTEs (IT experts, about 5 persons).
- Support of experiment specifics by scientists of the experiments; one contact person per experiment, in total ~2 FTE.
Financing (HW and service, no manpower):
- Hardware financed mainly through SNF/FORCE (~90%), with some contributions by the Universities, ETH and PSI.
- Operations and infrastructure provided by CSCS (ETHZ).
Network traffic:
- Routing via SWITCH: two redundant lines from CSCS to CERN and Europe (SWITCH = Swiss academic network provider).
- FZK (Karlsruhe) is the associated Tier-1 for Switzerland; transfer rates of up to 10 TB/day reached from FZK (and CERN).

15 Swiss Network Topology
[Map: SWITCHlan backbone, dark fibre topology (October 2008), 10 Gbps links; T0 at CERN, T2 at CSCS, T1 at FZK]

16 Status of the Swiss Tier-3 Centres

17 Swiss Tier-3 Efforts
- ATLAS: operates the Swiss ATLAS Grid, a federation of clusters:
  - Bern uses the local HEP cluster and shares university resources.
  - Geneva operates a local cluster (in addition to the T2).
- CMS: ETHZ, PSI and UZH run a combined Tier-3, located at and operated by PSI IT.
- LHCb:
  - EPFL operates a new, large local cluster.
  - UZH uses the local HEP cluster and shares university resources.
Significant progress has been seen over the last year for all three experiments.
Close national collaboration between the Tiers:
- Tier-3 contacts are ALSO the experiments' site contacts for the CH Tier-2.
- Close contacts to the Tier-1 at FZK.

18 Swiss Network and Tiers Landscape
[Map: SWITCHlan backbone, dark fibre topology (October 2008), 10 Gbps links; T0 at CERN, T2 at CSCS, T1 at FZK, plus the Tier-3 sites: CMS Tier-3, ATLAS Tier-3 (Geneva), ATLAS Tier-3 (Bern), LHCb Tier-3 (EPFL), LHCb Tier-3 (UZH)]

19 Swiss Network and Tiers Landscape
[Map: same as slide 18]

20 Summary: Swiss Tier-3 Efforts (Q1/09), status 1.3.09

  Site                 | Cores           | CPU (kSI2k) | Storage (TB) | Comments
  ATLAS BE             | 30 + 300 shared | ~600        | 30           | GRID usage since 2005 for ATLAS production
  ATLAS GE             | 188             | 320         | 44           | Identical SW environment to CERN; direct line to CERN
  CMS (ETHZ, PSI, UZH) | 72              | ~250        | 105          | Operates a GRID storage element and user interface for direct GRID access
  LHCb EPFL            | 464             | ~800        | 36           | DIRAC pilot site; machines identical to those in the pit
  LHCb UZH             | shared          | 125         | 15           | MC production on a shared cluster
  Total Tier-3         |                 | ~2100       | 230          | cf. Tier-2: 1600 kSI2k, 520 TB

- Tier-3 capacities are of similar size in CPU to the Tier-2, with ~50% of its disk.
- Substantial investment of resources for MC production and local analysis.
- Note: CPU numbers are estimates; upgrades are in progress (more details in the backup slides).
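As a rough cross-check of these ratios, the table totals give

\[ \frac{\sim 2100\ \mathrm{kSI2k}}{1600\ \mathrm{kSI2k}} \approx 1.3, \qquad \frac{230\ \mathrm{TB}}{520\ \mathrm{TB}} \approx 0.44, \]

so the combined Tier-3 CPU is indeed of the same order as the Tier-2 pledge, and the Tier-3 disk is roughly half of it.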

21 Swiss non-LHC GRID Efforts (1)
Physics community:
- Theory: some use idle Tier-2 / Tier-3 resources (CERN, …).
- HEP neutrino community: own clusters, or CERN lxplus, …
- PSI: synergies of Tier-3 know-how for others, e.g. the ESRFUP project (collaboration with ESRF, DESY and SOLEIL for GRID at synchrotrons).
- Others use their own resources (smaller clusters, …).
Several other Grid projects exist in the Swiss academic sector:
- EU projects: EGEE-II, KnowARC, DILIGENT, CoreGrid, GridChem, …
- International projects: WLCG, NorduGRID, SEPAC, PRAGMA, …
- National projects: Swiss Bio Grid, G3@UZH, AAA/SWITCH, …
- Various local Grid activities (infrastructure, development, …): Condor campus grids, local university clusters, …

22 Swiss non-LHC GRID Efforts (2)
SWING, the Swiss national GRID association, founded by ETHZ, EPFL, the cantonal universities, the universities of applied sciences, and by CSCS, SWITCH, PSI, …
- Provides a platform for interdisciplinary collaboration to leverage the Swiss Grid activities.
- Represents the interests of the national Grid community towards other national and international bodies; aims to become the national NGI towards EGI.
- Activities are organised in working groups; HEP provides strong input, e.g. the ATLAS GRID is organised in a SWING working group.
- Strong involvement of CSCS and SWITCH.
See www.swiss-grid.org

23 Summary – Swiss Activities
Common operation of ONE single Swiss Tier-2:
- Reliably operates and delivers the Swiss pledges of computing resources to the LHC experiments since Q2/2005.
- Growth in size as planned; final size to be reached ~end 2009.
- Compares well in size with other Tier-2s.
Tier-3 centres strongly complement the Tier-2:
- They operate in close collaboration and profit from know-how transfer.
- The overall size of all Tier-3s is about 100% of the Tier-2 CPU and 50% of its disk.
HEP is (still) the majority community in GRID activity in CH.
We are prepared for PHYSICS!

24 Thanks to Tier-2 / Tier-3 Personnel
S. Gadomski, A. Clark (UNI Ge); S. Haug, H.P. Beck (UNI Bern); C. Grab (ETHZ) [chair CCB]; D. Feichtinger (PSI); Z. Chen, U. Langenegger (ETHZ); R. Bernet (UNIZH); P. Szczypka, J. Van Hunen (EPFL); P. Kunszt (CSCS); F. Georgatos, J. Temple, S. Maffioletti, R. Murri (CSCS); and many more …

25 Optional slides

26 CHIPP Computing Board
Coordinates the Tier-2 activities; representatives of all institutions and experiments:
A. Clark, S. Gadomski (UNI Ge); H.P. Beck, S. Haug (UNI Bern); C. Grab (ETHZ), chair CCB; D. Feichtinger (PSI), vice-chair CCB; U. Langenegger (ETHZ); R. Bernet (UNIZH); J. Van Hunen (EPFL); P. Kunszt (CSCS).

27 The Swiss ATLAS Grid
The Swiss ATLAS GRID federation is based on CSCS (T2) and the T3s in Bern and Geneva.
Total resources in 2009: ~800 cores and ~400 TB.

28 ATLAS Tier-3 at U. Bern (S. Haug)
Hardware in production:
- Local cluster: 11 servers, ~30 worker CPU cores, ~30 TB disk storage.
- Shared university cluster: ~300 worker CPU cores (in 2009).
Upgrade plans (Q4 2009):
- ~100 worker cores in the local cluster.
- Increased share on the shared cluster.
Usage:
- Grid site since 2005 (ATLAS production).
- Local resource for LHEP's analyses and simulations.
About 30% / 6% (CPU / disk) of the Tier-2 size.

29 ATLAS Tier-3 at U. Geneva (S. Gadomski)
Hardware in production:
- 61 computers: 53 workers, 5 file servers, 3 service nodes.
- 188 CPU cores in the workers; 44 TB of disk storage.
Upgrade plans: grid Storage Element with 105 TB (Q1 2009).
Advantages of the Geneva Tier-3:
- Environment like at CERN; latest ATLAS software via AFS.
- Direct line to CERN (1 Gbps).
- Popular with ATLAS physicists (~60 users).
About 20% / 9% (CPU / disk) of the Tier-2 size.

30 CMS: common Tier-3 at PSI (D. Feichtinger)
Common CMS Tier-3 for the ETH, PSI and UZH groups, in operation at PSI since Q4/2008.
- 1 Gbps connection between PSI and ZH.
- Operates a GRID storage element and user interface to give users direct GRID access; local production jobs.
- ~25 users now, growing.
- Upgrade in 2009 by a factor of 2 planned (+2 more X4500).

  Year        | 2008 | 2009
  CPU / kSI2k | 215  | 500
  Disk / TB   | 105  | 250

About 15% / 20% (CPU / disk) of the Tier-2 size.

31 LHCb: Tier-3 at EPFL (P. Szczypka)
Hardware and software:
- Machines identical to those in the LHCb pit: 58 worker nodes x 8 cores (~840 kSI2k); 36 TB of storage.
- Uses SLC4 binaries of the LHCb software and DIRAC development builds.
Current status and operation:
- EPFL is one of the pilot DIRAC sites.
- Custom DIRAC interface for batch access.
- Active development to streamline GRID usage.
- Aim to run official LHCb MC production.
About 50% / 7% (CPU / disk) of the Tier-2 size.

32 LHCb: Tier-3 at U. Zurich (R. Bernet)
Hardware in production:
- Zurich local HEP cluster: small Intel cluster for local LHCb jobs; CPU 125 kSI2k, disk ~15 TB.
- Shared Zurich "Matterhorn" cluster at the IT department of UZH: only used for Monte Carlo production (~25 kSI2k); replacement in Q3/2009 in progress.
Usage: local analysis resource; Monte Carlo production for LHCb.
About 10% / 5% (CPU / disk) of the Tier-2 size.

33 Summary: Swiss Tier-3 Efforts (Q1/09), status 1.3.09
[Backup copy of the summary table shown on slide 20]

34 Non-LHC GRID Efforts
- KnowARC: "Grid-enabled Know-how Sharing Technology Based on ARC Services and Open Standards"; next-generation Grid middleware based on NorduGrid's ARC.
- CoreGRID: European Network of Excellence (NoE) aiming at strengthening and advancing scientific and technological excellence in the area of Grid and peer-to-peer technologies.
- GridChem: the "Computational Chemistry Grid" (CCG), a virtual organisation that provides access to high-performance computing resources for computational chemistry (mainly US).
- DILIGENT: Digital Library Infrastructure on Grid Enabled Technology (6th FP).
- SEPAC: Southern European Partnership for Advanced Computing grid project.
- PRAGMA: Pacific Rim Application and Grid Middleware Assembly.

