
1 Main title GRID @ HEP in Greece Group info (if required) Your name ….

2 LHC HEP groups in Greece
AUTH – ATLAS – ?????
Demokritos – CMS – 6 senior physicists, 2 post-docs, 3 Ph.D. students
NTUA – ATLAS – ?????
UoA – ATLAS, CMS, ALICE – ???????
UoI – CMS – ???????

3 Current HEP Grid Resources
AUTH (Prof. Chariclia PETRIDOU): 2 Grid clusters (GR-01-AUTH, HG-03-AUTH), 300 cores, 70 TB storage; more than 30% allocated to the LHC/HEP VOs. The HG-03-AUTH cluster is part of the HellasGrid infrastructure, owned by GRNET.
Demokritos (Dr. Christos MARKOU): 1 Grid cluster (GR-05-Demokritos), 120 cores, 25 TB storage; dedicated to HEP VOs.
NTUA (Prof. Evagelos GAZIS): 1 Grid cluster (GR-03-HEPNTUA), 74 cores, 4.5 TB storage; dedicated to HEP VOs.
UoA - IASA (Prof. Paris SPHICAS): 2 Grid clusters (GR-06-IASA, HG-02-IASA), 140 cores (an additional 160 cores in the coming days), 10 TB storage (an additional 10 TB in the coming days); more than 30% allocated to the LHC/HEP VOs. The HG-02-IASA cluster is part of the HellasGrid infrastructure, owned by GRNET.
U. of Ioannina (Prof. Kostas FOUDAS): 1 Grid cluster (GR-07-UOI-HEPLAB), 112 cores, 200 TB storage; dedicated to the CMS experiment.

4 Expertise in Grid technology (1)
The IASA, AUTH, and Demokritos teams of the consortium:
– have a strong cooperation with GRNET
– are major stakeholders of the Greek National Grid Initiative (NGI_GRNET/HellasGrid)
– have operated production-level Grid sites since 2003
– have deep knowledge of the day-to-day operation of the distributed core Grid services (including Virtual Organization Management Systems, Information Systems, Workload Management services, Logical File Catalogs, etc.)
– are experienced in testing and certification of Grid services and middleware
– have expertise in datacenter/Grid monitoring
– have expertise in clustering, data management, and network monitoring

5 Expertise in Grid technology (2)
Participation in pan-European projects:
– CROSSGRID, GRIDCC, EGEE-I, EGEE-II, EGEE-III, EUChinaGrid, SEEGRID, and, since 6/2010, EGI
Additionally, the teams are responsible for running services at a national and/or international level:
– HellasGrid Certification Authority [https://access.hellasgrid.gr/]
– European Grid Application Database [http://appdb.egi.eu]
– Unified Middleware Deployment global repository [http://repository.egi.eu]

6 Expertise in Grid management
On behalf of GRNET, personnel of our teams have carried out regional- and/or national-level responsibilities/roles:
– EGI task leader for Reliable Infrastructure Provision
– EGI task leader for User Community Technical Services
– Deputy Regional Coordinator for operations in the South Eastern Europe region (4/2009-4/2010, EGEE-III)
– Country representative/coordinator for HellasGrid (5/2009-4/2010, EGEE-III)
– Manager of the Policy, International Cooperation & Standardization Status Report Activity of EGEE-III
– Coordinator of the Direct User Support group in EGEE-III

7 HEP data transfers (1)
CMS – PhEDEx (Physics Experiment Data Export): commissioned links for data transfers
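
A minimal sketch of how the status of such links could be inspected programmatically, assuming the public PhEDEx data service on cmsweb; the links call, its parameters, the response envelope, and the node name T2_GR_Ioannina are assumptions to be checked against the PhEDEx documentation, not confirmed details:

    # Hypothetical sketch: query the PhEDEx data service for the transfer
    # links reported into a given node. Base URL, call name, and node name
    # are assumptions, not verified against the live service.
    import json
    import urllib.request

    BASE = "https://cmsweb.cern.ch/phedex/datasvc/json/prod"

    def links_to(node):
        """Return the transfer links PhEDEx reports into `node`."""
        with urllib.request.urlopen(BASE + "/links?to=" + node) as resp:
            payload = json.load(resp)
        # datasvc responses are assumed wrapped in a {"phedex": {...}} envelope
        return payload["phedex"]["link"]

    for link in links_to("T2_GR_Ioannina"):  # placeholder node name
        print(link["from"], "->", link["to"], link.get("status"))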

8 HEP data transfers (2)
ATLAS – DQ2 (Don Quijote, second release)
– DQ2 is built on top of the Grid data transfer tools
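
A small sketch of how the DQ2 client tools could be driven from a script; the dq2-ls and dq2-get commands ship with the ATLAS DQ2 clients, while the dataset pattern below is a made-up placeholder:

    # Illustrative only: wrap the DQ2 command-line clients from Python.
    # Assumes the DQ2 client tools are set up in the environment.
    import subprocess

    def dq2_list(pattern):
        """List dataset names matching `pattern` via dq2-ls."""
        result = subprocess.run(["dq2-ls", pattern],
                                capture_output=True, text=True, check=True)
        return result.stdout.splitlines()

    def dq2_fetch(dataset, dest="."):
        """Download `dataset` by running dq2-get inside `dest`."""
        subprocess.run(["dq2-get", dataset], cwd=dest, check=True)

    for dsn in dq2_list("data10_7TeV*"):  # placeholder dataset pattern
        print(dsn)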

9 Installed HEP SW
CRAB (the CMS Remote Analysis Builder) is installed on the UI hosted at IASA
The GANGA front-end for job management is installed on the UI at AUTH
Almost all the sites of the consortium have the most up-to-date software installed
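
For illustration, a minimal GANGA job as one would type it inside a ganga session (GANGA pre-loads Job, Executable, and LCG there, so no imports are needed); the executable and its arguments are placeholders:

    # Minimal GANGA sketch: submit a trivial executable to the Grid.
    j = Job()
    j.application = Executable(exe='/bin/echo',
                               args=['hello from the Greek T2'])  # placeholder
    j.backend = LCG()   # submit through the gLite/LCG Grid backend
    j.submit()
    print(j.status)     # e.g. 'submitted', later 'running' / 'completed'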

10 Some indicative statistics
More than 580k jobs since 1/2005
More than 1.5M normalized CPU hours since 1/2005

11 Needs ….. What our HEP consortium needs in terms of Physics and Why … – [ChristosM]

12 Specs of a T2 site
Based on the CMS and ATLAS specifications, the minimum requirements to be covered for a T2 site are:
Computing: 5-6k HEP-SPEC06, approx. 500-600 cores (~70 nodes with dual quad-core CPUs)
Storage: > 250 TB of available disk space (approx. 300 TB of raw disk, with 1/6 redundancy from RAID)
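
A quick back-of-the-envelope check of these numbers; the figures come from the slide above, while the ~10 HEP-SPEC06 per core is an assumed per-core score typical of dual quad-core machines of the period:

    # Sanity-check the T2 sizing: cores from the HEP-SPEC06 target,
    # nodes from 8 cores per dual quad-core box, raw disk from the
    # 1/6 RAID redundancy.
    hs06_target = 5500            # midpoint of the 5-6k HEP-SPEC06 requirement
    hs06_per_core = 10            # assumed per-core benchmark score
    cores = hs06_target / hs06_per_core   # -> 550 cores
    nodes = cores / 8                     # -> ~69, i.e. the ~70 nodes quoted
    raw_disk_tb = 250 * 6 / 5             # 250 TB usable is 5/6 of raw -> 300 TB
    print(round(cores), "cores ~", round(nodes), "nodes,",
          raw_disk_tb, "TB raw disk")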

13 Proposed distributed T2 site
Distributed across two locations, in order to take advantage of the existing infrastructure and support teams.
Scenario 1: computing and storage infrastructure in both locations
– One Tier-2 with two sub-clusters
– Higher availability: redundancy in case of a site failure, and flexible maintenance windows
Scenario 2: computing and storage infrastructure decoupled
– One Tier-2 with one sub-cluster
– Split requirement on technical expertise: each support team focuses on specific technical aspects (storage or computing)
– Reduced need for overlapping manpower and for effort spent on homogeneity

14 Things to be discussed … (this slide is only for internal discussion/decisions during the EVO meeting on 18/6)
Offering a segment (e.g. 20%) of the resources to the SEE VO, to benefit the academic/scientific communities of the SEE VO.
Pledged resources will be provided/guaranteed by us (MoU maybe ???)
For example:
– Guaranteed 50k/year from the consortium, divided as:
50% for computing resources upgrade and/or expansion
40% for additional storage
10% for maintenance of the infrastructure (A/C, UPS, electrical infrastructure, …)
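
For clarity, the 50/40/10 split of the example pledge worked out explicitly (units as in the slide):

    # The proposed division of the 50k/year pledge from the slide above.
    pledge = 50_000
    for item, share in [("computing upgrade/expansion", 0.50),
                        ("additional storage", 0.40),
                        ("infrastructure maintenance", 0.10)]:
        print(f"{item}: {share * pledge:,.0f}/year")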

