
1 CERN – June 2007. View of the ATLAS detector (under construction). 150 million sensors deliver data … 40 million times per second.

2

3 The ATLAS distributed data management software: Don Quijote 2 (DQ2)

4 The full ATLAS trigger rate is 780 MB/s, shared among 10 external Tier-1 sites(*), amounting to around 8 PetaBytes per year. The 'Tier-0 exercise' of the ATLAS Distributed Data Management project started in June 2007. On 6th August 2007 the first PetaByte of simulated data had been copied to Tier-1s worldwide. (*) ASGC in Taiwan, BNL in the USA, CNAF in Italy, FZK in Germany, CC-IN2P3 in France, NDGF in Scandinavia, PIC in Spain, RAL in the UK, SARA in the Netherlands and TRIUMF in Canada.
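As a rough cross-check of these numbers (a sketch, not from the slides: the duty cycle is an assumption, since the LHC does not take data year-round), 780 MB/s sustained for about a third of a year indeed comes to roughly 8 PetaBytes:

    # Back-of-the-envelope check of the slide-4 export figures.
    # The ~33% duty cycle is an assumption, not a number from the slides.
    RATE_MB_PER_S = 780                    # full trigger rate shared by the Tier-1s
    SECONDS_PER_YEAR = 365 * 24 * 3600
    DUTY_CYCLE = 0.33                      # assumed fraction of the year spent taking data

    volume_pb = RATE_MB_PER_S * SECONDS_PER_YEAR * DUTY_CYCLE / 1e9  # MB -> PB
    print(f"~{volume_pb:.1f} PB/year")     # ~8.1 PB/year, matching the slide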

5 Computing Model: central operations
Tier-0:
- Copy RAW data to Castor tape for archival
- Copy RAW data to Tier-1s for storage and reprocessing
- Run first-pass calibration/alignment (within 24 hrs)
- Run first-pass reconstruction (within 48 hrs)
- Distribute reconstruction output (ESDs, AODs & TAGs) to Tier-1s
- Keep current versions of ESDs and AODs on disk for analysis
Tier-1s:
- Store and take care of a fraction of RAW data
- Run "slow" calibration/alignment procedures
- Rerun reconstruction with better calib/align and/or algorithms
- Distribute reconstruction output to Tier-2s
Tier-2s:
- Run simulation
- Run calibration/alignment procedures
- Keep current versions of AODs on disk for analysis
- Run user analysis jobs
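As an illustration only (the names are assumptions, and this is not how DQ2 itself is written), the division of labour above can be read as a simple routing table from tasks to tiers:

    # Slide-5 responsibilities as a lookup table (illustrative names only).
    RESPONSIBILITIES = {
        "Tier-0": {"archive RAW to tape", "copy RAW to Tier-1s",
                   "first-pass calibration", "first-pass reconstruction",
                   "distribute ESD/AOD/TAG"},
        "Tier-1": {"store RAW fraction", "slow calibration",
                   "reprocessing", "distribute output to Tier-2s"},
        "Tier-2": {"simulation", "calibration", "host current AODs",
                   "user analysis"},
    }

    def tiers_for(task):
        """Return every tier whose mandate includes the given task."""
        return [tier for tier, tasks in RESPONSIBILITIES.items() if task in tasks]

    print(tiers_for("user analysis"))      # ['Tier-2']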

6 Dario Barberis: ATLAS Activities at Tier-2s, Tier-2 Workshop, 12-14 June 2006. Computing Model and Resources:
- The ATLAS Computing Model is still the same as in the Computing TDR (June 2005) and basically the same as in the Computing Model document (Dec. 2004) submitted for the LHCC review in January 2005
- The sum of 30-35 Tier-2s will provide ~40% of the total ATLAS computing and disk storage capacity
  - CPUs for full simulation production and user analysis jobs, on average split 1:2 between central simulation and analysis jobs
  - Disk for AODs, samples of ESDs and RAW data, and most importantly for selected event samples for physics analysis
- We do not ask Tier-2s to run any particular service for ATLAS in addition to providing the Grid infrastructure (CE, SE, etc.)
  - All data management services (catalogues and transfers) are run from Tier-1s
- Some "larger" Tier-2s may choose to run their own services instead of depending on a Tier-1; in this case, they should contact us directly
- Depending on local expertise, some Tier-2s will specialise in one particular task, such as calibrating a very complex detector that needs special access to particular datasets

7 Dario Barberis: ATLAS Activities at Tier-2s, Tier-2 Workshop, 12-14 June 2006. ATLAS Analysis Work Model:
1. Job preparation: on the local system (shell), prepare JobOptions, run Athena (interactive or batch), get the output.
2. Medium-scale testing: on the local system (Ganga), prepare JobOptions, find the dataset through DDM, generate & submit jobs; Athena runs on the Grid; back on the local system (Ganga), do the job book-keeping, access the output from the Grid and merge results.
3. Large-scale running: on the local system (Ganga), prepare JobOptions, find the dataset through DDM, generate & submit jobs; ProdSys runs Athena on the Grid and stores the output on the Grid; back on the local system (Ganga), do the job book-keeping and get the output.
Analysis jobs must run where the input data files are, as transferring data files from other sites may take longer than actually running the job.
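For step 2, a Ganga session looks roughly like the sketch below. It is typed at the Ganga prompt, where Job, Athena, DQ2Dataset and LCG are predefined by the Ganga/GangaAtlas plugins; the exact attribute names are assumptions here, since they vary between releases:

    # Sketch of a medium-scale analysis submission from the Ganga prompt.
    # Attribute names are assumptions and vary between Ganga releases.
    j = Job()
    j.application = Athena()                         # run the ATLAS Athena framework
    j.application.option_file = 'MyAnalysis_jobOptions.py'  # the prepared JobOptions
    j.inputdata = DQ2Dataset()                       # locate the dataset through DDM
    j.inputdata.dataset = 'some.atlas.dataset.AOD'   # placeholder dataset name
    j.backend = LCG()                                # send the job to the Grid
    j.submit()                                       # Ganga then does the book-keeping

The brokering is the point of the last rule on the slide: the Grid middleware is expected to place the job at a site that already hosts the input dataset, rather than moving the data to the job.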

8 … but who contributes what? The C-MoU!

9 Annex 6.4: Resources pledged:

10 Annex 3.3: Tier-2 Services. The following services shall be provided by each of the Tier2 Centres in respect of the LHC Experiments that they serve:
i. provision of managed disk storage providing permanent and/or temporary data storage for files and databases;
ii. provision of access to the stored data by other centres of the WLCG and by named AFs as defined in paragraph 1.4 of this MoU;
iii. operation of an end-user analysis facility;
iv. provision of other services, e.g. simulation, according to agreed Experiment requirements;

11 v. ensure network bandwidth and services for data exchange with Tier1 Centres, as part of an overall plan agreed between the Experiments and the Tier1 Centres concerned. All storage and computational services shall be “grid enabled” according to standards agreed between the LHC Experiments and the regional centres. The following parameters define the minimum levels of service. They will be reviewed by the operational boards of the WLCG Collaboration.

12 AUSTRIAN GRID: Grid Computing Infrastructure Initiative for Austria. Business Plan (Phase 2). Jens Volkert, Bruno Buchberger (Universität Linz), Dietmar Kuhn (Universität Innsbruck). March 2007

13 Austrian Grid II: supported project 5.4 M€, contribution by groups from other sources 5.1 M€, total 10.5 M€.
Structure: Research Center, Development Center, Service Center, plus 19 Work Packages: 1 Administration, 10 Basic Research, 8 Integrated Applications.

14 PAK and PMB representatives: D. Kranzlmüller (VR G. Kotsis), W. Schreiner, Th. Fahringer

15 The C-MoU is still not signed by Austria, but there is light at the end of the tunnel:
- Proposal for a national federated Tier-2 (ATLAS+CMS) in Vienna accepted in 2008
- Austrian Grid Phase II (2007-2009) is the launching project
- Expected to be sustainable after 2010

16 Personnel: 70 man-years in total, 15 for the Service Center, 4.5 for the federated Tier-2.
5 FTE for the Service Center, 1.5 for the federated Tier-2 (Innsbruck); Vienna is expected to use presently vacant positions (estimated at 1.5 FTE, too).
At 34 k€/FTE/yr (51 k€/y for the Tier-2), this gives:
Hardware: 1,053 k€
Personnel: 153 k€
Total: 1,206 k€
Formalities:
Fördervertrag (funding contract): signed Jan. '08 in the Ministry
Konsortialvertrag (consortium agreement): to be signed March 6?
C-MoU: to be signed soon …
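The figures are internally consistent, as a quick check shows (a sketch; reading the German-style "1.053" as 1,053 k€ and attributing the personnel line to the 4.5 Tier-2 man-years are our assumptions):

    # Sanity check of the slide-16 budget (all amounts in k€).
    COST_PER_FTE = 34                      # k€ per FTE per year
    print(1.5 * COST_PER_FTE)              # 51.0 -> the quoted 51 k€/y for the Tier-2
    personnel = 4.5 * COST_PER_FTE         # 153.0 k€, as on the slide
    hardware = 1053                        # k€ ("1.053 k€" in German notation)
    print(personnel, hardware + personnel) # 153.0, 1206.0 -> the 1,206 k€ total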

17

18 This graph shows a snapshot of data throughput at peak operation to the Tier-1 centres. Each bar represents the average throughput over 10 minutes, and the colours represent the 10 Tier-1 centres. The average throughput is fairly stable at around 600 MB/s, the equivalent of around 1 CD of data being shipped out of CERN every second. The rates currently observed on average are equivalent to around 1 PetaByte per month, close to the final data rates needed for data taking.
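Both equivalences follow from simple unit conversions (a sketch; the ~700 MB CD capacity, the 30-day month and the ~400 MB/s sustained average implied by 1 PB/month are assumptions, not slide figures):

    # Unit conversions behind the slide-18 claims.
    CD_MB = 700                            # assumed capacity of one CD
    print(600 / CD_MB)                     # ~0.86 -> "around 1 CD per second" at 600 MB/s
    print(400 * 86400 * 30 / 1e9)          # ~1.04 PB in a 30-day month at ~400 MB/s sustained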

