December 3, 2007, Pre-GDB meeting: CCRC08-1, ATLAS’ plans and intentions. Kors Bos, NIKHEF, Amsterdam.

Presentation transcript:

CCRC08-1: ATLAS’ plans and intentions
Kors Bos, NIKHEF, Amsterdam
December 3, 2007, Pre-GDB meeting

ATLAS tests

Throughput Tests
  – Test T0-T1 throughput with generated data
Cosmic Ray tests: M4, M5, M6, M7(?)
  – Primarily (sub)detector integration, 1st week
  – But also data collection, 2nd week
Full Dress Rehearsal: FDR0, FDR1, FDR2
  – Primarily a trigger & streaming test
  – But also data collection and distribution
Functional Tests (FT)
  – DQ2 functionality test

Full Dress Rehearsals

Simulate 1 day’s worth of real data
Inject it into the TDAQ and run from the SFOs onwards
Run the T0 as completely as possible
  – Merging, tape writing, calibration, processing, etc.
Run the CM (Computing Model) as completely as possible
  – Data distribution, re-processing, analysis, etc.
Run full MC production simultaneously
  – T2-T1 data upload, reconstruction in the T1s, etc.
FDR-1 in February and FDR-2 in April/May

Byte Stream Data for FDR-1

Use all RDO files from release 12
  – ~150 TByte
Currently copying those files to CERN
Merge them at CERN into the right mix: “mixing”
  – 1 day of data is ~20 TByte
Produce Byte Stream output files
  – Mixing may take several weeks
Inject those into the SFOs
Datasets per Stream are registered by the T0
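As a rough cross-check of the “several weeks” estimate, here is a minimal back-of-envelope sketch; the ~150 TB input volume is from the slide, while the ~100 MB/s aggregate mixing throughput is an assumed figure for illustration only.

```python
# Back-of-envelope check of the "mixing may take several weeks" estimate.
# The ~150 TB input volume is from the slide; the 100 MB/s aggregate
# throughput is an assumption, not a figure from the slides.
INPUT_TB = 150
THROUGHPUT_MB_PER_S = 100

seconds = INPUT_TB * 1e6 / THROUGHPUT_MB_PER_S   # 1 TB ~ 10^6 MB (decimal units)
weeks = seconds / (7 * 24 * 3600)
print(f"Mixing {INPUT_TB} TB at {THROUGHPUT_MB_PER_S} MB/s takes ~{weeks:.1f} weeks")
# -> ~2.5 weeks, i.e. "several weeks" as stated on the slide
```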

FDR-2

Same as FDR-1 but at higher luminosity
RDOs need to be produced with release 13 MC
  – Takes ~10 weeks, ~150 TB again
  – Mixing again takes ~4 weeks, output ~20 TB
  – Validation of release 13 MC foreseen end of January
FDR simulates 1 day of data taking
But it can be repeated over and over again
SFOs can only be reserved for 1 week max
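Putting the quoted durations together gives a rough readiness date for the FDR-2 byte-stream data. The sketch below assumes the release-13 RDO production starts right after validation and that the steps run sequentially; the exact start date is an assumption.

```python
from datetime import date, timedelta

# Rough FDR-2 readiness estimate from the figures on this slide, assuming
# the release-13 RDO production starts right after validation and the
# steps run one after the other (the exact start date is an assumption).
validation_done = date(2008, 1, 31)     # "validation ... foreseen end of January"
rdo_production = timedelta(weeks=10)    # "takes ~10 weeks"
mixing = timedelta(weeks=4)             # "mixing again takes ~4 weeks"

ready = validation_done + rdo_production + mixing
print(f"Byte Stream data for FDR-2 ready around {ready}")
# -> 2008-05-08, consistent with FDR-2 in April/May
```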

Planning

[Timeline chart, December through May, showing the CC Readiness Challenge phases 1 and 2, Full Dress Rehearsals 0, 1 and 2, Cosmic Ray tests M6 and M7, SRMv2 testing, Functional Tests, time for fixes and updates, the S&C Week, a Jamboree and the WLCG Workshop; the chart layout is not recoverable from the transcript.]

M5 data instead

It may be too optimistic to have the FDR by Feb. 1st
We can use M5 data instead for CCRC
We will select 20 TB, worth 1 day’s data taking
Rename it and use it again every day
M5 data is less interesting
  – Only straight tracks, no physics
  – Calibration not meaningful, even impossible
  – (Re-)processing is trivial, no AODs, but possible
  – Analysis is trivial also, but possible
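A minimal sketch of the reuse scheme described above: the same ~20 TB of M5 data gets a new, date-stamped dataset name each day so it can be injected as if it were fresh data. The naming pattern and helper function are hypothetical, not the actual ATLAS/DQ2 convention.

```python
from datetime import date, timedelta

# Hypothetical per-day dataset names for re-using the same ~20 TB of M5 data.
# The naming pattern is illustrative only, not the real ATLAS/DQ2 convention.
def daily_dataset_name(day: date, stream: str = "physics_M5") -> str:
    return f"ccrc08.{stream}.reuse.{day.strftime('%Y%m%d')}"

start = date(2008, 2, 4)   # assumed first day of the CCRC08-1 exercise
for offset in range(3):
    print(daily_dataset_name(start + timedelta(days=offset)))
# ccrc08.physics_M5.reuse.20080204
# ccrc08.physics_M5.reuse.20080205
# ccrc08.physics_M5.reuse.20080206
```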

Block 1: T0

Data collection in CASTOR
Archive to TAPE
Calibration processing *)
Merging
Processing → ESD, AOD *), DPD *)
Subscriptions

*) only when we can use FDR data

Block 1: T0 storage requirements

FDR data: 20 TB, M5 data: 20 TB
Calibration pool *)
Merge pool: 50 TB
5-day export buffer: 100 TB
CAF: 300 TB
Analysis pool
More accurate numbers soon
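To keep an eye on the overall T0 request, a small sketch summing the pools quoted above; the pools without a number on the slide (calibration and analysis) are left unquantified.

```python
# T0 storage request as quoted on this slide; pools without a number yet
# (calibration and analysis) are marked None.
t0_pools_tb = {
    "FDR data": 20,
    "M5 data": 20,
    "calibration pool": None,
    "merge pool": 50,
    "5-day export buffer": 100,
    "CAF": 300,
    "analysis pool": None,
}

quoted = sum(size for size in t0_pools_tb.values() if size is not None)
missing = [name for name, size in t0_pools_tb.items() if size is None]
print(f"Quoted so far: {quoted} TB, still unquantified: {', '.join(missing)}")
# -> 490 TB plus the calibration and analysis pools
```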

Block 2: T1s

Store RAW (share) on TAPE
Store ESD (share) on DISK
Store full AOD & DPD copy on DISK
Export AOD & DPD to Tier-2s on request
Re-processing has to be worked out in detail
Production processing has to be worked out in detail
More accurate numbers will follow

Block 2: T1 storage requirements

Want to use real (not test) endpoints
~20 TB/day → ~600 TB/month
Roughly 50% tape and 50% disk
~300 TB/month for 10 Tier-1s
So a 10% Tier-1 should have ~30 TB disk
Data can be removed shortly after February
Will provide more precise numbers
This accounts for primary RAW data only
Re-processing & production will follow
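The per-month figures follow directly from the ~20 TB/day export rate; the sketch below redoes that arithmetic, assuming a 30-day month and the roughly 50/50 tape/disk split stated on the slide.

```python
# Tier-1 storage arithmetic for the primary RAW data, as on this slide.
DAILY_EXPORT_TB = 20      # ~20 TB/day exported from the T0
DAYS_PER_MONTH = 30       # assumed 30-day month
DISK_FRACTION = 0.5       # roughly 50% tape / 50% disk

monthly_tb = DAILY_EXPORT_TB * DAYS_PER_MONTH        # ~600 TB/month in total
disk_tb = monthly_tb * DISK_FRACTION                 # ~300 TB/month on disk

share = 0.10                                         # a "10%" Tier-1
print(f"{monthly_tb} TB/month total, {disk_tb:.0f} TB/month on disk, "
      f"a {share:.0%} Tier-1 needs ~{disk_tb * share:.0f} TB of disk")
# -> 600 TB/month, 300 TB on disk, ~30 TB for a 10% Tier-1
```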

Oct 2007 shares

  Site      Share (%)   Block 1 Data (TB)
  BNL          25            75
  IN2P3        10            30
  SARA         15            45
  RAL          15            45
  FZK          10            30
  CNAF          5            15
  ASGC          5            15
  PIC           5            15
  NDGF          5            15
  TRIUMF        5            15
  Total       100           300
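The Block 1 data column is just each site’s share applied to the ~300 TB of disk-resident data per month quoted on the previous slide; a minimal sketch reproducing the table:

```python
# Block 1 data per site = MoU share applied to the ~300 TB/month of
# disk-resident data quoted on the previous slide.
shares_pct = {
    "BNL": 25, "IN2P3": 10, "SARA": 15, "RAL": 15, "FZK": 10,
    "CNAF": 5, "ASGC": 5, "PIC": 5, "NDGF": 5, "TRIUMF": 5,
}
TOTAL_TB = 300

for site, pct in shares_pct.items():
    print(f"{site:7s} {pct:3d}%  {TOTAL_TB * pct // 100:4d} TB")
print(f"{'Total':7s} {sum(shares_pct.values()):3d}%  {TOTAL_TB:4d} TB")
```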

Block 3: T2s

MC Simulation
HITS merging and upload to T1
Download AODs from T1
Analysis to be worked out in detail
More numbers will follow

Storage classes

To be presented by Simone