Simulation use cases for T2 in ALICE

Simulation use cases for T2 in ALICE
Latchezar Betev
T2 Workshop, CERN – June 12, 2006

Outline
- The ALICE computing model and tiered structure
- MC production – specifics
- T2 setup – internal structure and data transfers
- Testing of MC production

The ALICE computing model
- Minimum of intrinsic hierarchy and of specific resource categories: intelligent workload scheduling allows optimal utilization of the resources
- With the exception of critical tasks, the tiered computing-centre architecture is not strictly enforced in ALICE
- The differences between a T1 and a T2 are:
  - Absence of an MSS at the T2 – custodial storage is 'remote' from the T2 point of view
  - Lower QoS – 24/7 service is typically not required by the MoU, although the T2s typically show excellent support

The ALICE computing model (2)
- Distribution of tasks per tier in the ALICE computing model
- T2s are responsible for MC simulation and chaotic data analysis

MC production on T2s
- There are approximately as many MC events in ALICE as data events, so the role of the T2s is rather important
- The computing resources (CPU, disk storage) provided by the T2s amount to more than 50% of the total
- We currently have ~35 T2 sites providing ~2000 kSI2k of CPU and ~350 TB of disk space (a rough per-site average is sketched below)
- In 2007-2008 the number of sites will stay about the same, while the CPU and storage will roughly double
- Production of MC events
  - An MC production job is like any other job in the ALICE model
  - All jobs (including MC production) are submitted to the central ALICE Grid task queue – no backdoor access to resources
  - This allows submission to the sites to be prioritized
  - Since MC production does not require a large amount of input data, the matchmaking between jobs and site resources is trivial
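As a rough, purely illustrative per-site average (individual T2 sizes vary widely; the even split is an assumption, not a figure from the slide):

    2000 kSI2k / 35 sites ≈ 57 kSI2k of CPU per site
    350 TB / 35 sites ≈ 10 TB of disk per site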

MC production on T2s (2)
- Typical JDL of an MC job in ALICE (shown as an image on the original slide; an illustrative sketch follows below)
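The JDL itself is not preserved in this transcript. The sketch below is an illustrative reconstruction of what an AliEn MC production JDL of that period typically looked like; every value (executable name, package versions, paths, counters, number of subjobs) is an assumption made for illustration, and the '#' comments are added here for readability only.

    Executable    = "aliroot_mc.sh";                  # hypothetical production wrapper script
    Packages      = { "VO_ALICE@AliRoot::v4-04-Rev-08",
                      "VO_ALICE@GEANT3::v1-8",
                      "VO_ALICE@ROOT::v5-10-00" };    # resolved on the worker node by PackMan
    InputFile     = { "LF:/alice/cern.ch/user/a/aliprod/prod2006/Config.C" };
    Split         = "production:1-1000";              # expands into 1000 identical subjobs
    TTL           = "86400";                          # expected maximum run time, in seconds
    OutputArchive = { "log_archive.zip:stdout,stderr,*.log",
                      "root_archive.zip:*.root@ALICE::<T2_site>::SE" };   # kept on the local T2 SE
    OutputDir     = "/alice/sim/2006/LHC06a/#alien_counter#";
    Requirements  = member(other.GridPartitions, "Production");

The Split directive is what turns a single master JDL into many identical MC subjobs, and the Packages list is what PackMan resolves and installs on the worker node before the job starts, matching the software-distribution and prioritization points made on the surrounding slides.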

MC production on T2s (3)
- The application software is distributed through the AliEn tools (PackMan)
  - For 'official' production
  - For user production
  - PackMan also tracks the software versions and dependencies
- The application software includes
  - Event generators (HIJING, PYTHIA, ...)
  - Transport codes (GEANT3, FLUKA)
  - Reconstruction code (AliRoot)
- MC events are stored on the local SE and transferred to the hosting T1 for custodial storage
- Analysis of these events (at the T2) by all members of the collaboration is assured through Grid access

MC production on T2s
- This model of production has been (and is being) extensively tested in three data challenges: PDC'04, PDC'05 and the ongoing PDC'06
- Standard setup – LCG/gLite with an ALICE VO-box (as on the T1s)
  - We do require a VO-box at every site supporting ALICE
  - The ALICE-specific services and their functions are well documented
- All job submission to T2s goes through the Grid
- Specific requirements for the T2s (driven by MC, but applicable to other tasks as well):
  - Large memory consumption per job: 2 GB max => 2 GB of RAM per CPU (core)
  - Job duration – typically 8 kSI2k hours (generation, transport, reconstruction); see the wall-clock estimate below
  - Input data – a small set of configuration files + conditions data
  - Output data – up to 1.5 GB/job, typically 300 MB; this amount of scratch disk space is required on the WN
  - The jobs are (naturally) CPU-intensive, with no stringent requirement on storage
  - One ALICE-specific queue at the site – all prioritization is handled in the central AliEn task queue prior to job submission
- Produced MC data are stored on the local SE and transferred to the host T1 for custodial storage
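To translate the 8 kSI2k-hour job duration into wall-clock time, assume (purely for illustration; the per-core rating is not from the slide) worker-node cores rated at 1 to 2 kSI2k, roughly typical of 2006 hardware:

    8 kSI2k·h / 1 kSI2k ≈ 8 h of wall-clock time per job
    8 kSI2k·h / 2 kSI2k ≈ 4 h of wall-clock time per job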

T2 support
- Regional principle – a T1 expert is responsible for the T2s of the country/region
  - Setup of the ALICE-specific services
  - Training of T2 personnel
  - Site-specific problem tracking and solving – through direct contact with the administrators and through the GGUS ticketing system
    - The latter is getting better with time

PDC'06-specific tests of T2s
- Generally, ALICE is currently testing all elements of the computing model
- MC production – ongoing on all configured sites
- Local storage of data – on xrootd-enabled disk servers (see the presentation of A. Trunov at this workshop)
- In August 2007 – T2 -> T1 transfer (FTS) tests:
  - The relational matrix T2 -> host T1 is being built
  - Installation of the FTS client infrastructure
- User analysis on T2s will be discussed in another presentation (see the talk of A. Peters tomorrow)

Conclusions
- T2s will be responsible for MC production and chaotic user analysis
- They are integrated in ALICE through the Grid tools for job submission, storage and data migration
- In their role as MC producers, the T2s have been used extensively in a series of data challenges
- A few remaining tools for data replication are still to be tested and integrated this year
- MC production is treated the same way as the other computational tasks (data production, analysis) in the ALICE model and as such does not impose any special requirements