Lyon Analysis Facility - status & evolution - Renaud Vernet.


What is PROOF? PROOF is the Parallel ROOT Facility. It is designed to be an alternative to the GRID for data analysis: the available CPU power can be large and results are obtained quickly. The analysis code is submitted to the master, which balances the load between the slaves (the available CPUs).
[Diagram of our current setup: a PROOF master, worker nodes with cluster disks (2.5 TB), and an xrootd SE (17 TB).]
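
As an illustration of this workflow, a minimal sketch of submitting an analysis to the LAF from a ROOT session might look as follows (the tree name and file path are hypothetical and the SE redirector host is assumed; the master name ccapl0001 is taken from the next slide):

// Connect to the PROOF master; it distributes the processing to the workers.
TProof *proof = TProof::Open("ccapl0001.in2p3.fr");      // master host (domain assumed)

// Build a chain of input files and attach it to the PROOF session.
TChain *chain = new TChain("esdTree");                    // tree name is an assumption
chain->Add("root://ccxrdse.in2p3.fr//alice/run1.root");   // hypothetical SE redirector and path
chain->SetProof();

// The selector source is shipped to the workers, compiled there and run in parallel;
// the outputs of all workers are merged back on the master.
chain->Process("MySelector.C+");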

PROOF and LAF
Pros:
- ROOT = analysis software common to all HEP experiments (C++ based)
- Few requirements on the OS and system
- Experiment-specific libraries are compiled on PROOF using PAR files (tar archives) ⇒ they can be shared between several experiments/VOs (an example is sketched after this slide)
- Priorities can be defined
Cons:
- The main constraint is on analysis code design: the code must inherit from ROOT's TSelector (a skeleton is sketched after this slide)
- Not fully "interactive"… but monitoring is possible on each worker
- Can communicate with only one data-file manager: xrootd
Our current setup:
- All nodes: Intel(R) Xeon(R) CPU / 8 cores [2.66 GHz] / 16 GB RAM / SL4
- 1 master (ccapl0001) + 20 slaves (ccapl00[02-21]) = 160 workers
- SE (xrootd): 17 TB
- Identification to PROOF based on certificate
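
For illustration, here is a minimal sketch of the TSelector skeleton that PROOF requires, together with the way a PAR package might be enabled in a session (the class and package names are hypothetical; ROOT can also generate such a skeleton with TTree::MakeSelector):

// MySelector.h -- sketch of an analysis class usable with PROOF.
#include <TSelector.h>
#include <TH1F.h>

class MySelector : public TSelector {
public:
   TH1F *fHist;                             // example output histogram

   MySelector() : fHist(0) {}
   virtual ~MySelector() {}

   virtual Int_t  Version() const { return 2; }
   virtual void   SlaveBegin(TTree *) {     // runs on each worker: book the outputs
      fHist = new TH1F("hPt", "pT", 100, 0., 10.);
      fOutput->Add(fHist);                  // objects in fOutput are merged by the master
   }
   virtual Bool_t Process(Long64_t entry) { // called for every event on a worker
      // read the entry and fill fHist here
      return kTRUE;
   }
   virtual void   Terminate() {}            // runs on the client after the merging

   ClassDef(MySelector, 1);
};

// In the PROOF session, an experiment library shipped as a PAR file (tar archive)
// is uploaded once and then built and loaded on every worker:
//   gProof->UploadPackage("ESDlib.par");   // package name is hypothetical
//   gProof->EnablePackage("ESDlib");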

Live demo...

A few results (with ALICE software)
[Plot: "Projection X" — event processing rate.]
- Saturation at ~1800 events/s for typically 8-10 workers
- Data rate ~130 MB/s, of the order of the bandwidth between the data and the workers → is this the only limiting factor, or is another effect hiding behind it? The answer will come from tests with a larger bandwidth.

Situation for experiments: ALICE
- PROOF is already part of the computing model; it has been extensively tested and used for several years
- Analysis tasks belonging to AliRoot run on both the GRID and PROOF (a steering sketch follows this slide)
- Unique issue: direct access to data on the GRID and data staging. For this the AliEn services must run on Solaris, which is not yet the case; work is ongoing and will hopefully be completed soon
- Other experiments may face that problem too (?)
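
As a hedged illustration of that point, the same AliRoot tasks are steered to either backend through the analysis manager; a sketch (with a hypothetical dataset name, assuming the usual AddTask macros and, for the GRID case, a configured AliEn plugin) might look like this:

// Sketch: the same AliAnalysisTasks can be run on PROOF or on the GRID.
AliAnalysisManager *mgr = new AliAnalysisManager("train");
mgr->SetInputEventHandler(new AliESDInputHandler());
// ... AddTask macros create and connect the analysis tasks here ...
if (mgr->InitAnalysis()) {
   // On the LAF (PROOF): process a staged dataset (name is hypothetical).
   mgr->StartAnalysis("proof", "/ALICE/COMMON/LHC08c_sample");
   // On the GRID instead (requires a configured AliAnalysisAlien plugin):
   // mgr->StartAnalysis("grid");
}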

Situation for experiments: ATLAS
- Contact person: M. (CEA Saclay)
- Tests successful: library compilation OK, access to data OK
- Analysis code run on 500k events stored on the xrootd SE
[Plot: processing time (s) and data rate (MB/s) vs. number of CPUs.]

Situation for experiments: CMS
- Contact persons: M. Bluj, A. (LLR)
- No real tests performed yet
- Not all CMS libraries can be loaded: probable mismatch between the CMS flavour of ROOT and the bare ROOT used by PROOF… under investigation

Evaluation of different storage possibilities
Read data from (see the access sketch after this slide):
- "External" storage: the dedicated xrootd SE
- "Internal" storage: the hard disks of the cluster nodes
[Diagram: PROOF master and workers connected to the xrootd SE through a 1 Gb/s link.]
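
For illustration, the two read paths differ only in what is handed to ROOT when a file is opened (the host and file names below are hypothetical):

// "External" storage: the dedicated xrootd SE, read over the 1 Gb/s network link.
TFile *fExt = TFile::Open("root://ccxrdse.in2p3.fr//alice/data/run1.root");

// "Internal" storage: a file on the local disk of a cluster node
// (or on another worker's disk, again served through xrootd).
TFile *fInt = TFile::Open("/scratch/proof/run1.root");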

Performance benchmark (1 node only)
Three configurations were compared:
- Analysis on cluster disks: the worker reads data from another worker's disk
- Analysis on cluster disks: the worker reads data from its own disk
- Analysis on xrootd storage: the worker reads data from the SE
→ Performance is better when data are read from the external SE (over Ethernet)
→ If the local disks are to be used to store data, they must be replaced

Summary
The LAF is working…
- Successful tests from ATLAS and ALICE; CMS feedback is still needed
… but it can and must be improved:
- bandwidth between the SE and the nodes
- interface with the GRID services (data staging)
- Tests from several simultaneous users are needed → stress the system! The results will help define future improvements
- Possibility of using SSD disks for fast access (cache): can that be supported by PROOF/xrootd? To be investigated
- No constraint on the design; we are open to all good solutions