CBM Computing Model: First Thoughts
CBM Collaboration Meeting, Trogir, 9 October 2009
Volker Friese

Key figures

  Max. event rate (mbias Au+Au):       10^7 /s
  Estimated raw data size per event:   100 kB
  Raw data rate to FLES:               1 TB/s
  Archival rate:                       1 GB/s
  Run scenario:                        3 months/year at 2/3 duty cycle
                                       (why? ALICE: 8 months, PANDA: 6 months)
  Raw data volume per run year:        5 PB
  ESD data size per event:             100 kB (to be evaluated; ESD is much smaller than RAW for ALICE)

  Note: an on-site FLES is doable at these rates.
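These figures are mutually consistent; as a quick cross-check, here is a back-of-the-envelope sketch (added for illustration, not part of the original slides; all inputs are the numbers quoted above):

```python
# Back-of-the-envelope check of the key figures (not from the slides).
event_rate = 1e7                  # max. mbias Au+Au event rate [events/s]
raw_event_size = 100e3            # raw data size per event [bytes]
archival_rate = 1e9               # archival rate [bytes/s]

raw_rate_to_fles = event_rate * raw_event_size        # 1e12 B/s = 1 TB/s
reduction_factor = raw_rate_to_fles / archival_rate   # 1,000 (cf. the FLES slide below)

# Run scenario: 3 months per year at 2/3 duty cycle
beam_seconds = 3 * 30 * 24 * 3600 * 2 / 3             # ~5.2e6 s of data taking per run year
raw_volume_per_year = archival_rate * beam_seconds    # ~5.2e15 B, i.e. ~5 PB

print(f"raw data rate to FLES: {raw_rate_to_fles / 1e12:.1f} TB/s")
print(f"required on-line data reduction: factor {reduction_factor:.0f}")
print(f"archived raw data per run year: {raw_volume_per_year / 1e15:.1f} PB")
```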

On-line farm (FLES)
  – Data reduction by a factor of 1,000 required
  – Complicated trigger patterns require (partial) event reconstruction
  – Estimated size of the on-line farm: 60,000 cores
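The slide does not state how the 60,000-core figure is derived; purely as an implication of the numbers above (an assumption added here, not from the slides), a farm of that size keeping up with the full 10^7 events/s would have roughly 6 ms of CPU time per event:

```python
# Implied per-event CPU budget of the on-line farm (assumption: the farm must keep
# up with the maximum interaction rate of 1e7 events/s; not stated on the slide).
event_rate = 1e7                    # events/s
cores = 60_000
cpu_budget = cores / event_rate     # 6e-3 s = 6 ms of CPU time per event
print(f"CPU budget per event: {cpu_budget * 1e3:.0f} ms")
```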

Off-line reconstruction
  "Conventional" scenario: off-line reconstruction RAW -> ESD
  – Today's performance for event reconstruction: ≈ 10 s per event (to be reduced significantly)
  – Typically several reconstruction runs per data run
  – Target: 100 days per reconstruction run
  – Requires 60,000 cores
  – Could be executed between runs on the on-line farm
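The 60,000-core estimate is consistent with the key figures; a sketch of the arithmetic (added for illustration, assuming that off-line reconstruction processes the archived events, i.e. 1 GB/s at 100 kB/event):

```python
# Consistency check of the off-line core estimate (sketch; assumes the archived
# event rate of 1 GB/s / 100 kB = 1e4 events/s from the key-figures slide).
archived_event_rate = 1e9 / 100e3             # events/s written to the archive
beam_seconds = 3 * 30 * 24 * 3600 * 2 / 3     # data-taking seconds per run year
events_per_year = archived_event_rate * beam_seconds   # ~5.2e10 events

reco_time_per_event = 10.0                    # today's performance [CPU s / event]
target_wall_time = 100 * 24 * 3600            # 100 days per reconstruction run [s]

cores_needed = events_per_year * reco_time_per_event / target_wall_time
print(f"cores needed: {cores_needed:,.0f}")   # ~60,000
```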

Reconstruction: Offline = Online?
  Pros:
  – Partial reconstruction (STS, fast primaries) is required on-line anyway
  – Performance for global tracking (MUCH) drastically improved (A. Lebedev); similar gains expected for RICH+TRD (bottleneck: hit finder)
  – Complete on-line reconstruction is not unthinkable
  – Storage of ESD possibly dispensable
  Cons:
  – Trade-off between speed and accuracy / efficiency
  – Calibration?

CBM computing resource estimates for the AR-Report

                      CBM         PANDA
  Cores               60,000      66,000
  On-line storage     15 PB       12 PB
  Tape archive        11 PB/a     12 PB/a

  On-line storage corresponds to the raw data of 2 years + ESD + analysis;
  the tape archive to 2 copies of the raw data + ε.

  Target date: 2016
  Ramping: annual increase by 100 %, starting in 2010
  This gives CBM resources for 2010: 940 cores, 0.2 PB on-line storage, 0.2 PB tape archive
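The quoted 2010 starting values follow from doubling the resources every year up to the 2016 target; a minimal sketch of that ramp-up (added for illustration, assuming a plain factor-2 increase per year as stated above):

```python
# Ramp-up sketch: 100 % annual increase from 2010 to the 2016 target
# (assumption: plain doubling every year, as stated on the slide).
cores_2016, online_pb_2016, tape_pb_2016 = 60_000, 15.0, 11.0
for year in range(2010, 2017):
    f = 2 ** (2016 - year)
    print(year, round(cores_2016 / f), "cores,",
          f"{online_pb_2016 / f:.2f} PB on-line storage,",
          f"{tape_pb_2016 / f:.2f} PB tape archive")
# 2010: ~940 cores, ~0.23 PB on-line, ~0.17 PB tape -- matching the quoted 2010 figures
```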

Mass disk storage at GSI
  Lustre file system in production since 2008:
  – distributed storage
  – high bandwidth
  – fully scalable
  – dynamically upgradable
  – connected to long-term storage (tape archive)
  Current installation at GSI (largest non-military Lustre installation worldwide):
  – 100 servers, 1,600 disks
  – 1.1 PB capacity (1.5 PB by end of 2009)
  – 120 Gb/s throughput (200 Gb/s by end of 2009)
  – serves 3,000 cores (4,000 by end of 2009)

FAIR HPC System (vision)
  – Connect the computing centres of the "local" universities (Frankfurt, Darmstadt, Mainz) with GSI through high-speed links (1 Tb/s)
  – Serves as the core (Tier-0) for FAIR computing
  – Lossless and performant access to the experiment data stored at GSI; no replicas needed
  – 10 Gb/s connection to remote institutions using Grid technology
  Computing initiatives: GSI/FAIR HPC, CSC-Scout, HIC4FAIR, ...

Open questions
  Integration of CBM needs into the FAIR computing model:
  – an on-line farm close to the experiment is indispensable
  – exclusive use at least during runtimes
  CBM-GRID:
  – for user analysis and simulations
  – data model (replicas etc.)
  – context of FAIR-GRID
  Resources for simulations not yet included:
  – paradigm: one simulated event for each real event?