
Slide 1: First Ideas on Distributed Analysis for LHCb
LHCb Software Week, CERN, 28th March 2001
Glenn Patrick (RAL)
http://hepwww.rl.ac.uk/lhcb/physics/lhcbcern280301.ppt

Slide 2: Analysis and the Grid?
Monte Carlo production is readily mapped onto a Grid architecture because:
- It is a well-defined problem using the same executable.
- It already requires distributed resources (mainly CPU) in large centres (e.g. Lyon, RAL, Liverpool...).
- Few people are involved.
Analysis is much more inventive/chaotic and will involve far more people in a wide range of institutes. How easily it is perceived to map onto the Grid depends on where we sit on the Hype Cycle...

Slide 3: Hype Cycle of Emerging Technology (courtesy of Gartner Group)
[Figure: hype plotted against time, rising from the Technology Trigger to the Peak of Inflated Expectations, falling into the Trough of Disillusionment, then climbing the Slope of Enlightenment to the Plateau of Productivity.]

Slide 4: Issues
There are two basic issues:
1. What is the data model for the experiment? Most work on this was done BG (before Grid). Is it still relevant?
   - Do we move analysis jobs to the data, or the data to the jobs?
   - What is the minimum dataset required for analysis (AOD, ESD)?
   - Are we accessing objects or files?
   - Interactive versus batch computing.
2. What services and interfaces have to be provided to grid-enable the LHCb analysis software? Difficult until a working Grid architecture emerges. Have to make a start and gradually evolve?

Slide 5: Data Model - Networking Evolution
In the UK, WorldCom is providing the national backbone for SuperJanet4 from March 2001:
- 2000: SuperJanet3, 155 Mbit/s
- 1Q2001: 16 x SuperJanet3, 2.5 Gbit/s
- 4Q2001: 64 x SuperJanet3, 10 Gbit/s
- 2Q2002: 128 x SuperJanet3, 20 Gbit/s
A few years ago, most bulk data was moved by tape. Now, almost all data from RAL is moved over the network. More scope for moving data to the application?

Slide 6: SuperJanet4 UK Backbone, March 2001
[Map: WorldCom core nodes (Glasgow, Edinburgh, Warrington, Leeds, Reading, London, Portsmouth) linking regional networks (NorMAN, YHMAN, EMMAN, EastNet, LMN, Kentish MAN, LeNSE, SWAN & BWEMAN, South Wales MAN, TVN, MidMAN, Northern Ireland, North Wales MAN, NNW, C&NL MAN) and external links, with interfaces ranging from 155 Mbit/s and 622 Mbit/s single fibre to 2.5 Gbit/s single/dual fibre, plus a 2.5 Gbit/s development network.]

Slide 7: Data Model - Last Mile Problem?
Having a fast backbone is not much use if local bottlenecks exist (typically 100 Mbit/s). Need to do point-to-point tests using realistic datasets.
Connection: rate, time to move a 750 MB tape
- RAL CSF -> RAL PPD: 1600 kB/s, 8 minutes
- RAL CSF -> CERN: 360 kB/s, 35 minutes
- RAL CSF -> Liverpool: ~90 kB/s, 2.3 hours
These were very crude tests done on a bad day; a spectrum of tests with realistic datasets, new tools, etc. is still needed. Parallel Grid-FTP (multiple streams) reached 1 MB/s, RAL -> CERN. But data flow increases down the analysis chain...
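The quoted times follow directly from the measured rates. A minimal Python sketch of the arithmetic (my own back-of-envelope check, not part of the original slide):

```python
# Rough check of the quoted transfer times for the 750 MB test dataset
# at the rates measured between sites.
rates_kb_per_s = {
    "RAL CSF -> RAL PPD": 1600,
    "RAL CSF -> CERN": 360,
    "RAL CSF -> Liverpool": 90,
}
dataset_kb = 750 * 1000  # 750 MB expressed in kB

for link, rate in rates_kb_per_s.items():
    minutes = dataset_kb / rate / 60
    print(f"{link}: {minutes:.0f} min")  # ~8 min, ~35 min, ~139 min (2.3 h)
```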

Slide 8: Increasing Data Flow Down the Chain (Ref: Tony Doyle, WP2/ATLAS)
[Diagram: real data or Monte Carlo flows from Raw Data and ESD at Tier 0/1 (collaboration-wide), through event selection to Analysis Object Data (AOD) and Event Tags at Tier 2 (analysis groups), then via analysis and skims, using calibration data, to Physics Objects and physics analysis at Tiers 3/4 (individual physicists). Data flow increases down the chain.]

Slide 9: Analysis Cycle (for each physicist)
[Diagram: AOD and Group Analysis Tags feed physics analysis on an analysis workstation, together with private data (e.g. ntuples), producing physics results.]
Which datasets are really needed for analysis?
- AOD: for events with "interesting" Group Analysis Tags.
- Calibration Data: few physicists, and for very few events.
- Raw Data and ESD: some physicists, for a small sample of events.
- Generator Data: for Monte Carlo events.
Requirements are likely to be different at startup.

Slide 10: Datasets 2007 - Hoffmann Report

                      ALICE (pp)   ATLAS     CMS       LHCb
  RAW per event       1 MB         1 MB      1 MB      0.125 MB
  ESD per event       0.1 MB       0.5 MB    0.5 MB    0.1 MB
  AOD per event       10 kB        10 kB     10 kB     20 kB
  TAG per event       1 kB         0.1 kB    1 kB      1 kB
  Real data storage   1.2 PB       2 PB      1.7 PB    0.45 PB
  Simulation storage  0.1 PB       1.5 PB    1.2 PB    0.36 PB
  Calibration storage 0.0          0.4 PB    0.01 PB   0.01 PB

Slide 11: Physics Use-Cases
Baseline model assumes:
- The Production Centre stores all phases of data (RAW, ESD, AOD and TAG).
- CERN is the production centre for real data.
- TAG and AOD datasets are shipped to Regional Centres.
- Only 10% of ESD data is moved to outside centres.
LHCb has smaller dataset sizes (but perhaps more specialised requirements), so more options may be available? Even with 2 x 10^9 events/year, the total AOD sample is only 40 TB/year.
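The 40 TB/year figure follows from the LHCb AOD event size in the Hoffmann table. A one-line check (my own arithmetic, assuming 1 TB = 10^12 bytes):

```python
# Check of the quoted annual AOD volume: 2e9 events/year at 20 kB/event.
events_per_year = 2e9
aod_kb_per_event = 20
total_tb = events_per_year * aod_kb_per_event * 1e3 / 1e12  # kB -> bytes -> TB
print(f"{total_tb:.0f} TB/year")  # 40 TB/year
```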

Slide 12: Analysis Interface - Gaudi Meets the Grid?
[Diagram:
- Gaudi services: Application Manager, Job Options Service, Detector Description, Event Data Service, Histogram Service, Message Service, Particle Property Service, GaudiLab Service, plus logical data stores (Event, Detector, Histogram, Ntuple).
- Grid services: Information Services, Scheduling, Security, Monitoring, Data Management, Service Discovery, Database Service(?), Meta Data, Data.
- The two sides connect through standard interfaces and protocols.]
Most Grid services are producers or consumers of meta-data.

Slide 13: High-Level Interfaces
Need to define high-level Grid interfaces essential to Gaudi, especially relating to data access. For example:
- High level: Data Query, Data Locator, Data Mover, Data Replication.
- Medium and low levels: mass storage systems such as CASTOR, HPSS and other MSS.
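To make the layering concrete, here is a minimal sketch of what the Data Query / Data Locator / Data Mover interfaces might look like from Gaudi's side. The class and method names below are illustrative assumptions, not an agreed LHCb or Gaudi design.

```python
# Hypothetical sketch of the high-level data-access interfaces named on the
# slide; method names and signatures are assumptions for illustration only.
from abc import ABC, abstractmethod
from typing import List


class DataQuery(ABC):
    """Select logical datasets/events matching a cut expression."""
    @abstractmethod
    def select(self, cut: str) -> List[str]: ...


class DataLocator(ABC):
    """Resolve a logical dataset name to physical replicas (CASTOR, HPSS, other MSS)."""
    @abstractmethod
    def locate(self, logical_name: str) -> List[str]: ...


class DataMover(ABC):
    """Stage or replicate a physical copy to where the analysis job will run."""
    @abstractmethod
    def move(self, source_url: str, destination: str) -> None: ...
```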

Slide 14: Analysis and the Grid
In the Grid, analysis appears to be seen as a series of hierarchical queries (cuts) on databases/datasets, e.g. (PTRACK < 150.0) AND (RICHpid = pion).
Architectures are based on multi-agent technology: an intelligent agent is a software entity with some degree of autonomy that can carry out operations on behalf of a user or program.
Need to define "globally unique" LHCb namespace(s). The ATF proposes using URI syntax, e.g. http://lhcb.cern.ch/analy/Bpipi/event1.dat
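As a concrete illustration of such a query, a small Python sketch applies the example cut to event records addressed by URIs in the proposed namespace. Only the cut expression and the first URI come from the slide; the event dictionaries, field spellings and the second URI are assumptions made for the example.

```python
# Toy example: apply the cut (PTRACK < 150.0) AND (RICHpid = pion) to a few
# event records named with the URI-style namespace proposed by the ATF.
# The records and the second URI are invented for illustration.
events = [
    {"uri": "http://lhcb.cern.ch/analy/Bpipi/event1.dat", "PTRACK": 120.0, "RICHpid": "pion"},
    {"uri": "http://lhcb.cern.ch/analy/Bpipi/event2.dat", "PTRACK": 180.0, "RICHpid": "kaon"},
]

def passes_cut(event):
    return event["PTRACK"] < 150.0 and event["RICHpid"] == "pion"

selected = [e["uri"] for e in events if passes_cut(e)]
print(selected)  # ['http://lhcb.cern.ch/analy/Bpipi/event1.dat']
```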

Slide 15: Agent Architecture (Serafini et al.)
[Diagram: users 1..n submit queries to an agent-based query facilitator, which applies query execution and caching strategies over an index and mass storage systems MSS 1..k, each with a disk cache and tape robotics.]
Contains a variety of agents: user agents, index agents and MSS agents.
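A toy sketch of how those roles might interact: a user query reaches the facilitator, an index agent routes it to the mass storage system holding the dataset, and an MSS agent serves it from disk cache or tape. All names and behaviour below are assumptions for illustration, not Serafini et al.'s actual design.

```python
# Illustrative (assumed) agent roles: the index agent routes a dataset request
# to the right MSS; the MSS agent serves it from disk cache if possible, else tape.
class IndexAgent:
    def __init__(self, catalogue):
        self.catalogue = catalogue  # dataset name -> MSS site

    def route(self, dataset):
        return self.catalogue[dataset]


class MSSAgent:
    def __init__(self, site, cached):
        self.site, self.cached = site, set(cached)

    def fetch(self, dataset):
        source = "disk cache" if dataset in self.cached else "tape robotics"
        return f"{dataset} served by {self.site} from {source}"


# Facilitator wiring for a single user query (dataset name is invented).
index = IndexAgent({"Bpipi-AOD": "MSS 1"})
mss_agents = {"MSS 1": MSSAgent("MSS 1", cached=["Bpipi-AOD"])}

site = index.route("Bpipi-AOD")             # index agent decides where the data lives
print(mss_agents[site].fetch("Bpipi-AOD"))  # MSS agent serves it
```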

Slide 16: Evolving LHCb Analysis Testbeds?
[Diagram: candidate components include RAL CSF (236 Linux CPUs, IBM 3494 tape robot) and the RAL DataGrid Testbed; Liverpool MAP (300 Linux CPUs); a Glasgow/Edinburgh "Proto-Tier 2"; institutes such as RAL (PPD), Bristol, Imperial College, Oxford and Cambridge; plus CERN, France and Italy.]

Slide 17: Conclusions
1. Need a better understanding of how the Data Model will really work for analysis. Objects versus files?
2. Pragmatic study of the performance/topology/limitations of national (and international) networks, feeding back into 1.
3. Require definition of high-level Grid services which can be exploited by Gaudi. Agent technology?
4. Need some realistic "physics" use-cases, feeding back into 1 and 3.
5. Accumulate experience of running Gaudi in a distributed environment (e.g. CERN to UK).

