Summary: Distributed Data Analysis Track
F. Rademakers, S. Dasu, V. Innocente
CHEP06, TIFR, Mumbai
Outline
Introduction
Distributed Analysis Systems
Submission Systems
Bookkeeping Systems
Monitoring Systems
Data Access Systems
Miscellaneous
Conveners' impressions
We have only 20 min for the summary and therefore cannot do justice to all talks.
Track Statistics
Lies, damn lies and statistics:
number of talks: 23
number of cancellations: 2
number of no-shows: 1
average attendance: 25
minimum attendance: 12
maximum attendance: 55
average duration of talks: 23 min
equipment failures: 1 (laser pointer)
average outside temperature: 31 °C
average room temperature: 21 °C
What Was This Track All About?
Analysis Systems: DIAL, Ganga, PROOF, CRAB
Submission Systems: ProdSys, BOSS, DIRAC, PANDA
Bookkeeping Systems: JobMon, BOSS, BbK
Monitoring Systems: DashBoard, JobMon, BOSS, MonaLisa
Data Access Systems: xrootd, SAM
Miscellaneous: Go4, ARDA Grid Simulations, AJAX Analysis
Data Analysis Systems
The different analysis systems presented, categorized by experiment:
ALICE: PROOF
ATLAS: DIAL, GANGA
CMS: CRAB, PROOF
LHCb: GANGA
All systems support, or plan to support, parallelism.
Except for PROOF, all systems achieve parallelism via job splitting and serial batch submission (job-level parallelism).
Classical Parallel Data Analysis
[Diagram: a query is split by hand into myAna.C jobs submitted to the batch farm queues; the manager reads data files from storage via the catalog; job outputs are merged for the final analysis]
"Static" use of resources
Jobs frozen, 1 job / CPU
"Manual" splitting, merging
Limited monitoring (end of single job)
Possible large tail effects
From PROOF System by Ganis [98]
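To make the "manual" splitting and merging concrete, here is a minimal sketch written as a ROOT macro; the file names, job count and helper functions are illustrative assumptions, not details from the talk:

```cpp
// ROOT macro sketch of the "manual" workflow above. Everything here
// (file names, catalog contents, job count) is an illustrative assumption.
#include <fstream>
#include <vector>
#include <string>
#include "TString.h"
#include "TFileMerger.h"

// "Manual" splitting: write one input-file list per batch job; the user then
// submits one myAna.C job per list to the farm queues by hand.
void writeFileLists(const std::vector<std::string> &files, int nJobs)
{
   for (int j = 0; j < nJobs; ++j) {
      std::ofstream list(Form("job_%d.filelist", j));
      for (size_t k = j; k < files.size(); k += (size_t)nJobs)
         list << files[k] << "\n";
   }
}

// "Manual" merging: only after the last batch job has finished can its output
// be merged by hand into a single file for the final analysis.
void mergeOutputs(int nJobs)
{
   TFileMerger merger;
   for (int j = 0; j < nJobs; ++j)
      merger.AddFile(Form("output_job_%d.root", j));
   merger.OutputFile("final_analysis.root");
   merger.Merge();
}
```

Because every batch job must finish before the outputs can be merged, a single slow queue or node delays the whole analysis, which is the tail effect noted above.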
Interactive Parallel Data Analysis
[Diagram: the query (data file list + myAna.C) is sent to the MASTER of an interactive farm; the scheduler assigns files from storage via the catalog; merged outputs and merged feedback return to the user]
Farm perceived as an extension of the local PC
More dynamic use of resources
Automated splitting and merging
Real-time feedback
Much better control of tail effects
From PROOF System by Ganis [98]
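For comparison, a minimal sketch of how such an interactive PROOF query is typically driven from ROOT; the master host, file URLs, tree name and selector name are placeholders rather than details from the talk:

```cpp
// ROOT macro sketch of a PROOF session. The master host, file URLs, tree name
// and selector are placeholders, not details from the talk.
#include "TProof.h"
#include "TChain.h"

void runProofAnalysis()
{
   // Connect to the interactive farm; the master coordinates the workers
   TProof *proof = TProof::Open("proofmaster.example.org");
   if (!proof) return;

   // Describe the dataset as a chain of (remote) files
   TChain chain("Events");
   chain.Add("root://store.example.org//data/run_001.root");
   chain.Add("root://store.example.org//data/run_002.root");

   // Route Process() through PROOF: the master splits the chain into packets,
   // workers run the selector, and merged output plus feedback objects
   // stream back to the client in real time
   chain.SetProof();
   chain.Process("MyAna.C+");
}
```

Since the master hands out work in small packets and merges results incrementally, slower workers simply receive fewer packets, which is how the tail effects are kept under control.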
DIAL: Distributed Interactive Analysis of Large Datasets
A useful DIAL system has been deployed for ATLAS
Common analysis transformations
Access to current data
For AOD to histograms and large samples, 15 times faster than a single process
Easy-to-use ROOT interface
Web-based monitoring
Packaged datasets, applications and example tasks
Demonstrated viability of remote processing
Via Condor-G or PANDA
Need interactive queues at remote sites
With corresponding gatekeeper or DIAL service
Or improve PANDA responsiveness
From DIAL by Adams [39]
Ganga
Designed for data analysis on the Grid
LHCb will do all its analysis on T1's; T2's are mostly for simulation
System should not be general: we know all main use cases
Use prior knowledge
Identified use patterns
Aid the user in bookkeeping aspects
Keeping track of many individual jobs
Developed in cooperation between LHCb and ATLAS
From LHCb Experiences by Egede [317]
CRAB
Makes it easy to create a large number of user analysis jobs
Assumes all jobs are the same except for some parameters (event numbers to be accessed, output file names, …)
Allows distributed data to be accessed efficiently
Hides WLCG middleware complications; all interactions are transparent to the end user
Manages job submission, tracking, monitoring and output harvesting
The user does not have to worry about how to interact with sometimes complicated grid commands
Leaves time to get a coffee …
Uses BOSS as a Grid-independent submission engine
From CRAB by Corvo [273]
Submission Systems
The different submission systems, categorized by experiment:
ALICE: AliEn (not presented)
ATLAS: ProdSys, PanDA
CMS: BOSS
LHCb: DIRAC
These systems are the DDA launch vehicles for the Grid-based batch analysis solutions.
ATLAS Strategy
ATLAS will use all three main Grids: LCG/EGEE, OSG, NorduGrid
ProdSys was developed to provide seamless access to all ATLAS grid resources
At this point the emphasis is on the batch model to implement the ATLAS Computing Model
Interactive solutions are difficult to realize on top of the current middleware layer
We expect our users to send large batches of short jobs to optimize their turnaround
Scalability
Data access
From ATLAS Strategy by Liko [263]
BOSS: Batch Object Submission System
A tool for batch job submission, real-time monitoring and bookkeeping
Interfaced to many schedulers, both local and grid
Uses a relational database for persistency
Full logging and bookkeeping information is stored
Job commands: submit, kill, query and output retrieval
Custom job types can be defined, allowing monitoring specific to the submitted application
Significant new functionality has been identified and is being actively integrated into BOSS
From Evolution of BOSS by Wakefield [240]
BOSS Workflow
[Diagram: boss submit / boss query / boss kill commands talk to the BOSS DB; BOSS hands the job to a scheduler, which runs a wrapper on a farm node]
The user specifies the job parameters, including:
the executable name
the executable type (to turn on customized monitoring)
the output files to retrieve (for sites without a shared file system and grid)
The user tells BOSS to submit jobs, specifying the scheduler, i.e. PBS, LSF, SGE, Condor, LCG, gLite, etc.
A job consists of the job wrapper, the real-time monitoring service and the user's executable.
From Evolution of BOSS by Wakefield [240]
DIRAC
Data Access Systems
The different data access systems that were presented:
SAM: used by CDF in its CAF environment
xrootd server: used by BaBar, ALICE and STAR
All BaBar sites run xrootd; extensive deployment experience
Winner of the SC05 throughput test
Performs better than even the developers had expected and hoped for
xrootd client: many improvements in the xrootd client-side code
Latencies reduced using asynchronous read-ahead, client-side caching and asynchronous opens
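To illustrate how an analysis client benefits from these improvements, here is a hedged ROOT sketch of a remote read through xrootd with the tree cache enabled; the server, file path, tree name and cache size are assumptions, not values from the talks:

```cpp
// ROOT macro sketch of a remote read through the xrootd client. Server, path,
// tree name and cache size are assumptions, not figures from the talks.
#include "TFile.h"
#include "TTree.h"

void readViaXrootd()
{
   // Open a remote file over the root:// protocol
   // (TFile::AsyncOpen() can be used to hide the open latency instead)
   TFile *f = TFile::Open("root://xrootd.example.org//store/data/events.root");
   if (!f || f->IsZombie()) return;

   TTree *tree = nullptr;
   f->GetObject("Events", tree);
   if (!tree) return;

   // Client-side caching and read-ahead: the TTreeCache turns many small
   // branch reads into a few large, prefetched requests
   tree->SetCacheSize(10 * 1024 * 1024);
   tree->AddBranchToCache("*", kTRUE);

   for (Long64_t i = 0; i < tree->GetEntries(); ++i)
      tree->GetEntry(i);   // entries are now served largely from the local cache

   f->Close();
}
```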
Acknowledgments
A big thank you to the organizers
And to the speakers for the high-quality talks
Especially those whose talks were not properly summarized here
Hope to see you all at CHEP07 to see how the Distributed Data Analysis systems have evolved
Conveners' Observations
Distributed Data Analysis tools are of strategic importance
GANGA, DIAL, CRAB, PROOF, …
They can be a real differentiator
There is a large development activity going on in this area
However, none of these tools has yet been exposed to the expected large number of final analysis users
Development of a plethora of grid-independent access layers
DIRAC, BOSS, AliEn, PanDA, …
The gap between grid middleware capabilities and user needs, especially data location, placement and bookkeeping services, left room for this activity
Although appropriate now, convergence to one or two tools is desired
The CPU- and data-intensive portion of analysis is most suited for the grid
Skimming and organized "ROOT tree making" is enabled by these DDA tools
Advantage of adapting production-style tools to analysis
Can one adapt other parts of the production toolbox? Bookkeeping? Avoid the arcane work-group level bookkeeping that is common today
Interactive analysis on the grid, with its large latencies
PROOF is taking advantage of co-located CPUs for interactive analysis
In the era of multi-core CPUs this is only natural
Provides incremental data merging for prompt feedback to users
Most DDA tools, coupled to high-latency batch systems, are not quite capable of this
Block reservation of co-located nodes, a la the Condor MPI Universe, may enable PROOF capabilities over the grid
High-throughput AND low-latency storage access is critical for analysis
The attention to performance boosting by deferred opens, caching and read-ahead from the xrootd team is encouraging