




1 Data Management at CERN's Large Hadron Collider (LHC)
Dirk Düllmann, CERN IT/DB, Switzerland
http://cern.ch/db
http://pool.cern.ch

2 Outline
- Short introduction to CERN & the LHC
- Data management challenges
- The LHC Computing Grid (LCG)
- LCG data management components
- Object persistency and the POOL project
- Connecting to the Grid: the LCG Replica Location Service

3 CERN: The European Organisation for Nuclear Research
The European Laboratory for Particle Physics
- Fundamental research in particle physics
- Designs, builds & operates large accelerators
- Financed by 20 European countries (member states) plus others (US, Canada, Russia, India, ...)
- ~€650M budget, covering operation plus new accelerators
- 2000 staff plus 6000 users (researchers) from all over the world
Next major research project: the LHC, starting ~2007
- 4 LHC experiments, each with ~2000 physicists from ~150 universities
- Apparatus costing ~€300M per experiment; computing ~€250M to set up, ~€60M/year to run
- 10-15 year lifetime

4 [Aerial photo: the 27 km LHC ring near Geneva, with the Geneva airport and the CERN Computer Centre marked]

5 The LHC Machine
- Two counter-circulating proton beams
- Collision energy 7+7 TeV
- 27 km of magnets with a field of 8.4 tesla
- Superfluid helium cooled to 1.9 K
- The world's largest superconducting structure

6 The Online System
A multi-level trigger filters out background and reduces the data volume from 40 TB/s to 500 MB/s:
- Level 1 (special hardware): accepts the full 40 MHz collision rate (40 TB/s), passes 75 kHz (75 GB/s)
- Level 2 (embedded processors): passes 5 kHz (5 GB/s)
- Level 3 (PCs): passes 100 Hz (500 MB/s) on to data recording & offline analysis

7 LHC Data Challenges
- 4 large experiments, 10-15 year lifetime
- Data rates: 500 MB/s to 1.5 GB/s
- Total data volume: 12-14 PB per year, several hundred PB in total!
- Analysed by thousands of users world-wide
- Data are reduced from "raw data" to "analysis data" in a small number of well-defined steps

8 Data Handling and Computation for Physics Analysis
[Diagram: detector → event filter (selection & reconstruction) → raw data → event reprocessing → event summary data → batch physics analysis → analysis objects (extracted by physics topic) → interactive physics analysis, with event simulation feeding processed data into the chain] (les.robertson@cern.ch)

9 Planned Capacity Evolution at CERN
[Chart: projected mass storage, disk and CPU capacity for the LHC and other experiments, compared against Moore's law]

10 The LHC Computing Centre: Multi-Tiered Computing Models (Computing Grids)
[Diagram: CERN Tier 0/Tier 1 linked to national Tier 1 centres (Germany, USA, UK, France, Italy, ...), regional Tier 2 centres and physics groups, university and laboratory Tier 3 sites, down to physics-department desktops] (les.robertson@cern.ch)

11 LHC Data Models
LHC data models are complex!
- Typically hundreds (500-1000) of structure types (classes in OO)
- Many relations between them
- Different access patterns
LHC experiments rely on OO technology
- OO applications deal with networks of objects
- Pointers (or references) are used to describe inter-object relations
- We need to support this navigational model in our data store (see the sketch below)
[Diagram: an Event object pointing to a TrackList, Tracker and Calorimeter; Tracks in the TrackList point to a HitList of Hits]
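To make the navigational model concrete, here is a minimal C++ sketch of the Event → Track → Hit object network from the slide's diagram. All class and member names are illustrative assumptions, not actual experiment code:

```cpp
// Illustrative sketch only: hypothetical classes modelled on the slide's
// Event -> TrackList -> Track -> HitList -> Hit picture.
#include <vector>

struct Hit {
    double x, y, z;                // a measured space point
};

struct Track {
    std::vector<const Hit*> hits;  // references into the event's hit list
    double momentum = 0.0;
};

struct Event {
    std::vector<Hit>   hitList;    // the event owns its hits
    std::vector<Track> trackList;  // tracks refer back to those hits
};

// Navigation follows pointers, not key lookups: given an Event we can
// walk Event -> Track -> Hit directly. A persistent store for this data
// must preserve exactly these inter-object links.
double firstHitX(const Event& ev) {
    if (ev.trackList.empty() || ev.trackList.front().hits.empty()) return 0.0;
    return ev.trackList.front().hits.front()->x;
}
```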

12 What is POOL?
POOL is the common persistency framework for physics applications at the LHC
- Pool Of persistent Objects for LHC
Hybrid store: object streaming & relational database
- e.g. ROOT I/O for object streaming: complex data, simple consistency model (write once)
- e.g. an RDBMS for consistent metadata handling: simple data, transactional consistency
Initiated in April 2002
- Ramped up over the last year from 1.5 FTE to ~10 FTE
- A common effort between the LHC experiments and the CERN Database group on project scope, architecture and development => rapid feedback cycles between the project and its users
First larger data productions are starting now!

13 Component Architecture
POOL (like most other LCG software) is based on a strict component software approach
- Components provide technology-neutral APIs
- They communicate with other components only via abstract component interfaces
- Goal: insulate the very large experiment software systems from the concrete implementation details and technologies used today
POOL user code does not depend on any implementation libraries
- No link-time dependency on any implementation packages (e.g. MySQL, ROOT, Xerces-C)
- Component implementations are loaded at runtime via a plug-in infrastructure (see the sketch below)
The POOL framework consists of three major, weakly coupled domains
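The following C++ sketch illustrates the component idea under stated assumptions: client code sees only an abstract interface, and a concrete backend is selected at runtime. The interface, factory and backend names are invented for illustration; they are not the real POOL API, and the registry stands in for a real plug-in loader (which would use dlopen or similar):

```cpp
#include <functional>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

struct IStorageSvc {                        // technology-neutral API
    virtual ~IStorageSvc() = default;
    virtual void write(const std::string& container, const void* object) = 0;
};

struct RootStorageSvc : IStorageSvc {       // one concrete backend (stub)
    void write(const std::string&, const void*) override { /* stream via ROOT */ }
};

using Factory = std::function<std::unique_ptr<IStorageSvc>()>;

std::map<std::string, Factory>& registry() {
    static std::map<std::string, Factory> r{
        {"ROOT", [] { return std::make_unique<RootStorageSvc>(); }}};
    return r;
}

// Client code names a technology string, never a concrete type, so it has
// no link-time dependency on any implementation package.
std::unique_ptr<IStorageSvc> loadComponent(const std::string& tech) {
    auto it = registry().find(tech);
    if (it == registry().end()) throw std::runtime_error("unknown backend");
    return it->second();
}
```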

14 POOL Components
[Diagram: the POOL component stack, including the RDBMS-backed storage service]

15 POOL Generic Storage Hierarchy
- An application may access databases (e.g. streaming files) from one or more file catalogs
- Each database is structured into containers of one specific technology (e.g. ROOT trees or RDBMS tables)
- POOL provides a smart-pointer type, pool::Ref, to
  - transparently load objects from the back end into a client-side cache
  - define persistent inter-object associations across file or technology boundaries
[Diagram: hierarchy POOL Context → FileCatalog → Database → Container → Object]
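A minimal sketch of how a pool::Ref-style smart pointer could look to client code. The slide names pool::Ref; this simplified template, its token format and its lazy-loading behaviour are assumptions for illustration, not the actual POOL interface:

```cpp
#include <memory>
#include <string>

template <typename T>
class Ref {
public:
    explicit Ref(std::string token) : token_(std::move(token)) {}

    // Dereferencing triggers a transparent load from the back end into
    // the client-side cache on first use.
    const T* operator->() {
        if (!cached_) cached_ = load();
        return cached_.get();
    }

private:
    // Stand-in for the real lookup: resolve the token through the file
    // catalog and storage service, then deserialize into the cache.
    std::shared_ptr<T> load() { return std::make_shared<T>(); }

    std::string token_;         // persistent address: file id + container + object id
    std::shared_ptr<T> cached_; // client-side cache entry
};

struct Track { double momentum = 0.0; };

int main() {
    Ref<Track> trackRef("guid-1234/Tracks/0");  // hypothetical token
    double p = trackRef->momentum;              // object loaded lazily here
    return p == 0.0 ? 0 : 1;
}
```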

16 Data Dictionary & Storage
[Diagram: a C++ header is parsed by GCC-XML into LCG dictionary code; an abstract DDL feeds a code generator; the LCG dictionary serves reflection to other clients, and a gateway connects it to the CINT dictionary for technology-dependent data I/O]
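As a rough illustration of what a generated dictionary provides, the sketch below shows the kind of reflection information a code generator might emit for a class: its name plus the name, type and byte offset of each data member, so a storage service can stream objects without compile-time knowledge of their types. All names here are assumptions, not the LCG dictionary's actual data model:

```cpp
#include <cstddef>
#include <string>
#include <vector>

struct MemberInfo {
    std::string name;    // e.g. "momentum"
    std::string type;    // e.g. "double"
    std::size_t offset;  // byte offset within the object
};

struct ClassInfo {
    std::string name;                 // e.g. "Track"
    std::vector<MemberInfo> members;
};

struct Track { double momentum; int charge; };

// What a dictionary generator might emit for Track after parsing its header:
ClassInfo trackDictionary() {
    return {"Track",
            {{"momentum", "double", offsetof(Track, momentum)},
             {"charge",   "int",    offsetof(Track, charge)}}};
}
```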

17 POOL File Catalog
- Files are referred to inside POOL via a unique and immutable file identifier (FileID), generated by the system at file creation time
- This provides stable inter-file references
- FileIDs are implemented as Globally Unique Identifiers (GUIDs), which allows
  - creating consistent sets of files with internal references, without requiring a central ID allocation service
  - merging catalog fragments created independently, without modifying the corresponding data files (a sketch follows below)
[Diagram: logical file names (LFN1..LFNn) map via the file identity and its metadata to physical file names (PFN1..PFNn) plus their technology]
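A minimal sketch of the catalog idea under stated assumptions: immutable GUIDs key the mapping to physical locations, so independently built catalog fragments merge without touching the data files. The types are illustrative, not POOL's real catalog classes:

```cpp
#include <map>
#include <string>

struct Entry {
    std::string pfn;         // physical file name
    std::string technology;  // e.g. "ROOT/Tree"
};

// Key is the GUID-valued FileID; logical names would map to GUIDs in a
// separate alias table.
using Catalog = std::map<std::string, Entry>;

// Merging two fragments is a pure catalog operation: because GUIDs are
// globally unique, entries created at different sites never collide, and
// inter-file references inside the data files stay valid unchanged.
void merge(Catalog& target, const Catalog& fragment) {
    target.insert(fragment.begin(), fragment.end());
}
```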

18 EDG Replica Location Services: Basic Functionality
- Files have replicas stored at many Grid sites on Storage Elements
- Each file has a unique GUID; the locations corresponding to the GUID are kept in the Replica Location Service
- Users may assign aliases to the GUIDs; these are kept in the Replica Metadata Catalog
- The Replica Manager provides atomicity for file operations, assuring consistency of Storage Element and catalog contents
[Diagram: the Replica Manager coordinating the Replica Location Service, the Replica Metadata Catalog and the Storage Elements] (james.casey@cern.ch)
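The two catalog mappings just described can be sketched as follows: alias → GUID in the metadata catalog, and GUID → replica locations in the location service. All type and function names are illustrative assumptions, not the EDG API:

```cpp
#include <map>
#include <string>
#include <vector>

using Guid = std::string;

struct ReplicaLocationService {
    std::map<Guid, std::vector<std::string>> replicas;  // GUID -> PFNs on Storage Elements
};

struct ReplicaMetadataCatalog {
    std::map<std::string, Guid> aliases;                // user alias -> GUID
};

// Resolve a user-visible alias to the list of physical replicas by going
// through both catalogs in turn.
std::vector<std::string> locate(const ReplicaMetadataCatalog& rmc,
                                const ReplicaLocationService& rls,
                                const std::string& alias) {
    auto a = rmc.aliases.find(alias);
    if (a == rmc.aliases.end()) return {};
    auto r = rls.replicas.find(a->second);
    return r == rls.replicas.end() ? std::vector<std::string>{} : r->second;
}
```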

19 Interactions with Other Grid Middleware Components
[Diagram: the Replica Manager at the centre, connected to the Replica Location Service, Replica Optimization Service, Replica Metadata Catalog, SE Monitor, Network Monitor, Information Service, Resource Broker, Virtual Organization Membership Service and Storage Elements]
Applications and users interface to data through the Replica Manager, either directly or through the Resource Broker, from a User Interface or Worker Node. (james.casey@cern.ch)

20 RLS Service Goals
- To offer production-quality services for LCG 1 that meet the requirements of forthcoming (and current!) data challenges, e.g. CMS PCP/DC04, ALICE PDC-3, ATLAS DC2, LHCb CDC'04
- To provide distribution kits, scripts and documentation to assist other sites in offering production services
- To leverage many years' experience in running such services at CERN and other institutes: monitoring, backup & recovery, tuning, capacity planning, ...
- To understand the experiments' requirements for how these services should be established and extended, and to clarify current limitations
- Not targeting small-to-medium scale database applications that need to be run and administered locally (close to the user)

21 Conclusions
- Data management at the LHC remains a significant challenge because of the data volume, the project lifetime, and the complexity of the software and hardware setups
- The LHC Computing Grid (LCG) approach builds on middleware projects such as EDG and Globus and uses a strict component approach for physics application software
- The LCG POOL project has developed a technology-neutral persistency framework which is currently being integrated into the experiments' production systems
- In conjunction with POOL, a data catalog production service is provided to support several upcoming data productions in the hundreds-of-terabytes range

22 Component Overview
[Diagram: a Replica Location Index, Local Replica Catalog and Storage Element at each of CERN, CNAF, RAL and IN2P3, cross-linked so that each site's index can locate replicas held at the others]

23 LHC Software Challenges
Experiment software systems are large and complex
- Developed by teams of expert developers
- Permanent evolution and improvement for years
Analysis is performed by many end-user developers
- Often participating only for a short time
- Usually without a strong computer-science background
- They need a simple and stable software environment
We need to manage change over a long project lifetime
- Migration to new software and implementation languages
- New computing platforms, storage media
- New computing paradigms???
The data management system needs to be designed to confine the impact of unavoidable change during the project

24 Data Types, Volumes, Distribution & Access
[Diagram: the data hierarchy from RAW (~1 PB/yr, sequential access, held at Tier 0) through ESD (~100 TB/yr) and AOD (~10 TB/yr) to TAG (~1 TB/yr, random access, distributed to Tier 1), with the number of users growing as the data volume shrinks]

25 Object Access via Smart Pointers
[Diagram: a client-side Ref pointer resolves through the Data Service's object cache; on a cache miss, the persistent reference token (carrying object type and storage type) is passed to the Persistency Service, which consults the File Catalog and loads the object]




