Data Management and Database Framework for the MICE Experiment


Data Management and Database Framework for the MICE Experiment Janusz Martyniak, Imperial College London Henry Nebrensky, Brunel University Durga Rajaram, IIT Chicago

Muon Ionization Cooling Experiment Ionization cooling is a technique for reducing the emittance of a charged-particle beam. The total momentum of each particle is reduced in absorbers and then restored by re-acceleration, which increases only the longitudinal momentum component, thus “cooling” the beam. The picture below shows the MICE Step IV setup, with high-precision scintillating-fibre trackers in a 4 T field. 12/10/2016 Janusz Martyniak, Imperial College London

MICE Data Handling The original Raw Data Mover is described in “MICE Data Handling on the Grid”, J. Phys.: Conf. Ser. 513 (2014) 032063, Proceedings of CHEP2013, http://iopscience.iop.org/article/10.1088/1742-6596/513/3/032063. Briefly: the data files (tarballs) are verified for integrity and copied to permanent tape storage at RAL. The system is written in Python and has been upgraded to use the gfal2 API (rather than the lcg tools), which is more ‘pythonic’ and robust. We use a hardware token to store certificates. Semaphores indicate the status of the mover, and Nagios is used to monitor the backlog. Files are replicated to Imperial and Brunel.
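The slide mentions semaphores that expose the mover's status but does not show how they work. A minimal sketch of one common pattern, status flag files on disk, is below; the state names and the `MOVER_*` file naming are assumptions for illustration, not the real mover's conventions:

```python
from pathlib import Path

# Hypothetical mover states; the real mover's semaphore names are not
# given on the slide.
STATES = ("idle", "copying", "verifying", "error")

def set_status(state_dir: Path, state: str) -> None:
    """Record the mover's current state as a single semaphore file."""
    if state not in STATES:
        raise ValueError(f"unknown state: {state}")
    for old in state_dir.glob("MOVER_*"):
        old.unlink()                      # drop the previous semaphore
    (state_dir / f"MOVER_{state.upper()}").touch()

def get_status(state_dir: Path) -> str:
    """Read back the state, defaulting to 'idle' if no semaphore exists."""
    flags = list(state_dir.glob("MOVER_*"))
    return flags[0].name.removeprefix("MOVER_").lower() if flags else "idle"
```

A monitoring system such as Nagios can then check for the presence or age of these flag files without touching the mover process itself.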

MICE Data Handling - Reconstruction The raw data from the DAQ (DATE Event Builder, in the Control Room) are written to disk as a series of tarballs, one per run (07012.tar …… 07099.tar), each containing checksum info about its contents. As soon as this happens the MAUS off-line reconstruction starts. On completion the file is moved to permanent Castor tape storage at RAL. [Slide diagram: data flow between the Control Room (DATE Event Builder, MAUS off-line reconstruction process), the Python GFAL2 API mover, Castor tape (RAL), FTS, the Grid Download Agent, a remote SE (e.g. Imperial), and a WWW status page.]
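Each run tarball carries checksum information about its own contents, which is what the integrity verification relies on. The exact manifest format is not given on the slide, so this sketch assumes a simple text manifest named `checksums.md5` inside the tarball, one `<md5hex>  <member-name>` line per file; both the name and the format are assumptions:

```python
import hashlib
import tarfile

MANIFEST = "checksums.md5"  # assumed manifest file name inside each run tarball

def verify_tarball(path: str) -> bool:
    """Check every member listed in the bundled manifest against its MD5.

    Assumed manifest format: one "<md5hex>  <member-name>" line per file.
    """
    with tarfile.open(path, "r") as tar:
        text = tar.extractfile(MANIFEST).read().decode()
        for line in text.splitlines():
            if not line.strip():
                continue
            md5, name = line.split(None, 1)
            data = tar.extractfile(name.strip()).read()
            if hashlib.md5(data).hexdigest() != md5:
                return False
    return True
```

Running such a check both before upload and after the Castor copy is what lets the mover promise that the archived file matches what the DAQ wrote.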

MICE Configuration Database Contains information about the experimental conditions, such as the geometry description, magnet currents, absorber materials, cabling etc. This is vital to ensure accurate simulation and reconstruction. We have set up a fully replicated, hot-standby database system: a firewall-protected read-write master situated in the MICE Control Room, and read-only slave(s) running at RAL. We use PostgreSQL as the DBMS. Access to the DB is provided by a JAX-WS Web Service layer, which gives platform- and programming-language-independent access to the data.

MICE CDB (cont.) We provide the following CDB APIs: C, to allow the Run Control system (built on EPICS) to read detector parameters (such as magnet currents) from the hardware and store them in the DB; it uses gSOAP-based clients and provides a selection of read-write operations used on a run-by-run basis, e.g. magnet currents and absorber settings. C++, used natively by the MAUS reconstruction system via the gSOAP C++ bindings; it provides a selection of read-only operations to access detector cabling and calibration details. Python, to store certain predefined conditions from the Control Room (such as geometries), and also widely used for analysis; it uses suds and provides the most complete set of operations, both for writing and reading.
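All of these APIs ultimately exchange SOAP messages with the JAX-WS service layer. As a rough, stdlib-only illustration of what such a call carries on the wire, the sketch below builds a SOAP 1.1 request envelope; the operation name `getCabling`, its parameter, and the service namespace are hypothetical, not the real CDB interface:

```python
from xml.etree import ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
CDB_NS = "http://cdb.mice.example/"  # hypothetical service namespace

def build_request(operation: str, **params: str) -> str:
    """Build a SOAP 1.1 envelope for a (hypothetical) CDB operation."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{CDB_NS}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, name).text = value
    return ET.tostring(env, encoding="unicode")
```

In practice a real client never writes this plumbing by hand: suds (Python) and gSOAP (C/C++) generate it from the service's WSDL, which is why one database can serve C, C++, Python and Java callers uniformly.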

MICE CDB (cont.) Java, a selection of read-only operations, predominantly used to contact the CDB server from a Google Web Toolkit based viewer; only the slave(s) can be contacted this way. For security reasons database records are never deleted: they can only be updated (e.g. adding end-of-run information to an existing run), or a new version of a record can be added (e.g. a new magnet-setting template). The CDB API is periodically released; it is bundled with MAUS as a third-party client, but can also be installed stand-alone.
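The never-delete policy amounts to an append-only, versioned store: updates supersede rather than overwrite, so the full history stays queryable. A toy model of the idea (class and key names are invented for illustration, not the real CDB schema):

```python
from collections import defaultdict

class AppendOnlyStore:
    """Toy model of the CDB rule: records are never deleted, only superseded."""

    def __init__(self):
        self._versions = defaultdict(list)  # key -> versions, oldest first

    def put(self, key: str, value: dict) -> int:
        """Add a new version of a record; return its version number."""
        self._versions[key].append(dict(value))
        return len(self._versions[key])

    def get(self, key: str, version=None) -> dict:
        """Fetch the latest version by default, or any historical one."""
        history = self._versions[key]
        return history[-1 if version is None else version - 1]

    def history(self, key: str) -> list:
        """Full audit trail; nothing is ever removed."""
        return [dict(v) for v in self._versions[key]]
```

Reading an old version reproduces exactly the conditions a past run was taken or reconstructed with, which is the point of the policy.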

CDB Replication and Recovery Running PostgreSQL in hot-standby mode allows a slave to be promoted to master in the event of the master becoming unusable. This is a built-in feature of PostgreSQL servers. We use the following procedure at MICE: 1. Reconfigure (promote) a slave to become the master. 2. Start a (firewalled!) read-write Web Service, so the Control Room can now write to the new master; the new master still serves the public read-only interface as before. 3. Sync the remaining slaves, if applicable. 4. When the old master is fixed, swap back (and stop the read-write access started in 2).
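The slide does not show the actual server configuration. For orientation, a hot-standby pair of this era (PostgreSQL 9.x) would be set up roughly as follows; host names, paths and the replication user are placeholders, not the real MICE settings:

```
# postgresql.conf on the master (Control Room)
wal_level = hot_standby
max_wal_senders = 3

# postgresql.conf on each slave (RAL)
hot_standby = on

# recovery.conf on each slave
standby_mode = 'on'
primary_conninfo = 'host=master.example port=5432 user=replicator'
trigger_file = '/var/lib/pgsql/promote'   # touching this file promotes the slave
```

Creating the trigger file (or running `pg_ctl promote`) performs step 1 of the procedure above; the remaining steps are MICE-specific service reconfiguration around the database.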

CDB Viewer [slide: screenshot of the CDB viewer]

Summary We presented the updated raw and reconstructed data file movement systems: written in Python, using the EMI gfal2 API for secure transfers; using a proxy created from a certificate stored on a hardware token; and using an FTS-based download agent to distribute data to other Tier-2 Grid sites. We described the MICE Configuration Database: a PostgreSQL DBMS, with the master in the Control Room and fully replicated hot-standby slaves elsewhere; a Web Service layer provided by JAX-WS deployed on Tomcat; Java clients, Python clients (suds) and C/C++ clients (gSOAP); and a Google Web Toolkit based CDB viewer.