
1 Data Storage MICE DAQ Workshop 10th February 2006 Malcolm Ellis & Paul Kyberd

2 Introduction During January, Paul and I met at Brunel to make a first stab at estimates of the data storage and transfer requirements for MICE. Based on some rough estimates of data size and rate (hopefully biased towards the worst case), the aim is to develop a plan to securely maintain all of the data that is taken, as well as to distribute it throughout the collaboration for processing and analysis. Most of this talk consists of slides that were presented by Paul at a recent MICE-UK meeting.

3 Data Flow in MICE [Diagram: MICE Hall → Atlas Tier-0 → Fermilab Tier-1, CERN Tier-1, KEK/Osaka Tier-1]

4 Atlas Centre and MICE Hall

5 Data rate from Detectors
The tracker can conceivably operate in several data-taking modes:
– Discriminators only: allows a higher trigger rate, but gives the smallest event size and data rate.
– Discriminators and ADC/TDC information: required for calibration and in the event of a high RF background rate. Will probably result in a lower trigger rate being achieved, but due to the much larger event size will produce the largest data rate.
A first estimate for the PID detectors indicates that the tracker produces the bulk of the data (when it writes out ADCs and TDCs).

6 Data Rate from the detector
Estimate: tracker data rate if full data readout is required for the tracker: 14 MBytes/s (110 Mbits/s) – all other data negligible.
Data per hour: 50 GigaBytes
Data per day: 1 TeraByte
ISIS efficiency: ~75% over an extended period
Running schedule: unknown – assume LHC-like, 4 months
Data per year: 120 TeraBytes
Data for MICE: <0.5 PetaBytes
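As a cross-check, a short Python sketch that reproduces the slide's rounded figures from the quoted 14 MBytes/s rate; the 75% ISIS efficiency and 4-month running period are the slide's stated assumptions, and decimal (1000-based) units are used for simplicity.

```python
# Back-of-envelope reproduction of the slide's data-volume figures.
tracker_rate_MB_s = 14        # full tracker readout, MBytes/s (~110 Mbits/s)
isis_efficiency   = 0.75      # fraction of scheduled time with beam delivered
running_days      = 4 * 30    # assume an LHC-like 4-month running period

per_hour_GB = tracker_rate_MB_s * 3600 / 1000                 # ~50 GB/hour
per_day_TB  = per_hour_GB * 24 / 1000                         # ~1.2 TB/day before efficiency
per_year_TB = per_day_TB * running_days * isis_efficiency     # ~110 TB/year, i.e. roughly 120 TB

print(f"{per_hour_GB:.0f} GB/hour, {per_day_TB:.1f} TB/day, ~{per_year_TB:.0f} TB/year")
```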

7 Data Rate from Monte Carlo
Unknown: assume Monte Carlo totals are the same as the data.
Data per year: 120 TeraBytes
Data for MICE: <0.5 PetaBytes
Plan to use Monte Carlo pre-startup to test data flow rates.

8 Hardware requirements for the hall
Local storage is needed to buffer data before dispatch to Atlas, and to allow running when the link or the centre is unavailable – data are deleted only when TWO copies exist.
Data per day: 1 TeraByte
Weekend running (Friday to Monday): ~3 TeraBytes
Disk server with 10 EIDE drives, each 500 GigaBytes: 2×5-disk RAID-5 arrays, striped across all 10 disks – 8 TeraBytes.
By the time it comes to purchase the disks, larger-capacity disks will be available.
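The buffer-sizing argument can be written down explicitly. This is a sketch only: the RAID-5 parity overhead (one disk per array) is an assumption added here, not something stated on the slide.

```python
# Sketch of the local-buffer sizing for the MICE Hall disk server.
data_per_day_TB  = 1.0
weekend_days     = 3              # Friday evening to Monday morning
buffer_needed_TB = data_per_day_TB * weekend_days     # ~3 TB must be held locally

disks        = 10
disk_size_TB = 0.5                # 500 GB EIDE drives
arrays       = 2                  # 2 x 5-disk RAID-5 arrays
usable_TB = arrays * (disks // arrays - 1) * disk_size_TB   # one parity disk per array

# ~3 TB needed vs ~4 TB usable (matching the 4 TB quoted on the data-flow slide);
# the 8 TB figure above presumably assumes the larger-capacity disks mentioned there.
print(f"need ~{buffer_needed_TB:.0f} TB, usable ~{usable_TB:.0f} TB")
```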

9 Hardware for link to Tier-0
Tracker data rate if full data readout is required: 14 MBytes/s (110 Mbits/s) peak.
Must allow reverse traffic for remote operation of detector subsystems.
Data transfer may be delayed by unavailability of Atlas storage.
A Gigabit link gives approximately a factor 4 in safety (above ~40% utilisation, collisions become a problem).
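The factor-4 claim follows directly from the 40% rule of thumb; a one-line check, assuming a nominal 1000 Mbit/s link capacity:

```python
link_capacity_Mbit = 1000     # Gigabit link
usable_fraction    = 0.40     # above ~40% utilisation, collisions become a problem
data_rate_Mbit     = 110      # full tracker readout

headroom = link_capacity_Mbit * usable_fraction / data_rate_Mbit
print(f"safety factor ~{headroom:.1f}")   # ~3.6, i.e. roughly a factor 4
```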

10 Hardware at Tier-0
Data per year: 120 TeraBytes; data for MICE: <0.5 PetaBytes.
400 TeraBytes of tape for data – none for Monte Carlo.
150 TeraBytes of disk space (the Monte Carlo is divided between the four centres).
Ability to write tape data at 110 Mbits/s (see data security) – ×2 for Monte Carlo.

11 Hardware links from Tier-0
Tracker data rate if full data readout is required: 14 MBytes/s (110 Mbits/s).
Ability to copy the data to one remote site in real time: 110 Mbits per second off-site average rate.
Higher, unless two Tier-1 sites can read at 110 Mbits per second and relay at the same rate.
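To make the last point concrete, a rough comparison of the Tier-0 outbound bandwidth with and without Tier-1 relaying; the three-Tier-1 count is taken from the data-flow slides, and the calculation is purely illustrative.

```python
data_rate_Mbit = 110
n_tier1_sites  = 3             # Fermilab, CERN, KEK/Osaka

with_relay    = data_rate_Mbit                  # Tier-0 sends one real-time copy;
                                                # the Tier-1s relay the rest between themselves
without_relay = data_rate_Mbit * n_tier1_sites  # Tier-0 feeds every Tier-1 directly

print(f"Tier-0 outbound: {with_relay} Mbit/s with relay, {without_relay} Mbit/s without")
```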

12 Hardware at Tier-1
Data per year: 120 TeraBytes; data for MICE: <0.5 PetaBytes.
150 TeraBytes of disk space (fast storage).
400 TeraBytes of near-line storage.
Ability to receive data at 110 Mbits/s, and for two of them to relay at this rate.

13 Note on tier-1 data links
– CERN can clearly handle the traffic.
– FermiLab can handle the traffic.
– Japan may be a problem; if so, transfer selectively processed data. It has not been established whether they will.

14 Note on data security
MICE data will be considered safe when two distinct copies exist.
During the writing of a single file in the hall there is only one copy; writing to the Tier-0 creates a second copy.
Deletion of the hall copy is allowed once an additional external copy has been created. In normal running this will be a copy at a Tier-1 centre (the first copy will be rotated between the Tier-1s).
If this is not possible we will create immediate tape copies at the Tier-0 (alternative external sites are not considered to be helpful).
Bookkeeping decisions will influence data-taking rules.
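The deletion rule can be captured in a few lines of code. This is a minimal sketch of the bookkeeping logic described above, with hypothetical site names, not the actual MICE bookkeeping implementation.

```python
def hall_copy_may_be_deleted(replica_sites: set) -> bool:
    """The MICE Hall copy may be removed only once two further copies exist:
    one at the Tier-0 and at least one more external copy (normally a Tier-1,
    otherwise an immediate tape copy at the Tier-0)."""
    external = replica_sites - {"mice-hall"}
    has_tier0  = "atlas-tier0-disk" in external
    has_second = bool(external - {"atlas-tier0-disk"})   # Tier-1 copy or Tier-0 tape
    return has_tier0 and has_second

# After writing to Tier-0 disk only, the hall copy must still be kept:
print(hall_copy_may_be_deleted({"mice-hall", "atlas-tier0-disk"}))                    # False
print(hall_copy_may_be_deleted({"mice-hall", "atlas-tier0-disk", "fermilab-tier1"}))  # True
```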

15 Data Flow in MICE
[Diagram] MICE Hall (4 TeraByte disk) connects via a 1 GigaBit link to the Atlas Tier-0 (150 TB disk, 400 TB tape), which has a 1 GigaBit link off-site.
Fermilab Tier-1, CERN Tier-1 and KEK/Osaka Tier-1 each have a 110 MegaBit link, 150 TB of fast storage and 400 TB of slow storage, with a 110 MegaBit relay between the Tier-1s.
Two copies of the data are required at all times.
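The same topology can be written out as a small data structure so the figures can be cross-checked in one place. This is only a restatement of the numbers on the slides (the 400 TB of Tier-0 tape is taken from the "Hardware at Tier-0" slide), not part of any MICE software.

```python
topology = {
    "MICE Hall":        {"disk_TB": 4,   "uplink_Mbit": 1000},
    "Atlas Tier-0":     {"disk_TB": 150, "tape_TB": 400, "uplink_Mbit": 1000},
    "Fermilab Tier-1":  {"fast_TB": 150, "slow_TB": 400, "link_Mbit": 110},
    "CERN Tier-1":      {"fast_TB": 150, "slow_TB": 400, "link_Mbit": 110},
    "KEK/Osaka Tier-1": {"fast_TB": 150, "slow_TB": 400, "link_Mbit": 110},
}
policy = {"min_copies": 2, "tier1_relay_Mbit": 110}

# Total near-line (tape/slow) capacity across the model: 400 + 3*400 = 1600 TB
total_nearline_TB = sum(s.get("slow_TB", 0) + s.get("tape_TB", 0) for s in topology.values())
print(total_nearline_TB)
```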

16 Data Processing We plan to use the LCG for both Monte Carlo production and real data analysis. We have made a slow start: GRID hardware has been installed in the UK for MICE (Sheffield and Brunel) and a VO has been created. Post-Osaka, we plan to ramp up GRID activities through G4MICE simulation studies. Larger productions and transfers of large volumes of Monte Carlo can be used to test the data-taking systems.

17 Local Processing Some sort of farm will be required on the floor (or at least on the RAL site) in order to provide feedback for beam tuning and for online monitoring. An estimate from Tom Roberts is that, during beamline tuning, runs of approximately 10k events will be collected in a minute and will need to be reconstructed (to determine the Twiss parameters). This reconstruction will need to be performed on a comparable timescale to allow quick feedback to the beamline settings.
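A rough feasibility number for the online farm can be read off these figures; the 10-core farm size below is a made-up placeholder for illustration, not a quoted requirement.

```python
events_per_minute = 10_000    # beamline-tuning estimate quoted above
turnaround_s      = 60        # "comparable timescale" to the one-minute run
assumed_cores     = 10        # hypothetical farm size, for illustration only

budget_ms_per_event = 1000.0 * turnaround_s * assumed_cores / events_per_minute
print(f"~{budget_ms_per_event:.0f} ms of CPU available per event")   # ~60 ms/event
```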

18 Next Steps We need to become more definite about event sizes, trigger rates and running periods to confirm the estimates of total data storage and transfer rate requirements. The reconstruction will need to be tested (once it is at a sufficient level for online work) to estimate the online processing needs. Paul has already started on a list of networking/storage requirements for the RAL site. We need to establish local contacts at FNAL, CERN and in Japan for the Tier-1 centres. Alan has started the process of arranging computing resources at FNAL for MICE.