Database Requirements Updates from LHC Experiments WLCG Grid Deployment Board Meeting CERN, Geneva, Switzerland February 7, 2007 Alexandre Vaniachine (Argonne)

2 Database Services for the Grid
Database access is vital for real data operations:
–Access to calibration data is critical for event data reconstruction
–Other use cases also require distributed database access
WLCG 3D Project: focus on Oracle Streams replication to Tier-1
–ATLAS
–LHCb
FroNTier technology for database data caching at Tier-1 and beyond
–CMS
–ATLAS (evaluating)
Regular updates of DB requirements
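To make the two access paths concrete, here is a minimal sketch of how a job might choose between them: a direct Oracle connection where a Streams-fed replica exists (Tier-0/Tier-1), and HTTP access through a FroNTier server and a local Squid cache everywhere else. This is an illustration only; the site list is taken from later slides, and the connection descriptors and helper function are placeholders, not real experiment configuration.

    # Hypothetical dispatch between the two access technologies described above.
    # Site names come from the Tier-1 lists later in this talk; the URLs and
    # descriptors are placeholders, not real services.
    ORACLE_REPLICA_SITES = {"CERN", "ASGC", "IN2P3", "PIC", "RAL", "CNAF",
                            "FZK", "BNL", "TRIUMF", "SARA", "NDGF"}

    def conditions_source(site: str) -> str:
        """Pick a conditions-data source for a job running at `site`.

        Tier-0/Tier-1 sites hold Oracle replicas kept up to date by Streams,
        so jobs there connect directly; other sites read the same data over
        HTTP through a FroNTier server fronted by a local Squid cache.
        """
        if site in ORACLE_REPLICA_SITES:
            return f"oracle://conditions-replica.example/{site}"
        return "http://frontier.example:8000/Frontier"  # cached by local Squid

    print(conditions_source("RAL"))   # direct Oracle replica at a Tier-1
    print(conditions_source("UCSD"))  # FroNTier/Squid path at a Tier-2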

3 WLCG Collaboration Workshop (Tier0/Tier1/Tier2)

4 Experiments' Session
Separate updates of online and offline requirements
Collect the main resource requirements, pointing out the proposed changes for the next six months
–Present them to the sites at this GDB Meeting

5 Baseline Requirements
Presented at the previous GDB Meeting

9 Rough Estimates of CMS DB Resources, March 2007 through August 2007 (from "CMS Req. Update and Plans", 26 Jan 2007)

Area (CMS contact)                    | Disk maximum (GB)                    | Concurrent users, peak usage  | Transactions, peak usage (Hz)
Online P                              |                                      |                               |
Offline Conditions Tier-0 (CMSR)      | 500 (DB); 100 per Squid (2-3 Squids) | 10 (DB)*; 10 (Squid)          | 10 (DB)*; 10 (Squid)
Offline Conditions Tier-1 (each site) | 100 per Squid (2-3 Squids/site)      | 10 per Squid; >100 all sites  | 10 per Squid; >100 all sites
Offline DBS Tier-0 (CMSR)             | 20                                   | 10 (currently ~2)             | 10 (currently ~5)
* Including online-to-offline transfer
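Combining the per-Squid figures in this table with the Tier-1 list on slide 11 (seven LCG Tier-1s plus FNAL) gives a rough feel for the total cache disk being asked of the Tier-1s. The tally below is not from the CMS slides; it simply assumes the upper end of the quoted 2-3 Squids per site.

    # Rough aggregate of the CMS Tier-1 Squid cache request (illustrative only).
    TIER1_SITES = ["ASCC", "IN2P3", "PIC", "RAL", "CNAF", "FZK", "CERN", "FNAL"]
    SQUIDS_PER_SITE = 3        # upper end of the "2-3 Squids/site" figure
    CACHE_GB_PER_SQUID = 100   # disk maximum per Squid from the table

    total_gb = len(TIER1_SITES) * SQUIDS_PER_SITE * CACHE_GB_PER_SQUID
    print(f"{len(TIER1_SITES)} sites x {SQUIDS_PER_SITE} Squids x "
          f"{CACHE_GB_PER_SQUID} GB = {total_gb} GB of cache disk")  # 2400 GB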

10 Production Hardware Needs (CMS)
Offline Database:
–The RAC (CMSR) is sufficient and support has been very good. The planned upgrade to 8 nodes will be used.
Frontier: cmsfrontier1,2,3
–System is now in place with 3 machines, with load balancing and failover via DNS Round Robin.
–Limited by 100 Mbps Ethernet. Will install 2 additional Squid servers that are well connected, network-wise, to processing resources at CERN, such as the CMS T0 (prompt reco) farm, Grid CEs at CERN, et cetera.
Dataset Bookkeeping Service (DBS):
–Lxgate40 is the current production server.
–Will request an additional server to load balance with lxgate40.
–Will request an additional server dedicated to the Tier-0 farm.
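The DNS Round Robin scheme mentioned above works because one alias resolves to the addresses of all three machines, and a client that walks the list gets both load spreading and failover; the 100 Mbps Ethernet limit corresponds to roughly 12 MB/s of payload per server, which is why better-connected Squids are planned. Below is a minimal client-side sketch using only the Python standard library; the host name, port and path are placeholders, not the actual cmsfrontier setup.

    import socket
    import urllib.request

    def fetch_with_dns_round_robin(hostname, port, path, timeout=10):
        """Resolve `hostname` to all of its addresses and try each in turn.

        DNS Round Robin rotates the order of the returned records, which
        spreads load across the servers; looping over the list on error
        gives simple failover without any extra infrastructure.
        """
        addresses = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
        last_error = None
        for _family, _type, _proto, _name, sockaddr in addresses:
            url = f"http://{sockaddr[0]}:{port}{path}"
            try:
                with urllib.request.urlopen(url, timeout=timeout) as response:
                    return response.read()
            except OSError as err:      # refused, timed out, ... try next server
                last_error = err
        raise RuntimeError(f"all servers behind {hostname} failed") from last_error

    # Hypothetical usage; 'frontier.example.org' stands in for the real alias:
    # payload = fetch_with_dns_round_robin("frontier.example.org", 8000, "/Frontier")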

11 CMS Squid Deployment
Squids deployed at 30 Tier-1 and Tier-2 sites.
–Tier 1: LCG: ASCC, IN2P3, PIC, RAL, CNAF, FZK, CERN; OSG: FNAL
–Tier 2: LCG: Belgium, Legnaro, Bari, CIEMAT, DESY, Estonia, CSCS, Pisa, Taiwan, GRIF, Rome; OSG: UCSD, Purdue, Caltech, Nebraska, Florida, Wisconsin, MIT
Will install at additional Tier-2 sites, and begin installing at Tier-3 sites as needed.
–Possibly 20 or 30 additional sites by August
–Including: Budapest, Imperial, NCU/NTU, ITEP, SINP, JINR, IHEP, KNU, …

12 ATLAS Schedule
Preparation for the ATLAS Calibration Data Challenge (CDC) in spring:
–Install SRM 2.2 and commission: March/mid-April
–3D services for CDC production: February
Readiness for the final phase of the ATLAS Final Dress Rehearsal (FDR) in September-October:
–3D services stress testing: June/July
–FDR initial phase: July/August
–Max out T1 3D capacities: August/September
  (most T1s now have only two nodes for 3D; the ATLAS DB will need several TB per year)
–3D production running: October-December

13 Request to Max Out T1 3D Capacities
We have to max out (fully utilize) the available resources:
We expect to be memory bound, not CPU bound
–since most jobs request the same data
Max-out request:
There are six Oracle licenses per T1 (for ATLAS use)
–If you have not used all your Oracle licenses, use them: deploy the third node
If your Oracle nodes do not have 4 GB of memory
–Upgrade to 4 GB of memory
If your Oracle nodes do not have 64-bit Linux
–Upgrade to 64-bit Linux, in preparation for memory upgrades beyond 4 GB
Since this is a rather modest upgrade request, it can be accomplished by August/September
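The memory-bound expectation follows from the access pattern stated above: most jobs keep asking for the same data, so once the Oracle buffer cache is large enough, almost every read is served from memory instead of disk. The toy simulation below is not ATLAS code and uses made-up sizes; it only illustrates how the hit rate grows with cache (i.e. memory) size for a skewed, repeated access pattern.

    import random
    from collections import OrderedDict

    def hit_rate(cache_blocks, working_set_blocks, n_requests=100_000, seed=1):
        """Fraction of requests served from an LRU cache when many clients
        repeatedly ask for the same (popularity-skewed) database blocks."""
        rng = random.Random(seed)
        cache = OrderedDict()              # block id -> None, ordered by recency
        hits = 0
        for _ in range(n_requests):
            # Skewed popularity: low block ids are requested far more often.
            block = int(working_set_blocks * rng.random() ** 3)
            if block in cache:
                hits += 1
                cache.move_to_end(block)
            else:
                cache[block] = None
                if len(cache) > cache_blocks:
                    cache.popitem(last=False)   # evict the least recently used
        return hits / n_requests

    # Adding memory (a bigger cache) converts disk reads into memory hits:
    for blocks in (1_000, 2_000, 4_000):
        print(f"cache = {blocks:>5} blocks  hit rate = {hit_rate(blocks, 10_000):.2f}")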

14 Summary of ATLAS Requests for Resources
Readiness for the Calibration Data Challenge (by March):
–Upgrade 3D services to production level
Readiness for the Final Dress Rehearsal: max-out request (by August/September):
–Deploy the third node for ATLAS (for sites without a third node)
–Upgrade to 4 GB of memory (for sites with less than 4 GB of memory)
–Upgrade to 64-bit Linux (for sites without 64-bit Linux), in preparation for memory upgrades beyond 4 GB
Readiness for real data: disk space upgrade (by December):
–T1 sites with less than 1 TB of Oracle disk storage for the ATLAS user tablespace are requested to double their capacities

15 ATLAS Disk Schedule for T1 Databases
Gancho and Florbela made detailed estimates for an ATLAS nominal year (including index overhead):
–COOL storage: 0.8 TB per year
–TAGS storage: 6 TB per year (200 Hz during active data taking, 2·10^9 events per year)
We need to ramp up T1 Oracle capacities to these numbers
–The priority is storage for COOL; ATLAS TAGS can be stored outside of Oracle, in ROOT files
Ramping up to ATLAS nominal running conditions:
–2008 will be 40% of a nominal year (starting in June): 0.2 TB COOL + ~2.4 TB TAGS
–2009 will be 60% of a nominal year: 0.5 TB COOL + ~3.6 TB TAGS
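A quick cross-check of the nominal-year numbers quoted above and of the approximate TAGS volumes for the 40% and 60% ramp-up years; the per-event TAGS size and the ramp-up TAGS figures are derived here, not quoted on the slide.

    # Back-of-the-envelope check of the ATLAS nominal-year numbers quoted above.
    RATE_HZ = 200                  # event rate during active data taking
    EVENTS_PER_YEAR = 2e9          # quoted nominal yearly event count
    TAGS_TB_PER_YEAR = 6.0         # quoted TAGS volume (including index overhead)

    active_seconds = EVENTS_PER_YEAR / RATE_HZ                      # 1e7 s
    tags_kb_per_event = TAGS_TB_PER_YEAR * 1e9 / EVENTS_PER_YEAR    # 3 kB/event

    print(f"active data taking: {active_seconds:.0e} s (~{active_seconds / 86400:.0f} days)")
    print(f"TAGS size: ~{tags_kb_per_event:.0f} kB per event")

    # Scaling the nominal TAGS volume by the quoted ramp-up fractions:
    for year, fraction in (("2008", 0.40), ("2009", 0.60)):
        print(f"{year}: ~{fraction * TAGS_TB_PER_YEAR:.1f} TB TAGS "
              f"({fraction:.0%} of a nominal year)")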

16 Baseline Requirements Update Towards LHC Turn-On
ATLAS T1
–3-node DB server, 0.3 TB usable DB space, ramp-up to 0.6 TB
LHCb T1
–2-node DB server, 0.1 TB usable DB space, ramp-up to 0.2 TB
CMS T1+T2
–2-3 Squid server nodes, 0.1 TB cache space per node
Towards LHC turn-on:
–ATLAS and LHCb: for the LHC turn-on, doubling of the storage requested for 2006
–CMS: currently no major upgrades requested
And beyond:
–ATLAS first LHC operations ramp-up estimate for 2008: 1.6 TB for Oracle user tablespace (not including site-specific storage for backup and mirroring)

Backup Slides

18 Summary of Online Requirements
Not covered in this (offline) summary

19 ATLAS Disk Storage Request
Current capacities for ATLAS (collected by Gancho and Florbela):

T1 site | Disks (TB) | Nodes | CPU/node | Memory (GB) | 64-bit | Comment
IN2P3   | 0.5        | 2     |          | 4           | yes    | Shared with LHCb
CNAF    | 1.0        | 2     |          | 4           |        |
RAL     | 0.4        | 2     |          | 2           |        |
Gridka  | 0.6        | 2     |          | 4           |        |
BNL     | 0.4        | 2     |          | 4           |        | ATLAS site
ASGC    | 1.6        | 2     |          | 4           | yes    | Expandable to 10 TB
TRIUMF  | 3.1        | 2     |          | 2           | yes    |
NDGF    |            |       |          |             |        | 3D Phase II T1 site
PIC     |            |       |          |             |        | 3D Phase II T1 site
SARA    |            |       |          |             |        | 3D Phase II T1 site

T1 sites with Oracle disk storage less than 1 TB are requested to double their capacities by December
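As a small sketch of the December request applied to these numbers: sites currently below 1 TB of ATLAS Oracle disk double their capacity, the rest stay as they are. The disk figures are transcribed from the table above; the 3D Phase II sites without a quoted figure are left out.

    # ATLAS Oracle disk per Tier-1 site (TB), transcribed from the table above.
    current_tb = {"IN2P3": 0.5, "CNAF": 1.0, "RAL": 0.4, "Gridka": 0.6,
                  "BNL": 0.4, "ASGC": 1.6, "TRIUMF": 3.1}

    # Request: sites with less than 1 TB double their capacity by December.
    target_tb = {site: (2 * tb if tb < 1.0 else tb) for site, tb in current_tb.items()}

    for site in sorted(current_tb):
        action = "double" if current_tb[site] < 1.0 else "keep"
        print(f"{site:>7}: {current_tb[site]:.1f} TB -> {target_tb[site]:.1f} TB ({action})")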

20 ATLAS Requirements for FroNTier
We can deal with FroNTier cache consistency largely by policies
–e.g. only accessing the data by a frozen CondDB tag, so the cached result for this query will not go stale
–and/or by having the cached data expire after a certain time, as shown by CMS in their FroNTier presentation this week, which is quite encouraging
If ATLAS chooses the FroNTier alternative, the major issue will be to get the T2 sites to deploy Squid caches
–in principle, this is straightforward, as shown by the CMS deployment at ~20 T2 sites now
–but of course it requires some hardware resources and some support at the T2s
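A minimal sketch of the two consistency policies described above, assuming nothing about the real FroNTier client or cache: query results are keyed by the CondDB tag (so a frozen tag can never return stale data) and every cached entry also expires after a fixed time-to-live. The folder and tag names in the usage comment are hypothetical.

    import time

    CACHE_TTL_SECONDS = 3600          # illustrative expiry, not an experiment setting
    _cache = {}                       # (folder, tag, iov) -> (timestamp, payload)

    def cached_conditions(folder, tag, iov, fetch):
        """Return conditions data, reusing a cached copy while it is still valid.

        Because the cache key contains the CondDB tag, a query against a frozen
        tag always maps to the same immutable result; the TTL handles data
        published under tags that are still being updated.
        """
        key = (folder, tag, iov)
        now = time.time()
        if key in _cache:
            stored_at, payload = _cache[key]
            if now - stored_at < CACHE_TTL_SECONDS:
                return payload                      # served from cache
        payload = fetch(folder, tag, iov)           # e.g. an HTTP call to FroNTier
        _cache[key] = (now, payload)
        return payload

    # Hypothetical usage with a stand-in fetch function:
    # data = cached_conditions("/CALO/Calib", "CALO-Calib-Frozen-01", run_number,
    #                          fetch=my_frontier_query)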