GridKa Summer 2010 T. Kress, G. Quast, A. Scheurer

Presentation transcript:

GridKa Summer 2010 T. Kress, G. Quast, A. Scheurer

- Migration of data from the old to the new dCache instance finished on Nov. 23rd: almost 500'000 files (600 TB) copied
- Data consistency check showed only minor problems
- Import of first LHC collision data without any problems
- Smooth running over Christmas – low job activity, but ~30 TB import of MC from Tier-2s
- Marian Zvada started his job as CMS admin on Feb. 1st, co-financed by GridKa and EKP (BMBF)
- Have taken up frequent „GridKa monitoring shifts“ during the initial phase of LHC data taking
- GridKa was ready for the first 7 TeV collision data; the first re-processing and data distribution campaign was very successful

GridKa took its share of CMS T1 computing:
- data import from T0 (250 TB of 2000 TB total)
- MC import from T2 (200 TB of 2000 TB total)
- data processing (typically 10% of jobs)
- data export to T1 and T2 (900 TB of 8000 TB total)
- since August also MC production at T1s, including GridKa

Some general problems („cooling incident“) and CMS-specific problems (dCache head node, re-configuration of VOMS roles) caused some efficiency loss.

Overall: the first LHC data-taking period was successfully mastered at GridKa! Keeping efficiency high requires careful and steady monitoring; support and fast reaction by GridKa staff are essential.
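The slides do not say how the post-migration consistency check was carried out. As an illustration only, a minimal sketch of such a check, assuming plain-text dumps of the old and new dCache namespaces with one "path size checksum" entry per line (the dump format and file names below are hypothetical, not taken from the slides):

    # Minimal sketch of a migration consistency check, assuming plain-text dumps
    # of the old and new dCache namespaces with lines of the form:
    #   <pnfs-path> <size-in-bytes> <checksum>
    # (dump format and file names are hypothetical)

    def load_dump(path):
        """Read a namespace dump into {pnfs_path: (size, checksum)}."""
        entries = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) != 3:
                    continue  # skip malformed lines
                pnfs_path, size, checksum = parts
                entries[pnfs_path] = (int(size), checksum)
        return entries

    old = load_dump("old_dcache_dump.txt")
    new = load_dump("new_dcache_dump.txt")

    missing = [p for p in old if p not in new]
    mismatched = [p for p in old if p in new and old[p] != new[p]]

    print(f"files in old instance  : {len(old)}")
    print(f"missing in new instance: {len(missing)}")
    print(f"size/checksum mismatch : {len(mismatched)}")

In practice such a comparison would more likely be driven by the dCache namespace and the experiment's file catalogue than by ad-hoc text dumps; the sketch only shows the kind of bookkeeping behind the "only minor problems" statement.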

GridKa Summer 2010 Jobs by German CMS users (D-CMS) play an increasingly significant role at GridKa. Dedicated job slots for the VOMS role DCMS have long been in use; the German CMS computing team proposed and implemented a concept for using the national GridKa resources. New:
- tape and disk storage made available by GridKa
- twiki for German CMS users set up
- improved monitoring: publish data sets available on disk at GridKa
- disk-only space to be used as temporary storage for local and remote job output for user analysis
- tape storage to be accessed by German data admins only, to archive analysis data sets
National GridKa usage in September: ~20% of CMS jobs in September were D-CMS.
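The slides do not show how the "publish data sets available on disk" monitoring is implemented. A minimal sketch, assuming the disk-only area is mounted locally and files follow a CMS-style /store/... layout (the mount point, grouping depth, and output format below are assumptions for illustration, not taken from the slides):

    # Hypothetical sketch: walk a disk-only area, derive a dataset label from the
    # CMS-style /store/... path, and sum up the space per dataset.
    import os
    from collections import defaultdict

    DISK_ONLY_ROOT = "/pnfs/gridka.de/cms/disk-only"  # hypothetical mount point

    def dataset_of(path):
        """Map .../store/data/<era>/<primary>/<tier>/... to a short dataset label."""
        parts = path.split("/")
        try:
            i = parts.index("store")
            return "/".join(parts[i + 1 : i + 5])  # e.g. data/Run2010A/Mu/RECO
        except ValueError:
            return "unknown"

    usage = defaultdict(int)
    for dirpath, _dirnames, filenames in os.walk(DISK_ONLY_ROOT):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                usage[dataset_of(full)] += os.path.getsize(full)
            except OSError:
                pass  # file vanished or unreadable; skip

    # Print a simple per-dataset summary, largest first.
    for dataset, size in sorted(usage.items(), key=lambda kv: -kv[1]):
        print(f"{dataset:60s} {size / 1e12:8.2f} TB")

The resulting table could then be published on the D-CMS twiki page mentioned above, so German users can see which data sets are already resident on disk at GridKa.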