CHEP – Mumbai, February 2006. The LCG Service Challenges: Focus on SC3 Re-run; Outlook for 2006. Jamie Shiers, LCG Service Manager.

Abstract  The LCG Service Challenges are aimed at achieving the goal of a production quality world-wide Grid that meets the requirements of the LHC experiments in terms of functionality and scale.  This talk highlights the main goals of the Service Challenge programme, significant milestones as well as the key services that have been validated in production by the LHC experiments.  The LCG Service Challenge programme currently involves both the 4 LHC experiments as well as many sites, including the Tier0, all Tier1s as well as a number of key Tier2s, allowing all primary data flows to be demonstrated.  The functionality so far addresses all primary offline Use Cases of the experiments except for analysis, the latter being addressed in the final challenge - scheduled to run from April until September prior to delivery of the full production Worldwide LHC Computing Service.

Timeline

January: SC3 disk repeat – nominal rates capped at 150 MB/s
February: CHEP workshop – T1-T1 Use Cases; SC3 disk – tape repeat (50 MB/s, 5 drives)
March: Detailed plan for SC4 service agreed (middleware + data management service enhancements)
April: SC4 disk – disk (nominal) and disk – tape (reduced) throughput tests
May: Deployment of new middleware release and services across sites
June: Start of SC4 production; tests by experiments of 'T1 Use Cases'; 'Tier2 workshop' – identification of key Use Cases and Milestones for T2s
July: Tape throughput tests at full nominal rates!
August: T2 Milestones – debugging of tape results if needed
September: LHCC review – rerun of tape tests if required?
October: WLCG Service officially opened. Capacity continues to build up.
November: 1st WLCG 'conference'; all sites have network / tape hardware in production(?)
December: 'Final' service / middleware review leading to early 2007 upgrades for LHC data taking??
O/S upgrade? Sometime before April 2007!
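For scale, a back-of-the-envelope figure derived from the 150 MB/s cap quoted above (my arithmetic, not an additional number from the slides): the January disk re-run corresponds roughly to

$$
150~\mathrm{MB/s} \times 86\,400~\mathrm{s/day} \approx 12.96~\mathrm{TB/day}, \qquad 7~\mathrm{days} \times 12.96~\mathrm{TB/day} \approx 90~\mathrm{TB}
$$

moved from the Tier0 to each participating Tier1 over the one-week window.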

SC3 -> SC4 Schedule

February 2006
- Rerun of SC3 disk – disk transfers (max 150 MB/s x 7 days)
- Transfers with FTD, either triggered via AliEn jobs or scheduled
- T0 -> T1 (CCIN2P3, CNAF, GridKa, RAL)

March 2006
- T0-T1 "loop-back" tests at 2 x nominal rate (CERN)
- Run bulk T1, T2 (simulation + reconstruction jobs) and send data back to CERN
- (We get ready with

April 2006
- T0-T1 disk-disk (nominal rates), disk-tape (50-75 MB/s)
- First push out (T0 -> T1) of simulated data, reconstruction at T1
- (First tests with

July 2006
- T0-T1 disk-tape (nominal rates)
- T1-T1, T1-T2, T2-T1 and other rates TBD according to CTDRs
- Second chance to push out the data
- Reconstruction at CERN and remote centres

September 2006
- Scheduled analysis challenge
- Unscheduled challenge (target T2s?)

June "Tier2" Workshop

- Focus on analysis Use Cases and Tier2s in particular
- List of Tier2s reasonably well established
  - Try to attract as many as possible!
  - Need experiments to help broadcast information on this!
- Some 20+ already active – target of 40 by September 2006!
  - Still many to bring up to speed – re-use experience of existing sites!
- Important to understand key data flows
  - How experiments will decide which data goes where
  - Where does a Tier2 archive its MC data?
  - Where does it download the relevant Analysis data?
  - The models have evolved significantly over the past year!
- Two-three day workshop followed by 1-2 days of tutorials

Bringing remaining sites into play; identifying remaining Use Cases

Track on Distributed Data Analysis

- Analysis Systems: DIAL, Ganga, PROOF, CRAB
- Submission Systems: ProdSys, BOSS, DIRAC, PANDA
- Bookkeeping Systems: JobMon, BOSS, BbK
- Monitoring Systems: DashBoard, JobMon, BOSS, MonALISA
- Data Access Systems: xrootd, SAM
- Miscellaneous: Go4, ARDA, Grid Simulations, AJAX Analysis

Data Analysis Systems

- All systems support, or plan to support, parallelism
- Except for PROOF, all systems achieve parallelism via job splitting and serial batch submission (job-level parallelism; see the sketch after this slide)
- The different analysis systems presented, categorized by experiment:

  ALICE: PROOF
  ATLAS: DIAL, GANGA
  CMS:   CRAB, PROOF
  LHCb:  GANGA
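To illustrate the job-level parallelism referred to above, here is a minimal, generic sketch: an input file list is split round-robin into N sub-lists, each of which would then be submitted as an independent serial batch job. The file names and job count are hypothetical, and no API of the systems listed on the slide is used.

  // Minimal sketch of job-level parallelism: split an input file list into
  // N sub-lists, one per batch job. File names and N are illustrative only.
  #include <fstream>
  #include <iostream>
  #include <string>
  #include <vector>

  int main() {
      const int nJobs = 10;                       // hypothetical number of jobs
      std::vector<std::string> files;
      std::ifstream in("input_files.txt");        // hypothetical master file list
      for (std::string line; std::getline(in, line); )
          if (!line.empty()) files.push_back(line);

      // Round-robin assignment of files to jobs; each sub-list becomes the
      // input of one serial batch job.
      for (int j = 0; j < nJobs; ++j) {
          std::ofstream out("job_" + std::to_string(j) + ".list");
          for (std::size_t i = j; i < files.size(); i += nJobs)
              out << files[i] << "\n";
      }
      std::cout << "Wrote " << nJobs << " job file lists\n";
      return 0;
  }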

Classical Parallel Data Analysis

[Diagram: an analysis query (myAna.C) is split "manually" into files/jobs, submitted through a queue manager to a batch farm reading from storage; the job outputs are registered in a catalog and merged by hand into the final analysis.]

- "Static" use of resources
- Jobs frozen, 1 job / CPU
- "Manual" splitting, merging
- Limited monitoring (end of single job)
- Possible large tail effects

From "The PROOF System" by Ganis [98]
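The "manual merging" step in this classical model is commonly done with ROOT's TFileMerger (or the hadd utility). The sketch below assumes hypothetical per-job output file names matching the splitting sketch above; it is not code from the presentation.

  // Sketch of the manual merge step: combine per-job output files into one.
  // Requires ROOT; file names are placeholders.
  #include "TFileMerger.h"
  #include "TString.h"

  void mergeOutputs() {
      TFileMerger merger;
      merger.OutputFile("final_analysis.root");            // hypothetical merged output
      for (int j = 0; j < 10; ++j)
          merger.AddFile(Form("job_%d_output.root", j));   // hypothetical per-job outputs
      merger.Merge();                                      // histograms/trees are combined
  }

Run as a ROOT macro, e.g. root -l -b -q mergeOutputs.C.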

Interactive Parallel Data Analysis

[Diagram: the query (data file list + myAna.C) goes to the PROOF MASTER on an interactive farm; a scheduler distributes the work across workers reading from storage via a catalog, and merged outputs and merged feedback return to the user in real time.]

- Farm perceived as extension of local PC
- More dynamic use of resources
- Automated splitting and merging
- Real-time feedback
- Much better control of tail effects

From "The PROOF System" by Ganis [98]
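For comparison, running the same selector through PROOF takes only a few lines of ROOT, with splitting, scheduling, merging and feedback handled by the master and workers. This is a sketch: the master host, storage URL and tree name below are placeholders, not taken from the slides.

  // Minimal PROOF usage sketch: the same myAna selector, but splitting,
  // scheduling and merging are handled by the PROOF master and workers.
  // Host, file and tree names are placeholders.
  #include "TProof.h"
  #include "TChain.h"

  void runProof() {
      TProof::Open("proofmaster.example.org");   // hypothetical PROOF master
      TChain chain("Events");                    // hypothetical tree name
      chain.Add("root://storage.example.org//data/run1_*.root");
      chain.SetProof();                          // route Process() through PROOF
      chain.Process("myAna.C+");                 // selector compiled and shipped to the workers
  }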