Summary of SC4 Disk-Disk Transfers, LCG MB, April 18 2006, Jamie Shiers, CERN.

Goals of SC4 Disk-Disk Transfers

- Disk-disk Tier0-Tier1 tests at the full nominal rates are scheduled for April.
- The proposed schedule is as follows:
  - April 3rd (Monday) to April 13th (Thursday before Easter): sustain an average daily rate to each Tier1 at or above the full nominal rate. (This is the week of the GDB, HEPiX and LHC OPN meetings in Rome.)
  - Any loss of average rate >= 10% needs to be:
    - accounted for (e.g. explanation / resolution in the operations log), and
    - compensated for by a corresponding increase in rate in the following days (a minimal check of this rule is sketched below).
  - We should continue to run at the same rates unattended over the Easter weekend (14-17 April).
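As a rough illustration of the 10% rule, the following is a minimal Python sketch, not anything used in SC4: the nominal rate and the daily averages are made-up values, and a real check would read the GridView / operations-log numbers instead.

    # Minimal sketch of the ">= 10% loss must be accounted for and compensated" rule.
    # NOMINAL_MBPS and the daily averages are illustrative, not SC4 measurements.
    NOMINAL_MBPS = 1600                      # assumed aggregate T0->T1 target, MB/s

    daily_avg_mbps = {                       # hypothetical daily averages
        "2006-04-03": 1450,
        "2006-04-04": 1620,
        "2006-04-05": 1380,
    }

    backlog = 0.0                            # MB/s-days still to be compensated
    for day in sorted(daily_avg_mbps):
        rate = daily_avg_mbps[day]
        shortfall = NOMINAL_MBPS - rate
        if shortfall >= 0.10 * NOMINAL_MBPS:
            # loss of 10% or more: explain in the ops log and make up on later days
            backlog += shortfall
            print(f"{day}: {rate} MB/s, {shortfall} MB/s below nominal -> needs follow-up")
        else:
            # within 10% of nominal; any excess over nominal eats into the backlog
            backlog = max(0.0, backlog - max(0.0, -shortfall))
            print(f"{day}: {rate} MB/s OK")
    print(f"Remaining backlog: {backlog:.0f} MB/s-days")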

Achieved (Nominal) pp data rates (SC3++)

Disk-disk (SRM) rates into each Tier1 (pp), in MB/s, shown as achieved (nominal); the nominal values are summed in the check after this table:

  ASGC, Taipei                 80 (100)    (have hit 140)
  CNAF, Italy                  200
  PIC, Spain                   >30 (100)   (network constraints)
  IN2P3, Lyon                  200
  GridKA, Germany              200
  RAL, UK                      200 (150)
  BNL, USA                     (200)
  FNAL, USA                    >200 (200)
  TRIUMF, Canada               (50)
  SARA, NL                     (150)
  Nordic Data Grid Facility    (50)

- Meeting or exceeding nominal rate (disk-disk)
- Met target rate for SC3 (disk & tape) re-run
- Missing: rock-solid stability and nominal tape rates
- SC4 T0-T1 throughput goals: nominal rates to disk (April) and tape (July)
- (Still) to come: SRM copy support in FTS; CASTOR2 at remote sites; SLC4 at CERN; network upgrades, etc.
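One small aggregate check, added here for orientation rather than taken from the slides: the nominal per-site rates quoted in the table (taking the single values for CNAF, IN2P3 and GridKA as their nominal rates) add up to the 1.6 GB/s aggregate target out of CERN referred to later in the talk.

    # The nominal disk-disk rates (MB/s) from the table above, summed:
    nominal_mbps = {
        "ASGC": 100, "CNAF": 200, "PIC": 100, "IN2P3": 200, "GridKA": 200,
        "RAL": 150, "BNL": 200, "FNAL": 200, "TRIUMF": 50, "SARA": 150, "NDGF": 50,
    }
    total = sum(nominal_mbps.values())
    print(f"{total} MB/s aggregate out of CERN (~{total / 1000:.1f} GB/s)")  # 1600 MB/s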

SC4 Blog

- 18/04 01:30: Easter Monday has been above 1.4 GB/s for most of the day, averaging about 1.5 GB/s and peaking at 1.6 GB/s right after the CNAF channel was switched on again. The problems of the day were with the CNAF channel, which was off except for a 6-hour spell, and with the BNL channel, which experienced many errors that were not much reflected in the rate until 22:00 GMT. NDGF experimenting with dCache parameters. Maarten
- 17/04 02:30: Easter Sunday was the first full day averaging 1.6 GB/s! All channels were stable except for CNAF, whose LSF queue got full with stuck jobs a few times, requiring admin interventions. Maarten
- 16/04 03:50: A rocky night with…

Effect of FTS DB Cleanup (plot)

24 hours since DB Cleanup (plot)

Easter Sunday: > 1.6 GB/s including DESY; GridView daily-average report (MB/s) for 16/04/2006 (plot)

Site-by-site plots

Site-by-site detail (plots)

Next targets (backlog)

Disk-disk (SRM) target rates into each Tier1 (pp), in MB/s:

  ASGC, Taipei
  CNAF, Italy                  300
  PIC, Spain                   150
  IN2P3, Lyon                  300
  GridKA, Germany              300
  RAL, UK                      225
  BNL, USA
  FNAL, USA
  TRIUMF, Canada
  SARA, NL
  Nordic Data Grid Facility    75

- Need to vary some key parameters (file size, number of streams, etc.) to find the sweet spot / plateau (a scan of this kind is sketched below).
- Needs to be consistent with actual transfers during data taking.
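The parameter variation mentioned above could be organised as a simple scan over file size and stream count; the sketch below only illustrates the bookkeeping. The run_transfer_test() function is a hypothetical stand-in (here a toy formula) for a real measurement that would drive FTS transfers with the given settings and read the resulting rate back from monitoring.

    # Hedged sketch of a parameter scan for the throughput "sweet spot".
    from itertools import product

    def run_transfer_test(filesize_gb, streams):
        """Toy stand-in for a real measurement; returns an illustrative MB/s."""
        # Purely illustrative shape: larger files and more streams help, up to a cap.
        return min(300.0, 40.0 * streams ** 0.5 + 30.0 * filesize_gb ** 0.5)

    def scan(filesizes_gb=(0.5, 1, 2, 5), stream_counts=(1, 5, 10, 20)):
        # Assumed parameter ranges; a real scan should use file sizes and
        # concurrency consistent with actual data-taking transfers.
        results = {(size, n): run_transfer_test(size, n)
                   for size, n in product(filesizes_gb, stream_counts)}
        best = max(results, key=results.get)
        return best, results

    best, results = scan()
    print(f"best (filesize_gb, streams) = {best}: {results[best]:.0f} MB/s")

What matters for the slide's point is the plateau rather than the single best point: production settings should sit where the rate is flat and stable, not at a fragile peak.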

Summary

- We did not sustain a daily average of 1.6 GB/s out of CERN, nor the full nominal rates to all Tier1s, for the full period.
  - Just under 80% of target in week 2.
- Things clearly improved, both since SC3 and during SC4:
  - Some sites are meeting the targets!
  - Some sites are 'within spitting distance': optimisations? bug fixes?
    - See the blog for the CNAF CASTOR2 issues, for example…
  - Some sites still have a way to go…
- "Operations" of the Service Challenges is still very heavy.
  - Special thanks to Maarten Litmaath for working more than double shifts…
- Need more rigour in announcing / handling problems, site reports, convergence with standard operations, etc.
- Remember the "Carminati Maxim": "If it's not there for SC4, it won't be there for the production service in October…" (and vice versa).

Postscript

- Q: Well, 80% of target is pretty good, isn't it?
- A: Yes and no.
  - We are still testing a simple (in theory) case.
  - How will this work under realistic conditions, including other concurrent (and directly related) activities at the Tier0 and Tier1s?
    - See Bernd's tests…
  - We need to be running comfortably at 2-2.5 GB/s day in, day out (see the arithmetic below), and add complexity step by step as things become understood and stable.
  - And anything requiring more than 16 hours of attention a day is not going to work in the long term…
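For scale, and as arithmetic added here rather than a number from the slides, a sustained 2-2.5 GB/s out of CERN corresponds to roughly 170-210 TB moved per day:

    # Daily volume implied by a sustained 2-2.5 GB/s out of CERN.
    for rate_gbps in (2.0, 2.5):
        tb_per_day = rate_gbps * 86400 / 1024   # GB/s * s/day -> TB/day (1 TB = 1024 GB)
        print(f"{rate_gbps} GB/s sustained ~ {tb_per_day:.0f} TB/day")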