3 H1: UK Monte-Carlo GGUS Tickets, 01.01.2010 – 01.04.2011
Tickets per site: UKI-NORTHGRID-MAN-HEP: 2, RAL-LCG2: 6, UKI-LT2-BRUNEL: 7, UKI-NORTHGRID-LANCS-HEP: 3, UKI-SOUTHGRID-BHAM-HEP: 2, UKI-LT2-QMUL: 6, UKI-LT2-RHUL: 4, UKI-NORTHGRID-LIV-HEP: 2, UKI-LT2-UCL-CENTRAL: 1. UK total: 33 (40% of all tickets). Usually aborted jobs or an unavailable SE; most tickets solved by action in the authentication/authorisation area. http://www-h1.desy.de/h1/www/h1mc/mts.html c/o Dave Sankey & Bogdan Lobodzinski
4 H1: UK Monte-Carlo. 25% of all H1 jobs run at 12 UK sites. UK–DESY transfer rate only ~500 kB/s (averaged over 2 months).
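To put that transfer rate in perspective, a back-of-the-envelope sketch (the 1 TB dataset size is an assumption for illustration, not a figure from the slides):

```python
# At the observed ~500 kB/s UK-DESY rate, bulk transfers become very slow.
RATE_B_PER_S = 500e3   # ~500 kB/s, averaged over 2 months (from the slide)
DATASET_B = 1e12       # hypothetical 1 TB of Monte-Carlo output

seconds = DATASET_B / RATE_B_PER_S
days = seconds / 86400
print(f"{days:.1f} days to transfer 1 TB")  # roughly three weeks
```

At this rate, even a modest dataset takes weeks to move, which is why the rate is flagged at all.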
5 Linear Collider. ILC (~500 GeV e+e-): Silicon Detector (SiD) Design Study – tracking and calorimetry; LOI published 31 March 2009. CLIC: a multi-TeV (~3 TeV) e+e- collider based on a two-beam accelerator scheme. c/o Jan Strube
6 Linear Collider: Status. CLIC is in the middle of its conceptual design report (CDR): 5 different benchmarking channels at 3 TeV plus 1 at 500 GeV. The software needed a lot of work to move from 500 GeV to 3 TeV → delays in the start of production. 312 background events overlaid per signal event! Obviously using DIRAC as well as Ganga.
7 Linear Collider: Next Steps
Current storage used:
RAL-SRM: 1,513,322,629,922
KEK-SRM: 27,825,900,796
CERN-SRM: 15,468,811,712,135
IN2P3-SRM: 1,362,953,567,143
Total: 18,372,913,809,996
Background merging step planned to be done at RAL → significant increase expected, but no need to increase the T1 allocation is foreseen.
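A quick consistency check that the per-site figures add up to the quoted total (numbers taken verbatim from the slide, units as given there):

```python
# Per-site storage used, as reported on the slide.
storage = {
    "RAL-SRM":    1_513_322_629_922,
    "KEK-SRM":       27_825_900_796,
    "CERN-SRM":  15_468_811_712_135,
    "IN2P3-SRM":  1_362_953_567_143,
}

total = sum(storage.values())
assert total == 18_372_913_809_996  # matches the quoted total exactly
print(f"total: {total:,}")
```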
8 Linear Collider: Long Term. CLIC CDR due in Fall 2011; production just about to hit the hot phase. ILC detectors to write a Detector Baseline Document, due end 2012. Last time production ran mostly on the SLAC batch farm; this time it is expected to use the Grid exclusively (an expected merger of the DESY ILC VO with the Fermigrid ILC VO would provide relief for the Tier1). Total scale ~1/4 of the CLIC total. Tier1 allocation at the current level should be sufficient, but limited UK manpower in the future (both Jan and Marcel depart).
9 SuperB: Newly born? Re-uses components from PEP-II and BaBar. UK interested in building the silicon tracker: MAPS pixel sensors and modules (~1 m² of silicon). Opportunities also on the accelerator side. UK Grid resources for the computing model & leadership. Approved by the Italian Government 14/15 December 2010. Three sites under consideration: two near Rome (LNF and Tor Vergata) or a green-field site.
10 SuperB: Computing. Computing guesses for 2020 (BaBar extrapolation) at nominal luminosity (15 ab⁻¹/year):
Tape: 100 PB
Disk: 50 PB
CPU: 1700 kHEPSPEC06
"Our efficiency is infinite since we are producing events with no resources."
Now: fast Grid simulation for physics and detector studies, run at QMUL, RAL and Oxford; produced ~20% of the Monte-Carlo. BaBar code converted to run on the Grid, but needs recoding. Ramp up detector simulation over the next 2 years. Want to get accelerator studies into the computing model. c/o Fergus Wilson
11 SNO+ Neutrino Detector. SNO+ is a multi-purpose liquid scintillator neutrino detector – like SNO, but with the heavy water returned and replaced: H₂O in 2012 and liquid scintillator in 2013. More light per interaction means more data, so many TB of data need to be stored and processed. Monte-Carlo is CPU intensive due to the many photons that must be tracked. Looking to use the Grid in Europe: Oxford, Sussex, QMUL, Liverpool and Lisbon. Early days – the VO has just been set up. Probably using T2 sites for MC production and storage. c/o Jeanne Wilson
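To see why the simulation is photon-dominated, a rough cost model helps. Every number below is an illustrative assumption, not a SNO+ figure:

```python
# Liquid scintillators typically emit on the order of 10^4 optical photons
# per MeV deposited, and each photon must be ray-traced through the detector.
PHOTONS_PER_MEV = 1e4       # assumed scintillator light yield
EVENT_ENERGY_MEV = 2.5      # assumed few-MeV physics event
SECONDS_PER_PHOTON = 1e-4   # assumed CPU cost to track one photon

photons = PHOTONS_PER_MEV * EVENT_ENERGY_MEV
cpu_seconds = photons * SECONDS_PER_PHOTON
print(f"~{photons:.0f} photons/event, ~{cpu_seconds:.1f} CPU s/event")
```

The point is the scaling: CPU time grows linearly with the number of photons, which is orders of magnitude larger than the number of charged-particle tracks in a Cherenkov-only simulation.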
12 T2K. Super-Kamiokande: Kamioka mine (1,000 m deep), 50 kton water Cherenkov detector. Long-baseline neutrino oscillation experiment (295 km): measure oscillation of νμ to νe. Neutrino beam generated using the 50 GeV proton synchrotron at the J-PARC facility in Tokai.
13 T2K: A few other problems... via Geoff Pearce
14 T2K: Data Model. 100% of the raw data at the RAL and TRIUMF (IN2P3) sites. Weighted percentages at T2 sites, dependent on resources. FTS server set up at RAL and UK channels working. c/o Ben Still
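The "weighted percentage" policy can be sketched as allocating each site a share of the data proportional to its available storage. The site names and capacities below are hypothetical, not the real T2K allocations:

```python
# Share data across T2 sites in proportion to available storage (assumed).
capacities_tb = {"Site A": 40, "Site B": 10, "Site C": 25}  # hypothetical

total_tb = sum(capacities_tb.values())
shares = {site: cap / total_tb for site, cap in capacities_tb.items()}

for site, frac in shares.items():
    print(f"{site}: {frac:.1%} of the dataset")
```

Proportional allocation keeps every site's disk filling at roughly the same relative rate, so no single T2 becomes the bottleneck.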
15 T2K: Grid Progress. Now starting to use the Grid heavily for processing as well as for distribution and storage of data. A system is in place to distribute new data from the RAL and TRIUMF Tier 1 centres to Tier 2 sites in the UK, Spain (Barcelona and Valencia) and France (IN2P3), done via FTS with channels hosted at RAL. Issues with just using lcg-utils to copy data from Japan. Storage snapshot: RAL Tier 1 ~109 TB; UK Tier 2 sites ~1–40 TB. Soon entering the largest data processing phase – the Grid will play a major role. Team of 3 post-docs working on Grid-related issues: Ben Still (QMUL), Gustav Wikstrom (Geneva) and Jon Perkin (Sheffield). c/o Ben Still
16 NA62 NA62-II (2013-2014): UK – Birmingham, Bristol, Glasgow, Liverpool. Short test end of 2012. Large MC run in Autumn 2011. VO created in UK, enabled at Glasgow. Grid interface (Janusz). c/o Dave B.
17 MICE MICE magnet delays, etc. Rescheduling. Some tests/runs in 2011. Next “Step” in 2012. Custodial copy of raw data (2 copies) at UK T1. Important that this is efficient! c/o Dave Colling
18 PhenoGrid: Health Warning!
1. Unlike large VOs, there is no dedicated support staff; chasing problems takes academic time away from research.
2. No ability to submit team tickets. Tickets appear to get low priority because they are treated as individual-user (not VO) problems. This led to a number of issues in the last 6 months taking a long time to resolve, with the Grid being unusable for much of that period. Hoping this will improve after interaction with Jeremy.
3. Stability of the Grid service is still dismal. Too many potential single points of failure, leading to a total uptime of the Grid well below 50% in our opinion – rather than the anticipated high 90%+.
4. Grid middleware is considered a failure on a scale that would embarrass a government IT project: still inconsistent, unstable and unreliable after ~10 years of development. Fails the "Daily Mail" test.
c/o Peter Richardson
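The arithmetic behind point 3 is plausible: when a job must traverse several services in series, their availabilities multiply. The component count and per-service uptime below are illustrative assumptions, not measurements:

```python
# End-to-end availability of N independent services in series is the
# product of the individual availabilities. Values here are assumed.
uptimes = [0.90] * 7  # seven services, each 90% available (hypothetical)

availability = 1.0
for u in uptimes:
    availability *= u

print(f"end-to-end availability: {availability:.1%}")  # ~47.8%
```

So seven services that each look respectable on their own already push the chain below 50% – consistent with the complaint that many modest point failures compound into an unusable whole.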
20 Not to Forget... MINOS: Nick West and Phillip Rodrigues have left. Talked to Alfons Weber (UK spokesman) – now little UK effort, and limited UK Grid use expected. Some legacy issues over storage (NFS, etc.). Fire in the Soudan mine! Future neutrino experiment (NOvA)? SuperNEMO: Gianfranco Sciacca has left; not sure of status – no update on "health". CDF and DZero: little UK Grid use now (none at T1). I did not consult them.