DESY at the LHC
Klaus Mönig, on behalf of the ATLAS, CMS and the Grid/Tier2 communities
PRC meeting, October 2006

A bit of history
In spring 2005 DESY decided to participate in the LHC experimental programme
During summer 2005 a group led by J. Mnich evaluated the possibilities
Both experiments (ATLAS and CMS) welcomed the participation of DESY
In November 2005 DESY decided to join both ATLAS and CMS

General considerations
DESY will contribute a Tier-2 centre for each experiment
No contributions to detector hardware are foreseen for the moment
Participating DESY physicists are meant to work 50% on LHC and 50% on other projects
The groups start relatively small and will grow when HERA stops next year
We profit from close collaboration with the DESY theory group (already 2 workshops: BSM physics (4/06) and SM physics (tomorrow))

The LHC schedule
Machine closed: August 2007
Collisions at 900 GeV: a few days in December 2007
Open access: January to March 2008
Collisions at 14 TeV: June 2008
Possible luminosity by end of 2008: L = … cm^-2 s^-1

ATLAS
The DESY ATLAS group consists at present of 7 physicists and 4 PhD students
Close collaboration with
– IT Hamburg and DV Zeuthen
– Uni Hamburg (1 junior professor)
– Humboldt University Berlin
Our tasks in ATLAS are usually common projects of DESY and the university groups
Already 2 plenary talks at the ATLAS-D meeting in Heidelberg, September 2006

Tasks in ATLAS
Trigger
– Trigger configuration
– Simulation of the L1 central trigger processor
– Trigger monitoring
– Event filter for minimum bias events
ATLAS software
– Fast shower simulation
– Coordination of event graphics
– Contributions to core software

Infrastructure
DESY will provide a rack for the ATLAS event filter (~ … €)
A test facility (5 PCs plus disks) to test trigger software is being set up in Zeuthen
In Hamburg an offline facility (1 file server plus 2 nodes) is running
ATLAS offline software is running in Hamburg and Zeuthen

The ATLAS trigger
3 trigger levels
– 1st level is hardware
– 2nd and 3rd level (event filter), together the high-level trigger (HLT), are software
DESY is mainly engaged in the HLT
However, some 1st level items are also treated

Trigger configuration
DESY and Uni Hamburg contribute to the ATLAS trigger configuration
Recent example: automatic calculation of prescale factors during a fill (nice synergy with H1 experience)
Needs to be integrated with the ATLAS online software
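As an illustration of the idea behind automatic prescale recalculation during a fill, the sketch below (not ATLAS code; the trigger names, rates and the assumption that raw rates scale linearly with luminosity are all illustrative) picks the smallest integer prescale that keeps each trigger item within its allocated output rate as the luminosity decays:

```python
# Minimal sketch (not ATLAS code): recompute prescale factors as the
# instantaneous luminosity decays during a fill, so that each trigger item
# keeps roughly its allocated share of the output bandwidth.
from dataclasses import dataclass
import math

@dataclass
class TriggerItem:
    name: str
    rate_at_ref_lumi: float   # unprescaled rate [Hz] at the reference luminosity (illustrative)
    allocated_rate: float     # target output rate [Hz] after prescaling (illustrative)

def prescale(item: TriggerItem, lumi: float, ref_lumi: float) -> int:
    """Smallest integer prescale keeping the item at or below its allocation."""
    raw_rate = item.rate_at_ref_lumi * lumi / ref_lumi   # assume rate scales with luminosity
    return max(1, math.ceil(raw_rate / item.allocated_rate))

menu = [
    TriggerItem("MU6",  rate_at_ref_lumi=20_000.0, allocated_rate=200.0),
    TriggerItem("EM25", rate_at_ref_lumi=12_000.0, allocated_rate=500.0),
]

ref_lumi = 1.0e33  # cm^-2 s^-1, illustrative reference point
for lumi in (1.0e33, 0.7e33, 0.4e33):   # luminosity decaying during the fill
    factors = {it.name: prescale(it, lumi, ref_lumi) for it in menu}
    print(f"L = {lumi:.1e}: {factors}")
```

As the luminosity drops, the computed prescales decrease, so a larger fraction of the remaining events is kept; this is the motivation for recomputing them during the fill rather than fixing them at the start of the run.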

Trigger monitoring
DESY and HU Berlin are responsible for the monitoring of the ATLAS trigger
This includes the presenter for the full trigger monitoring
First parts of the monitoring exist
A proposal for the presenter is being discussed

Fast shower simulation
Full simulation is very slow and fast simulation is not accurate enough
A compromise can be a fast parameterization of electromagnetic showers
This has been pioneered by H1
We try to inject this experience into ATLAS
Collaboration with SLAC and Melbourne
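For context, fast electromagnetic shower parameterizations of this kind typically describe the mean longitudinal energy profile with a gamma-distribution shape. The sketch below illustrates that idea only; it is not the H1 or ATLAS implementation, and the shape parameters a and b are made-up values, not tuned ones:

```python
# Minimal sketch of the idea behind fast EM shower parameterization:
# the mean longitudinal energy deposition is modelled by a gamma-distribution
# profile, dE/dt = E0 * b * (b*t)^(a-1) * exp(-b*t) / Gamma(a),
# with t the depth in radiation lengths. Parameters here are illustrative.
import math

def longitudinal_profile(t: float, e0: float, a: float, b: float) -> float:
    """Mean energy deposited per unit depth t (in radiation lengths)."""
    return e0 * b * (b * t) ** (a - 1.0) * math.exp(-b * t) / math.gamma(a)

def energy_in_layer(t0: float, t1: float, e0: float, a: float, b: float, n: int = 100) -> float:
    """Energy deposited between depths t0 and t1, by simple midpoint integration."""
    dt = (t1 - t0) / n
    return sum(longitudinal_profile(t0 + (i + 0.5) * dt, e0, a, b) * dt for i in range(n))

e0, a, b = 50.0, 4.0, 0.5        # 50 GeV shower, illustrative shape parameters
for layer in range(5):
    lo, hi = 4.0 * layer, 4.0 * (layer + 1)   # sampling layers 4 X0 thick
    print(f"depth {lo:4.1f}-{hi:4.1f} X0: {energy_in_layer(lo, hi, e0, a, b):6.2f} GeV")
```

Drawing shower energies from such a parameterization instead of tracking every secondary particle is what makes this approach orders of magnitude faster than full simulation.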

Physics analysis (ATLAS)
Interest in supersymmetry, b-physics, QCD, forward physics
Only first steps up to now
Exploit synergies with the DESY theory group
Development of an analysis framework together with CERN
Large scale n-tuple production
ATLAS-D physics meeting 2007 in Zeuthen
[Plot: 2-lepton analysis, before and after cuts]

CMS
The DESY CMS group consists of 5 physicists, 1 software engineer and 3 PhD students
There is close collaboration with the Uni Hamburg group, which has been a CMS member for a couple of years
Areas of contribution:
– High-level trigger
– Computing
– Technical coordination (deputy technical coordinator from DESY)

HLT Supervisor software
CMS has a 2-level trigger
– Level 1 is clocked at the bunch-crossing rate (25 ns), accept rate 100 kHz
– The HLT is an O(2000) PC farm running offline filter software, output rate 100 Hz
HLT Supervisor (HLTS) requirements
– Read the trigger table from the DB and store the HLT tag to the conditions DB
– Distribute the trigger table to the filter units (FUs) before the start of a run
– Collect statistics about L1 and HLT rates and efficiencies
– Control dynamic parameters such as prescalers
Currently concentrating on getting the prescaler handling working:
– Test system set up on machines at DESY using CMS software
– System working, with this software, since September
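A minimal sketch of the control flow listed above, assuming hypothetical class and method names (this is not the CMS HLT supervisor code): load a trigger table, distribute it to the filter units before the run, collect rate statistics, and push prescale changes during the run:

```python
# Illustrative sketch of an HLT-supervisor-like service; all names are assumptions.

class FilterUnit:
    def __init__(self, name: str):
        self.name = name
        self.trigger_table = None
        self.prescales = {}

    def configure(self, table: dict) -> None:
        """Receive the trigger table before the start of the run."""
        self.trigger_table = table
        self.prescales = {path: cfg["prescale"] for path, cfg in table.items()}

    def update_prescales(self, prescales: dict) -> None:
        """Dynamic parameter change during the run."""
        self.prescales.update(prescales)

    def report_rates(self) -> dict:
        # In reality rates would come from monitoring; placeholder zeros here.
        return {path: 0.0 for path in self.prescales}

class HLTSupervisor:
    def __init__(self, filter_units):
        self.filter_units = filter_units

    def start_run(self, trigger_table: dict) -> None:
        # Distribute the trigger table to all FUs before the run starts.
        for fu in self.filter_units:
            fu.configure(trigger_table)

    def collect_statistics(self) -> dict:
        return {fu.name: fu.report_rates() for fu in self.filter_units}

    def set_prescales(self, prescales: dict) -> None:
        for fu in self.filter_units:
            fu.update_prescales(prescales)

# Usage with a toy trigger table of two HLT paths.
table = {"HLT_Mu11": {"prescale": 1}, "HLT_MinBias": {"prescale": 1000}}
sup = HLTSupervisor([FilterUnit(f"fu{i:03d}") for i in range(4)])
sup.start_run(table)
sup.set_prescales({"HLT_MinBias": 2000})
print(sup.collect_statistics())
```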

Current status of HLT supervisor work
Test system functionally complete and working
– Needs integration with HLT development at CERN
Next steps
– Interface to the online DB
– Interface to Level 1 information
– Other HLTS requirements
These points will be addressed at the HLT workshop on 30 October 2006 at CERN

CMS computing goals
Computing: continuing to work towards an environment where users do not perceive that the underlying computing fabric is a distributed collection of sites
– Most challenging part of the distributed system: some site is always having some problem
– Large progress in understanding and reducing the problems while working at a scale comparable to CMS running conditions
2006 is the “Integration Year”
– Tie the many sites together with common tools into a working system
– Coordination: Michael Ernst (DESY) and Ian Fisk (FNAL)
2007 will be the “Operation Year”
– Achieve smooth operation with limited manpower, increase efficiency and complete automated failure recovery while growing by a factor of 2 or 3 in scale

Test Tier-0 to Tier-1/2 transfers at 2008 rates
More than 3 PB of data transferred by CMS in 3 months
Over 300 MB/s peak from CERN to the Tier-1s
Over 100 MB/s from CERN to DESY for 10 days
[Plot: CMS transfer rates over the 91 days up to August 2006]
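As a rough consistency check of the quoted volume (not part of the slides), 3 PB transferred in about 3 months corresponds to an average of roughly 385 MB/s integrated over all CMS transfers, assuming decimal petabytes and 90 days:

```python
# Back-of-envelope check: average rate implied by "more than 3 PB in 3 months".
# Assumptions: 1 PB = 1e15 bytes and 3 months taken as 90 days. This averages
# over all CMS transfers, not only the CERN-to-Tier-1 link quoted above.
petabytes = 3.0
seconds = 90 * 24 * 3600
avg_rate_mb_s = petabytes * 1e15 / seconds / 1e6
print(f"average rate ~ {avg_rate_mb_s:.0f} MB/s")   # about 386 MB/s
```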

Monte Carlo production for the Computing, Software and Analysis challenge (CSA06)
Monte Carlo production for CSA06 started in July
Aim (originally): produce 50M events (25M minimum bias plus signal channels) as input for prompt reconstruction in CSA06
4 teams, including Aachen/DESY, managed worldwide production running exclusively on the grid
About 60 million events produced in less than 2 months
– Continue to produce 10M events per month during CSA06 and beyond

DESY contribution to worldwide MC production on the grid
[Plot: number of jobs per production site]

Physics analysis (CMS)
J. Mnich is CMS SM convener and has edited the corresponding chapter of the physics TDR
Main interest: top-pair production
– tt production
– LO/NLO comparison
– spin correlations
Example: spin (i.e. decay angle) correlations are sensitive to the production process (gg fusion or qqbar annihilation)
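For reference, such decay-angle correlations are usually quantified with a double-differential distribution of the standard form below (not taken from the slides; sign and normalization conventions for the coefficient C vary between references), where theta_1 and theta_2 are the charged-lepton angles in the respective top-quark rest frames with respect to a chosen spin-quantization axis:

\frac{1}{\sigma}\,\frac{d^2\sigma}{d\cos\theta_1\, d\cos\theta_2} = \frac{1}{4}\left(1 - C\,\cos\theta_1\cos\theta_2\right)

The value of C reflects the degree of spin correlation and differs between gg fusion and qqbar annihilation, which is what makes this observable sensitive to the production process.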

The Tier-2 centre
An average size Tier-2 centre for ATLAS and CMS will be the main hardware contribution to the experiments
DESY already has substantial experience running the grid for the HERA experiments, theory, IceCube and the ILC
At present these experiments run concurrently with the LHC

Tier-2 centres for Germany

Experiment | # T2s total | German author share | # T2s in Germany
ATLAS      | 30          | 10%                 | 3 T2s
CMS        | 25          | 5%                  | 1.5 T2s

Proposed Tier-2s in Germany:
– DESY: federation with RWTH Aachen for 1.5 average Tier-2s for CMS, plus 1 average Tier-2 for ATLAS
– MPG/LMU and Univ. Wuppertal/Univ. Freiburg: 1 Tier-2 each for ATLAS
N.B.: attached Tier-3s at all Tier-2s

Revised hardware resource plan (C = CTDR, N = new) for an average ATLAS Tier-2 (assumption: 30 Tier-2s); paired N/C values for successive years:
CPU [kSI2k]: 80 N / 700 C (Quast 05), 580 N / 900 C (Quast 05), 900 N / 900 C (Quast 05), 1720 N / 1670 C, 2300 N / 2200 C, 2900 N / 2800 C
Disk [TB]: 45 N / 340 C (Quast 05), 260 N / 340 C (Quast 05), 440 N / 570 C (Quast 05), 740 N / 800 C, 1040 N / 1140 C, 1300 N / 1470 C
Tape [TB]: (?)

Revised hardware resource plan (C = CTDR, N = new) for an average CMS Tier-2 (assumption: 25 Tier-2s); paired N/C values for successive years:
CPU [kSI2k]: 300 N / 400 C, 600 N / 900 C, 1000 N / 1400 C, 1810 N / 2300 C
Disk [TB]: 50 N / 100 C, 170 N / 200 C, 340 N / 400 C, 530 N / 700 C
Tape [TB]: (?)
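Combining the two tables gives a rough idea of the scale hosted at DESY. The sketch below is illustrative only and not from the slides: it assumes DESY provides one average ATLAS Tier-2 and, as a further assumption, counts 1.0 of the 1.5 average CMS Tier-2s of the DESY/Aachen federation as DESY-hosted.

```python
# Illustrative combination of the per-average-Tier-2 "N" (new plan) numbers above
# into a DESY-scale estimate. The CMS share attributed to DESY is an assumption.
atlas_t2 = {"cpu_ksi2k": [80, 580, 900, 1720, 2300, 2900],
            "disk_tb":   [45, 260, 440,  740, 1040, 1300]}   # ATLAS, N columns
cms_t2   = {"cpu_ksi2k": [300, 600, 1000, 1810],
            "disk_tb":   [ 50, 170,  340,  530]}              # CMS, N columns

atlas_share = 1.0   # one average ATLAS Tier-2 at DESY
cms_share = 1.0     # assumed DESY part of the 1.5 average CMS Tier-2 federation

for key in ("cpu_ksi2k", "disk_tb"):
    years = min(len(atlas_t2[key]), len(cms_t2[key]))   # only years covered by both plans
    total = [atlas_share * atlas_t2[key][i] + cms_share * cms_t2[key][i] for i in range(years)]
    print(key, [round(x) for x in total])
```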

Revised hardware resource plan ramp-up (total for DESY)
[Table: CPU [kSI2k], Disk [TB], Tape [TB] (?)]

The new computer room

A full production
ATLAS: … jobs / h (CPU wall clock)
CMS: … jobs / h (CPU wall clock)
DCMS: 3715 jobs / h (CPU wall clock)
CPU time used at DESY (January 1st to September 30th): ~50% of the DESY grid farm (Hamburg and Zeuthen)

Constant contribution from the DESY Tier-2 (!) to data transfer

Current Tier-2 production environment (Hamburg & Zeuthen)
Hardware: ~230 worker nodes (~700 kSpecInt)
– Storage element: ~100 TB disk, 1.5 PB tape
Middleware/OS
– Full working grid environment
– gLite / SL 3.0.5
Upgrades in Q4/06:
– 42 worker nodes (~300 kSpecInt) (external money)
– 120 TB disk (external money)
– 10 Gbit/s link Hamburg-Zeuthen
Overall, DESY is well on track with the Tier-2!

Conclusions
DESY joined ATLAS and CMS early this year
We already make significant contributions to both experiments
The DESY Tier-2 is also well on track
Further strengthening of the groups is expected after the HERA shutdown
We look forward to a wealth of beautiful data to come