Group Computing Strategy: Introduction and BaBar. Roger Barlow, June 28th 2005.

Linux PCs: list from Stuart.

Windows PCs

And we have
- AFS server
- Disk server
- BaBar farm
- DØ farm
- Grid farm(s)
- Macintosh farm
- Various laptops

This is not a strategy
A "57 varieties" computing architecture leads to inefficient use of:
- CPU
- Disk space
- System manager time
- User time

Coming soon: the Tier2 centre
- GHz CPUs
- 500 TB of disk
- We have to share it, but we can give ourselves priority
- Why do we want anything else? (Rhetorical question)

BaBar Analysis
Data flow: Event Data → Large (TB) ntuples → Small (GB) ntuples → Final plots
Resources: Tier 2 farm and local cluster
This assumes the Grid is working so we can all use it. But it must!
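
To make the reduction chain concrete, here is a minimal PyROOT sketch of the two steps (not taken from the slides); the file names, the tree name "events", the selection cut and the plotted branch "mES" are hypothetical placeholders.

import ROOT

# Skim step (would run as a batch job on the Tier 2 farm):
# copy only the events passing a selection from the large ntuple.
fin = ROOT.TFile.Open("large_ntuple.root")   # multi-TB input in reality
tree = fin.Get("events")
fout = ROOT.TFile("small_ntuple.root", "RECREATE")
skim = tree.CopyTree("nTracks >= 4")         # keep only events passing the cut
skim.Write()
fout.Close()
fin.Close()

# Final-plot step (would run interactively on the local cluster):
f = ROOT.TFile.Open("small_ntuple.root")
small = f.Get("events")
canvas = ROOT.TCanvas("c", "final plot")
small.Draw("mES")                            # histogram of one branch
canvas.SaveAs("mes.png")

The point is only the division of labour: the TB-scale skim belongs on the batch farm, the GB-scale plotting on the local cluster or desktop.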

3 layer model
- Tier 2 centre. Batch processing.
- Local cluster. Powerful interactive facility and some storage. Used for compiling, linking, test jobs and final analysis.
- Desktop/laptop. Nice display. Connect to cluster (ssh, eXceed…) for anything intensive.
Not a new idea! What SLAC and CERN do…
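
As a sketch of how the batch layer could be driven from the local cluster (assuming the LCG middleware mentioned on the next slide; the job script and sandbox file names are hypothetical), a job might be described in JDL and submitted from the cluster acting as an LCG user interface:

import subprocess

# Hypothetical JDL describing a skim job for the Tier 2 batch farm.
jdl = """\
Executable    = "run_skim.sh";
StdOutput     = "skim.out";
StdError      = "skim.err";
InputSandbox  = {"run_skim.sh"};
OutputSandbox = {"skim.out", "skim.err"};
"""

with open("skim.jdl", "w") as jdl_file:
    jdl_file.write(jdl)

# Basic LCG/EDG submission command; site- and VO-specific options are omitted.
subprocess.call(["edg-job-submit", "skim.jdl"])

The desktop/laptop layer never needs to run any of this: it only needs an ssh client to reach the cluster.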

Points towards a strategy
- We want a group cluster, not separate ATLAS, BaBar, DØ, SuperNEMO, Accelerator Science… clusters.
- Assemble and maintain ~15 high-performance nodes as the HEP cluster. Run LCG and SL3 (migrate as needed). These are inside the firewall.
- The BaBar and DØ farms will wither away.
- Little (or no) need for more data servers.
- Everyone gets a desktop or a laptop. Linux desktop or PC? Macintosh? LCG UI? "Pretty but not powerful".

So we…
- Think and discuss
- Have another meeting in about a month (ask John Baines?)
- Keep September 21st free