University user perspectives of the ideal computing environment and SLAC's role
Bill Lockman, SLUO/LHC workshop Computing Session, July 17, 2009

Outline:
- View of the ideal computing environment
- ATLAS Computing Structure
- T3 types and comparisons
- Scorecard

My view of the ideal computing environment

- Full system support by a dedicated professional: hardware and software (OS and file system)
- High-bandwidth access to the data at the desired level of detail, e.g., ESD, AOD, summary data and conditions data
- Access to all relevant ATLAS software and grid services
- Access to compute cycles equivalent to purchased hardware
- Access to additional burst cycles
- Access to ATLAS software support when needed
- Conversationally close to those in the same working group, preferentially face to face

These are my views, derived from discussions with Jason Nielsen, Terry Schalk (UCSC), Jim Cochran (Iowa State), Anyes Taffard (UCI), Ray Frey, Eric Torrence (Oregon), Gordon Watts (Washington), Richard Mount, Charlie Young (SLAC).

ATLAS Computing Structure

ATLAS uses a world-wide tiered computing structure in which ~30 TB of raw data per day from ATLAS is reconstructed, reduced and distributed to the end user for analysis:
- T0: CERN
- T1: 10 centers worldwide. US: BNL. No end-user analysis.
- T2: some end-user analysis. 5 US centers, one of which is SLAC.
- T3: end-user universities and some national labs. See the ATLAS T3 report: al/TierThree_v1_executiveFinal.pdf

Data Formats in ATLAS

Format                                                      Size (MB)/event
RAW  - data output from DAQ (streamed on trigger bits)      1.6
ESD  - event summary data: reco info + most RAW             0.5
AOD  - analysis object data: summary of ESD data            0.15
TAG  - event-level metadata with pointers to data files     0.001
Derived Physics Data (DPD)                                  ~25, ~30, ~5 kB/event
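To get a rough sense of scale, the per-event sizes above translate into dataset volumes as in the minimal sketch below. The event count (10^9 events, roughly the order of magnitude of an early data-taking year) is an illustrative assumption, not a number from the slides.

```python
# Rough dataset-volume estimate from the per-event sizes in the table above.
# The event count below is an illustrative assumption, not a number from the talk.
SIZE_MB_PER_EVENT = {
    "RAW": 1.6,
    "ESD": 0.5,
    "AOD": 0.15,
    "TAG": 0.001,
    "DPD": 0.005,  # smallest of the ~25/~30/~5 kB/event DPD sizes quoted above
}

n_events = 1e9  # assumed: order of magnitude of an early LHC data-taking year

for fmt, size_mb in SIZE_MB_PER_EVENT.items():
    total_tb = n_events * size_mb / 1e6  # 1 TB = 1e6 MB (decimal units)
    print(f"{fmt:>4}: {total_tb:8.1f} TB for {n_events:.0e} events")
```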

Possible data reduction chain

[Figure: possible data reduction scenario for the "mature" phase of the ATLAS experiment; diagram not reproduced in this transcript]

T3g

T3g: Tier3 with grid connectivity (a typical university-based system):
- Tower or rack-based
- Interactive nodes
- Batch system with worker nodes
- ATLAS code available (in kit releases)
- ATLAS DDM client tools available to fetch data (currently dq2-ls, dq2-get; see the sketch below)
- Can submit grid jobs
- Data storage located on worker nodes or dedicated file servers

Possible activities: detector studies from ESD/pDPD, physics/validation studies from D3PD, fast MC, CPU-intensive matrix-element calculations, ...
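As an illustration of the "fetch data with DDM client tools" step, the sketch below wraps the dq2-ls / dq2-get command-line tools named on the slide in a small Python helper. The dataset names are hypothetical placeholders, and the exact options of the dq2 tools are not taken from the slides, so treat this as a sketch of the workflow rather than a reference for the tools.

```python
# Minimal sketch of fetching a dataset onto T3g storage with the ATLAS DDM
# client tools (dq2-ls, dq2-get) mentioned on the slide. Dataset names below
# are hypothetical placeholders; exact tool options may differ.
import subprocess


def list_datasets(pattern):
    """List DDM datasets matching a wildcard pattern via dq2-ls."""
    out = subprocess.run(["dq2-ls", pattern],
                         capture_output=True, text=True, check=True)
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]


def fetch_dataset(dataset, dest_dir="."):
    """Download the files of a dataset to local storage via dq2-get."""
    subprocess.run(["dq2-get", dataset], cwd=dest_dir, check=True)


if __name__ == "__main__":
    # Hypothetical example: pull an AOD dataset onto a local file server.
    for name in list_datasets("data09_*physics_Egamma*AOD*"):
        print("found:", name)
    fetch_dataset("data09_900GeV.00140541.physics_MinBias.merge.AOD.r988_p62",
                  dest_dir="/data/atlas")
```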

A university-based ATLAS T3g

Local computing is a key to producing physics results quickly from reduced datasets.

Analyses/streams of interest at the typical university:

Performance ESD/pDPD at T2        # analyses
e-gamma                           1
W/Z(e)                            2
W/Z(mu)                           2
minbias                           1

Physics stream (AOD/D1PD) at T2   # analyses
e-gamma                           2
muon                              1
jet/missEt                        1

CPU and storage needed for the first 2 years: 160 cores, 70 TB (see the sizing sketch below).
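The 70 TB figure is quoted on the slide without its derivation; the sketch below shows a back-of-the-envelope storage estimate in the same spirit. Only the per-event sizes come from the "Data Formats in ATLAS" slide; the per-stream event counts, and the assignment of the ~25/~30/~5 kB/event DPD sizes to pDPD/D1PD/D3PD, are illustrative assumptions.

```python
# Back-of-the-envelope T3g storage estimate. Per-event sizes are from the
# "Data Formats in ATLAS" slide; the event counts per stream and the mapping
# of DPD sizes to pDPD/D1PD/D3PD are assumed for illustration only.
MB_PER_EVENT = {"ESD": 0.5, "pDPD": 0.025, "AOD": 0.15, "D1PD": 0.03, "D3PD": 0.005}

# (format, assumed number of events kept locally)
local_samples = [
    ("ESD",  5e7),   # skimmed ESD selection for performance studies
    ("pDPD", 2e8),   # performance DPD for the detector-oriented analyses
    ("AOD",  2e8),   # physics-stream AOD for a few analyses
    ("D1PD", 2e8),
    ("D3PD", 1e9),   # flat ntuples shared by the whole group
]

total_tb = sum(n * MB_PER_EVENT[fmt] for fmt, n in local_samples) / 1e6
print(f"Estimated local storage: ~{total_tb:.0f} TB")  # same ballpark as the 70 TB on the slide
```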

A university-based ATLAS T3g

Requirements matched by a rack-based system from the T3 report:
- The university has a 10 Gb/s network to the outside.
- The group will locate the T3g near the campus switch and interface directly to it.
- 10 kW heat
- 320 kSI2K processing
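To see what the 10 Gb/s campus uplink implies for moving the ~70 TB sample from the previous slide, a quick transfer-time estimate is sketched below. The 50% effective-throughput figure is an assumption for illustration, not a number from the talk.

```python
# Rough transfer-time estimate for pulling the T3g data sample over the
# campus 10 Gb/s link. The 50% effective-throughput assumption is illustrative.
link_gbps = 10.0
efficiency = 0.5     # assumed effective fraction of nominal bandwidth
sample_tb = 70.0     # storage figure from the previous slide

seconds = sample_tb * 8e12 / (link_gbps * 1e9 * efficiency)
print(f"~{seconds / 86400:.1f} days to copy {sample_tb:.0f} TB at "
      f"{link_gbps:.0f} Gb/s ({efficiency:.0%} efficiency)")
```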

Tier3 AF (Analysis Facility)

Two sites have expressed interest and have set up prototypes:
- BNL: interactive nodes, batch cluster, PROOF cluster
- SLAC: interactive nodes and batch cluster

T3AF: university groups can contribute funds / hardware
- Groups purchase batch slots and are granted priority access to the resources they purchased
- The rest of ATLAS may use the resources when not in use by the owners

SLAC-specific case: details covered in Richard Mount's talk

University T3 vs. T3AF

University T3
  Advantages:
  - Cooling, power, space usually provided
  - Control over use of resources
  - More freedom to innovate/experiment
  - Dedicated CPU resource
  - Potential matching funds from the university
  Disadvantages:
  - Limited cooling, power, space and funds to scale acquisition in future years
  - Support not 24/7, not professional
  - Cost may be comparable to that at a T3AF
  - Limited networking and networking support
  - Access to databases
  - No surge capability

T3AF
  Advantages:
  - 24/7 hardware and software support (mostly professional)
  - Shared space for code and data (AOD)
  - Excellent access to ATLAS data and databases
  - Fair-share mechanism to allow universities to use what they contributed
  - Better network security
  - ATLAS release installation provided
  Disadvantages:
  - A yearly buy-in cost
  - Less freedom for the university to innovate/experiment
  - Must share some cycles

Some groups will site disks and/or worker nodes at the T3AF and interactive nodes at the university.

Qualitative score card

Criterion                                                         University T3g    T3AF
Full system support by a dedicated professional                   no                generally yes
High-bandwidth access to the data at the desired level of detail  variable          good
Access to all relevant ATLAS software and grid services           variable          yes
Access to compute cycles equivalent to purchased hardware         yes               yes
Access to additional burst cycles (e.g., crunch-time analysis)    generally not     yes
Access to ATLAS software support when needed                      generally yes     yes
Cost (hardware, infrastructure)                                    some              being negotiated

Cost is probably the driving factor in the hardware site decision; hybrid options are also possible.
A T3AF at SLAC will be an important option for university groups considering a T3.

Extra