U.S. ATLAS Facility Planning
U.S. ATLAS Tier-2 & Tier-3 Meeting at SLAC, 30 November 2007

Slide 2: Planning ahead …
- Scope: ~6 months
- The milestones and items on the following slides are guided by production and analysis needs
- The U.S. ATLAS Computing Integration Program will translate them into the technical steps sites have to perform

Slide 3: Analysis
- Analysis at Tier-2s
  - Implement analysis queues at all 5 Tier-2s
    - Complete by 15 December
  - Replicate all R13 AODs (and some selected R12 AODs) to all Tier-2s; see the subscription sketch after this list
    - How much space is needed at each Tier-2?
    - Who decides which datasets to replicate?
    - Who makes the subscriptions and runs DDM operations?
    - Complete by 31 December
- Demonstrate automatic redirection of analysis jobs to all Tier-2s by pathena
  - Complete by 15 January
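
Mechanically, the replication item above means registering a DDM subscription for each dataset at each Tier-2 site. The sketch below shows the shape of that loop; the DQ2 client import, both method names, and the dataset name pattern are assumptions about the 2007-era client rather than verbatim API.

```python
# Sketch: subscribe all Release-13 AOD datasets to every U.S. Tier-2.
# The import path, listDatasets/registerDatasetSubscription names, and
# the dataset pattern are assumptions, not the verbatim 2007 tool.
from dq2.clientapi.DQ2 import DQ2   # hypothetical import path

US_TIER2_SITES = ["AGLT2", "MWT2", "NET2", "SWT2", "WT2"]  # the 5 U.S. Tier-2s

def subscribe_r13_aods(dq2):
    # Dataset name pattern for Release-13 AODs is an assumption.
    for dsn in dq2.listDatasets("*AOD*r13*"):
        for site in US_TIER2_SITES:
            # One subscription per (dataset, site) pair; DDM site
            # services then drive the actual file transfers.
            dq2.registerDatasetSubscription(dsn, site)

if __name__ == "__main__":
    subscribe_r13_aods(DQ2())
```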

Slide 4: Analysis
- Interactive Analysis (see the PROOF usage sketch after this list)
  - BNL PROOF farm available to all US ATLAS users for testing
    - Complete by 31 January
  - BNL PROOF farm in production mode
    - Complete by 31 March
  - Tier-2 PROOF farms available
    - Complete by 30 June
    - Does this fit our model (user account management etc.)?
  - Support Tier-3 activities as part of the Computing Integration Meeting
    - Immediately, ongoing
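
For orientation, a user's session against one of these PROOF farms would look roughly like the PyROOT sketch below. TProof.Open, TChain.SetProof, and TChain.Process are real ROOT calls; the master URL, tree name, input files, and selector are hypothetical placeholders.

```python
# Minimal PROOF session sketch (PyROOT). The master URL, input files,
# tree name, and selector below are illustrative placeholders.
import ROOT

# Connect to the PROOF master (hypothetical BNL farm address).
proof = ROOT.TProof.Open("proof://proof-master.bnl.example:1093")

# Build a chain over the input ntuples and route it through PROOF.
chain = ROOT.TChain("CollectionTree")           # tree name is an assumption
chain.Add("/data/user/ntuples/analysis_*.root")
chain.SetProof()                                # process on the farm, not locally

# Run a TSelector across the workers; results come back to the client.
chain.Process("MyAnalysisSelector.C+")
```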

Slide 5: Storage Services
- At Tier-1 (pin-bookkeeping and scratch-cleanup sketches follow this list)
  - Evaluate pinning with SRM v2.2
    - Complete by 21 December
  - Propose a data placement plan for data at the Tier-1, including pinning, disk-only partitions, etc.
    - Complete by 31 December
  - Develop and deploy the software necessary to manage pinned files
    - Complete by 15 January
  - Disk space reconfiguration according to the computing model
    - Complete by 31 January
  - Develop and deploy disk-only dCache space management tools
    - Complete by 21 December
  - User space management at the Tier-1, including user management and cache cleanup
    - Proposal: complete by 31 December
    - Deployment: 31 January
  - LFC
    - Test system deployed: complete by 31 December
    - Test system production-ready: 31 January
    - Migration to LFC completed for the US, assuming successful tests: 28 February
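
Managing pinned files (the 15 January item above) implies site-side bookkeeping of which files are pinned and until when, since SRM v2.2 pins carry a finite lifetime. The sketch below is one minimal way to keep that bookkeeping; none of the names come from an actual tool, and the storage-side release call is left as a stub.

```python
# Sketch: local bookkeeping for pinned files with lifetimes.
# SRM v2.2 pins expire; a site-side tool must know what is pinned,
# until when, and release expired pins. All names are illustrative.
import time

class PinRegistry:
    def __init__(self):
        self._pins = {}  # file path -> expiry time (unix seconds)

    def pin(self, path, lifetime_s):
        """Record a pin on `path` lasting `lifetime_s` seconds."""
        self._pins[path] = time.time() + lifetime_s

    def expired(self):
        """Return the files whose pin lifetime has passed."""
        now = time.time()
        return [p for p, t in self._pins.items() if t <= now]

    def sweep(self, release):
        """Call `release(path)` for each expired pin and forget it."""
        for path in self.expired():
            release(path)          # e.g. issue an SRM release for this file
            del self._pins[path]

if __name__ == "__main__":
    reg = PinRegistry()
    reg.pin("/dcache/atlas/aod/file1.root", lifetime_s=7 * 24 * 3600)
    reg.sweep(release=lambda p: print("releasing pin on", p))
```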

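The user space management item, and the quota system on the next slide, both need some cleanup policy for shared scratch space. Below is a minimal least-recently-used sweep, assuming cleanup is driven by file access times and a fixed space target; the path and threshold are illustrative.

```python
# Sketch: LRU cleanup of a shared user scratch area down to a target size.
# The scratch path and target are illustrative; a real tool would add
# per-user quotas, logging, and protection for pinned/precious files.
import os

def lru_cleanup(root, target_bytes):
    # Collect (atime, size, path) for every file under root.
    files = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            files.append((st.st_atime, st.st_size, path))

    total = sum(size for _, size, _ in files)
    # Delete least-recently-accessed files until under the target.
    for _, size, path in sorted(files):
        if total <= target_bytes:
            break
        os.remove(path)
        total -= size

if __name__ == "__main__":
    lru_cleanup("/scratch/usatlas", target_bytes=500 * 1024**3)  # 500 GB target
```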
Slide 6: U.S. ATLAS Data
- Data Management
  - Deploy a storage quota system US-ATLAS-wide
    - Complete by 28 February
  - DQ2 data deletion fully operational
    - Complete by 15 December
  - Complete DQ2 lost-file tagging for the US
    - Complete by 15 January
- What is the data flow model in pathena?
- What if a researcher produces data at a Tier-3?
- How is the decision to archive made?
- Are Tier-2s expected to maintain precious data indefinitely?
- User data lifetime?
- Consistency checks? (a lost/dark-file check sketch follows this list)
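
Lost-file tagging and consistency checks both come down to comparing what the DDM catalogs say a site holds against what its storage actually holds. Below is a minimal sketch of that comparison, assuming the two file lists have already been dumped to plain-text files; the file names and the follow-up actions are illustrative only.

```python
# Sketch: catalog-vs-storage consistency check for one site.
# Assumes two plain-text dumps, one file path/GUID per line:
#   catalog.txt - files DQ2/LFC says the site holds
#   storage.txt - files actually present on the storage system
# File names and follow-up actions are illustrative.

def load(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

catalog = load("catalog.txt")
storage = load("storage.txt")

lost = catalog - storage    # cataloged but missing: candidates for lost-file tagging
dark = storage - catalog    # on disk but uncataloged: candidates for cleanup

print(f"{len(lost)} lost files (in catalog, not on storage)")
print(f"{len(dark)} dark files (on storage, not in catalog)")

for f in sorted(lost):
    print("LOST", f)        # would be tagged in DQ2 / flagged to DDM ops
```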

Slide 7: Operations & Performance
- Incident tracking / communication
  - Elog deployed and operational
    - Complete by 15 December
- Performance (an efficiency-calculation sketch follows this list)
  - Demonstrate the 2007 WLCG pledge with 90% average efficiency
    - Complete by 31 December
  - Demonstrate 90% of the 2008 WLCG pledge
    - Complete by 30 June
    - Contingent on the funding situation
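
The pledge milestones need an agreed definition of "average efficiency"; a common proxy is total CPU time over total wall-clock time summed across jobs. The sketch below computes that under this assumption, with made-up job records standing in for batch-system accounting data.

```python
# Sketch: average efficiency as total CPU time / total wall time over jobs.
# The (cpu_s, wall_s) records below are made-up illustration; a real check
# would pull them from the batch system or accounting database.

jobs = [
    (3500, 3600),   # (cpu seconds, wall seconds) per job
    (7000, 7200),
    (1700, 3600),   # an inefficient job drags the average down
]

total_cpu = sum(cpu for cpu, _ in jobs)
total_wall = sum(wall for _, wall in jobs)
efficiency = total_cpu / total_wall

print(f"average efficiency: {efficiency:.1%}")
print("meets 90% target" if efficiency >= 0.90 else "below 90% target")
```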

Slide 8: Next Meeting
- Jointly with the OSG All-Hands Meeting at RENCI (Chapel Hill, North Carolina)
- March 2008