Interfacing Grid-Canada to LCG M.C. Vetterli, R. Walker Simon Fraser Univ. Grid Deployment Area Mtg, August 2nd, 2004

M.C. Vetterli; SFU/TRIUMF The Canadian Model
 Establish a large computing centre at TRIUMF that will be on the LCG and will participate in the common tasks associated with Tier-1 and Tier-2 centres.
 Canadian groups will use existing university facilities (or what they will become) to do physics analysis; they will access data and the LCG through TRIUMF.
 The jobs are smaller at this level and can be more easily integrated into shared facilities. We can also be independent of LCG middleware.
In this model, the TRIUMF centre acts as the hub of the Canadian computing network, and as an LCG node.

The ATLAS-Canada Computing Model (diagram): a TRIUMF gateway (cpu/storage, experts) links the Canadian Grid sites (UVic, SFU, UofA, UofT, Carleton, UdeM (CFI funded)) via CA*Net4 to CERN and the ATLAS Grid (USA, Germany, France, UK, Italy, …). Labels on the diagram's links: MC data, ESD', ESD, AOD, DPD, calibration, algorithms, MC production, technical expertise, access to the CDN Grid, access to the ATLAS Grid, and access to RAW & ESD.

M.C. Vetterli; SFU/TRIUMF What are we doing now?
 Participate in Data Challenges
 Prototype the TRIUMF centre
 Establish LCG centres in Canada: TRIUMF, Alberta, Toronto, Carleton, and Montreal are all working
 Set up a Canadian Grid test-bed (Grid-Canada PG1)
 Develop middleware to interface between GC-PG1 and LCG

M.C. Vetterli; SFU/TRIUMF Federated Grids for ATLAS DC2 (diagram): Grid-Canada PG-1 and WestGrid are joined to LCG through interfaces at SFU/TRIUMF (LCG/Grid-Can and LCG/WestGrid), in addition to the LCG resources in Canada.

M.C. Vetterli; SFU/TRIUMF Linking HEPGrid to LCG (diagram): Grid-Canada resources (GC Res.1 … GC Res.n) and WestGrid (WG, UBC/TRIUMF) report to the Grid-Canada negotiator/collector; TRIUMF runs its own cpu & storage plus two LCG compute elements, lcgce02 (Condor-G) and lcgce01, which connect to the LCG BDII/RB/scheduler.
Information flow (class ads / MDS):
1) Each GC resource publishes a class ad to the GC collector
2) The same is done for WG
3) The TRIUMF information provider aggregates this info into a single resource
4) TRIUMF passes this information on to LCG through CE02 (via MDS)
5) TRIUMF also publishes its own resources separately through CE01
Job flow:
1) The LCG RB decides where to send the job (GC/WG or the TRIUMF farm)
2) The job class ad goes either to the TRIUMF farm or to the Condor-G interface (CE02) for GC/WG
3) The Condor-G job manager at TRIUMF builds a submission script for the TRIUMF Grid
4) The GC negotiator matches the job to GC or WG
5) The job is submitted to the proper resource
 Load balancing: the GC negotiator decides whether to send the job to its own resources or to WG
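To make the information flow concrete, here is a minimal, hypothetical Python sketch of the aggregation idea in step 3: per-resource class ads from Grid-Canada and WestGrid are collapsed into one virtual resource that TRIUMF can advertise to LCG through lcgce02. The example class ads, attribute names, and output keys are illustrative assumptions, not the actual information provider.

    # Hypothetical sketch: collapse many resource class ads into one summary
    # record for publication to the LCG information system (MDS/BDII).
    # Attribute and key names are illustrative, not the real schema.

    def aggregate(class_ads):
        """Sum per-resource CPU counts into a single virtual resource."""
        return {
            "TotalCPUs": sum(ad["cpus"] for ad in class_ads),
            "FreeCPUs": sum(ad["free_cpus"] for ad in class_ads),
        }

    if __name__ == "__main__":
        # Example class ads, roughly as the GC collector and WG might report them
        gc_ads = [{"name": "gc-res-1", "cpus": 64, "free_cpus": 12},
                  {"name": "gc-res-n", "cpus": 128, "free_cpus": 40}]
        wg_ads = [{"name": "westgrid", "cpus": 1008, "free_cpus": 200}]

        summary = aggregate(gc_ads + wg_ads)
        # A real information provider would emit this in the GLUE schema;
        # here we simply print the aggregated numbers.
        print("TotalCPUs:", summary["TotalCPUs"])
        print("FreeCPUs:", summary["FreeCPUs"])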

M.C. Vetterli; SFU/TRIUMF Summary
 Canada is participating in LCG through “normal” LCG sites
 We have also integrated the Grid-Canada test bed through a separate CE at TRIUMF
 The Grid interface uses Condor-G to re-submit LCG jobs
 The interface has been fully tested and has now been running in production for a few days
 Rod Walker will now give more technical details
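As a rough illustration of the Condor-G re-submission step mentioned in the summary, the Python sketch below writes a classic Condor-G ("globus" universe) submit description that forwards a job to a Globus gatekeeper and hands it to condor_submit. The gatekeeper contact string, job script, and file names are made-up placeholders, not the actual TRIUMF configuration.

    # Hypothetical sketch of re-submitting an LCG job through Condor-G.
    # The gatekeeper contact string and job details are placeholders.
    import subprocess

    def write_condor_g_submit(executable, arguments, gatekeeper, path="job.sub"):
        """Write a classic Condor-G (globus universe) submit description."""
        lines = [
            "universe        = globus",
            f"globusscheduler = {gatekeeper}",
            f"executable      = {executable}",
            f"arguments       = {arguments}",
            "output          = job.out",
            "error           = job.err",
            "log             = job.log",
            "queue",
            "",
        ]
        with open(path, "w") as f:
            f.write("\n".join(lines))
        return path

    if __name__ == "__main__":
        # Hypothetical gatekeeper fronting the GC/WG federation behind lcgce02
        sub = write_condor_g_submit("run_dc2_job.sh", "--task 42",
                                    "gc-gateway.example.ca/jobmanager-condor")
        subprocess.run(["condor_submit", sub], check=True)  # hand the job to Condor-G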