BaBar Grid
Tim Adye, Particle Physics Department, Rutherford Appleton Laboratory
PP Grid Team, Cosener's House, 8th November 2002



Talk Plan
- BaBar distributed computing model
- RAL Tier A
- Remote job submission
- BaBar VO and Authorisation
- Metadata
- Data distribution

The BaBar Collaboration
- 9 countries
- 74 institutions
- 566 physicists

PEP-II e+e- Ring and BaBar Detector
- May 26, 1999: first events recorded by BaBar
- LER (e+, 3.1 GeV), I(e+) = 2.1 A
- HER (e-, 9.0 GeV), I(e-) = 1.0 A
- PEP-II ring: C = 2.2 km

BaBar's Distributed Computing Model
- Goal is to spread the computing load much more widely around the collaboration
- Simulation production is already distributed – 75% in the UK!
- Now have three new "Tier A" centres:
  - Lyon – Objectivity (database) analysis (since last year)
  - RAL – Kanga (ROOT microDST) analysis (from May 2002)
  - Padova – reprocessing (just starting)
- Also several "Tier C" sites (i.e. universities; 9 in the UK)
- The analysis data format (Kanga vs Objectivity) is currently a matter of heated debate
- Whatever the future of Objectivity, Kanga (championed in the UK and Germany) looks set to continue

RAL Tier A
- UK MoU with BaBar reduces our common fund contributions in exchange for providing the Tier A facility
- RAL has now relieved SLAC of all Kanga analysis
- Impressive take-up from UK and non-UK users (see Andrew's talk)
- It is the primary repository of Kanga data: ~20 TB on disk
- The BaBar analysis environment tries to mimic SLAC so external users feel at home
  - Grid job submission should greatly reduce this requirement

Remote Job Submission: Short Term (this month!)
- Allow SLAC or university users to submit BaBar analysis jobs to the RAL or Lyon Tier A sites from their home machines with dg-job-submit
- Simplifies local development and debugging, while providing access to the full dataset and large CPU farms
- RAL vs IN2P3 selected explicitly by the user: "canned" JDL Requirements; dataset selection left to the user
- Why couldn't we do this a year ago?
  - BaBar authorisation (see later)
  - Gatekeeper needed to be able to submit to the production farm
  - Had to define which BaBar configuration files to send with the job; developed a procedure to merge all tcl files into one
  - Resource Broker reliability – better with EDG 1.2
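A "canned" JDL file for such a submission might look like the sketch below; the executable, sandbox file names, and CE identifier are illustrative assumptions, not BaBar's actual production JDL.

```
// Hypothetical "canned" JDL: the Tier A site (RAL vs IN2P3) is pinned
// explicitly by the user via the Requirements expression.
Executable    = "babar-analysis.sh";
InputSandbox  = {"babar-analysis.sh", "merged-config.tcl"};
OutputSandbox = {"analysis.log", "histograms.root"};
Requirements  = other.CEId == "gatekeeper.rl.ac.uk:2119/jobmanager-pbs";
```

The job would then be submitted with something like `dg-job-submit analysis.jdl`, with the dataset selection left to the user.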

Remote Job Submission: Medium Term (early next year)
- Allow remote submission to UK farms and SLAC
  - In principle this is already set up
- Select the site (CE) based on user requirements, e.g. dataset availability, software release, etc.
- Split a job between sites based on the available datasets
- Already have a demonstrator for a canned analysis job
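The site-selection and job-splitting step can be sketched as a simple matchmaking function. This is a minimal illustration, not the Resource Broker's actual algorithm; the site names and dataset catalogue are invented.

```python
def select_sites(needed_datasets, site_catalogue):
    """Split a job across sites by dataset availability.

    site_catalogue maps a site (CE) name to the set of datasets on its
    disk.  Returns one sub-job plan per site, each reading only locally
    available data, plus any datasets available nowhere.
    """
    plan = {}
    remaining = set(needed_datasets)
    # Greedily assign datasets to the site holding the most of what is left.
    while remaining:
        best = max(site_catalogue,
                   key=lambda s: len(site_catalogue[s] & remaining))
        chunk = site_catalogue[best] & remaining
        if not chunk:
            break  # the rest is available at no known site
        plan[best] = sorted(chunk)
        remaining -= chunk
    return plan, sorted(remaining)

# Invented example: two Tier A sites, two requested datasets.
catalogue = {
    "ral.ac.uk": {"B0toJpsiKs-run1", "BchToDpi-run1"},
    "in2p3.fr":  {"BchToDpi-run1", "tau-pairs-run2"},
}
plan, missing = select_sites(["B0toJpsiKs-run1", "tau-pairs-run2"], catalogue)
```

The real broker would also fold in the software release and other CE attributes; the greedy split here only captures the dataset-locality idea.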

BaBar VO and Authorisation
- Use certificates from the EDG and ESnet CAs for authentication
- Authorisation required to identify BaBar users: provides access to BaBar-specific facilities and environment
- Cannot maintain the grid-mapfile by hand – doesn't scale to the number of users
- Use the existing SLAC BaBar user registration:
  - User provides their certificate ID at SLAC
  - An automatic procedure checks the AFS group and fills the VO
- CEs use the VO for authorisation
- This naturally handles people leaving the experiment
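The step that replaces hand-maintained grid-mapfiles can be sketched as below; the certificate DNs and the shared local account name are invented for illustration, and a real VO would be queried from a server rather than passed in as a set.

```python
def grid_mapfile_lines(vo_members, local_account="babar"):
    """Build grid-mapfile entries mapping each VO member's certificate
    subject (DN) to a local account.  Regenerating this file from the VO
    means a user removed from the VO (e.g. on leaving the experiment)
    automatically loses access at every CE."""
    return ['"%s" %s' % (dn, local_account) for dn in sorted(vo_members)]

# Invented DNs in the style of the EDG and DOE (ESnet) CAs.
members = {
    "/O=Grid/O=UKHEP/OU=pp.rl.ac.uk/CN=A Physicist",
    "/DC=org/DC=doegrids/OU=People/CN=Example User",
}
print("\n".join(grid_mapfile_lines(members)))
```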

Analysis Metadata
- Currently have about a million Kanga files in a deep directory tree
- Need a catalogue to facilitate data distribution and allow analysis datasets to be defined
- SQL database:
  - Locates the ROOT files associated with each dataset
  - Selections based on decay channel, run range, beam energy, reconstruction processing version, etc.
- Each site has its own (MySQL or Oracle) database
  - Includes a copy of the SLAC database plus local information (e.g. files on local disk, files to import, local tape backups)
- Some use of SRB for local Objectivity metadata at SLAC and Lyon
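A toy version of such a catalogue query is sketched below; the schema, paths, and dataset names are invented, and sqlite3 stands in for the MySQL/Oracle databases each site actually runs.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE files (
    path     TEXT PRIMARY KEY,   -- location in the Kanga directory tree
    dataset  TEXT,               -- analysis dataset name
    channel  TEXT,               -- decay channel
    run_min  INTEGER,            -- run range covered by this file
    run_max  INTEGER,
    version  TEXT                -- reconstruction processing version
);
INSERT INTO files VALUES
  ('/kanga/r10/B0toJpsiKs/0001.root', 'B0toJpsiKs-run1', 'B0 -> J/psi Ks', 100, 199, 'r10'),
  ('/kanga/r10/B0toJpsiKs/0002.root', 'B0toJpsiKs-run1', 'B0 -> J/psi Ks', 200, 299, 'r10'),
  ('/kanga/r9/BchToDpi/0001.root',    'BchToDpi-run1',   'B+ -> D0 pi+',   100, 199, 'r9');
""")

# Locate the ROOT files for one dataset, restricted by processing
# version and run range -- the kind of selection the slide describes.
rows = db.execute(
    "SELECT path FROM files WHERE dataset = ? AND version = ? AND run_min >= ?",
    ("B0toJpsiKs-run1", "r10", 200),
).fetchall()
```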

Data Distribution
- Kanga and Objectivity distribution currently handled by home-grown procedures
- Use bbftp; bbcp soon; will look at GridFTP
- Next step is to run transfers using Grid job submission
- Web control pages under development
  - Authorisation done using Grid certificates
- Looking at SRB and RLS for data distribution
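Running transfers as Grid jobs amounts to batching the local catalogue's "files to import" list into submittable units; a minimal sketch, with the batch size and job format invented for illustration:

```python
def transfer_jobs(pending, batch_size=50, tool="bbftp"):
    """Group a 'files to import' list into batches, one transfer job per
    batch, each ready to be submitted as a Grid job.  The job dict is a
    stand-in for whatever the real submission machinery expects."""
    return [
        {"files": pending[i:i + batch_size], "tool": tool}
        for i in range(0, len(pending), batch_size)
    ]

# Invented example: 120 pending files become three bbftp jobs.
jobs = transfer_jobs(["f%03d.root" % n for n in range(120)], batch_size=50)
```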

Summary
- BaBar already has a highly distributed analysis environment
- RAL Tier A saves BaBar!
- Want to use Grid job submission tools – now
- Looking at SRB and RLS