1 The DataGrid WorkPackage 8, F. Carminati, FOCUS, 28 June 2001

2 Distributed computing
- Basic principle: physicists must in principle have equal access to data and resources.
- The system will be extremely complex: many sites and many components at each site, with different tasks performed in parallel (simulation, reconstruction, scheduled and unscheduled analysis).
- The bad news is that the basic tools are missing: distributed authentication & resource management, a distributed namespace for files and objects, local resource management of large clusters, data replication and caching, WAN/LAN monitoring and logging.
- The good news is that we are not alone: these issues are central to developments in the US and in Europe under the collective name of GRID.

3 The GRID
“Dependable, consistent, pervasive access to [high-end] resources”
- Dependable: can provide performance and functionality guarantees
- Consistent: uniform interfaces to a wide variety of resources
- Pervasive: ability to “plug in” from anywhere

4 GLOBUS hourglass
- Focus on architecture issues: low participation cost, local control and support for adaptation; used to construct high-level, domain-specific solutions.
- A set of toolkit services (a usage sketch follows below): security (GSI), resource management (GRAM), information services (MDS), remote file management (GASS), communication (I/O, Nexus), process monitoring (HBM).
- [Hourglass diagram: applications and diverse global services on top, core Globus services in the neck, the local OS at the bottom.]
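These services were usually exercised through the toolkit's command-line clients; the sketch below shows how a testbed site might have driven them from Python, assuming a Globus Toolkit 2-era installation with grid-proxy-init, globus-job-run and globus-url-copy on the PATH. The host name, job-manager contact and file paths are placeholders, not taken from the slides.

```python
# Sketch only (not from the slides): driving GT2-era Globus command-line
# clients from Python. Host names, job-manager contact and paths are
# placeholders.
import subprocess

def run(cmd):
    """Echo a command and fail loudly if it returns non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# GSI: create a short-lived proxy credential from the user certificate.
run(["grid-proxy-init", "-valid", "12:00"])

# GRAM: run a trivial test job through a remote gatekeeper.
run(["globus-job-run", "testbed007.cern.ch/jobmanager-pbs", "/bin/hostname"])

# File staging: copy a remote output file back to local disk.
run(["globus-url-copy",
     "gsiftp://testbed007.cern.ch/data/run001/output.root",
     "file:///tmp/output.root"])
```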

5 DataGRID project
- Huge project: 21 partners, 10 MEuro, 3 years, 12 workpackages.
- Global design still evolving: an ATF (Architecture Task Force) has been mandated to design the architecture of the system, to be delivered at PM6 (= now!).
- Continuous feedback from users is needed. Users are in three workpackages: HEP in WP8, Earth Observation in WP9, Biology in WP10.

6 DataGRID Project workpackages (funded/total effort, FTE)
- WP1 Grid Workload Management: 11/18.6
- WP2 Grid Data Management: 6/17.6
- WP3 Grid Monitoring Services: 7/10
- WP4 Fabric Management: 5.7/15.6
- WP5 Mass Storage Management: 1/5.75
- WP6 Integration Testbed: 6/27
- WP7 Network Services: 0.5/9.4
- WP8 High Energy Physics Applications: 1.7/23.2
- WP9 Earth Observation Science Applications: 3/9.4
- WP10 Biology Science Applications: 1.5/6
- WP11 Dissemination and Exploitation: 1.2/1.7
- WP12 Project Management: 2.5/2.5

7 WP8 mandate
- Partners: CNRS, INFN, NIKHEF, PPARC, CERN.
- Coordinate exploitation of the testbed by the HEP experiments; installation kits have been developed by all experiments.
- Coordinate interaction of the experiments with the other WPs and the ATF: monthly meetings of the WP8 TWG, common feedback to the WPs and to the ATF.
- Identify common components that could be integrated in an HEP “upper middleware” layer.
- Large unfunded effort (776 m/m, ~22 people); very small funded effort (60 m/m, ~1.5 people).

8 The multi-layer hourglass
[Layered diagram: a specific application layer (ALICE, ATLAS, CMS, LHCb; Earth Observation WP9; Biology WP10) sits on a HEP VO common application layer, which sits on the DataGRID middleware (alongside PPDG, GriPhyN, EuroGRID), which in turn uses the bag of services provided by GLOBUS on top of the OS & network services. Responsibilities are split between the GLOBUS team, the DataGRID ATF and the WP TWG.]

9 ALICE Site list & Resources
[Table of per-site resources (CPU in SI95, disk in GB, tape in TB, network in Mb/s, people) for: Bari, Birmingham (RAL?), Bologna, Cagliari, Catania, CERN (CASTOR), Dubna, GSI (D, robot), IRB (Kr), Lyon (HPSS), Merida (MX), NIKHEF (NL, robot), OSU (US, HPSS), Padova, Saclay, Torino and UNAM (MX).]

10 ALICE – Testbed Preparation
- ALICE installation kit (M. Sitta).
- List of ALICE users (& certificates).
- Distributed production test (R. Barbera).
- HPSS/CASTOR storage (Y. Schutz).
- Logging/bookkeeping (Y. Schutz): input taken from stdout & stderr and stored in MySQL/ORACLE (a sketch follows below).
- Integration in DataGRID WP6:
  - 6/2001: list of ALICE users (& certificates) to WP6
  - 7/2001: distributed test with the bookkeeping system
  - 8/2001: distributed test with PROOF (reconstruction)
  - 9/2001: test with DataGRID WP6 resources
  - 12/2001: test of the M9 GRID services release (preliminary)
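For the logging/bookkeeping item, a minimal sketch of the idea (capture a job's stdout, extract a few summary values, record them in a database) is shown below. Python's built-in sqlite3 stands in for the MySQL/ORACLE back end named on the slide, and the log format and table layout are invented for illustration.

```python
# Illustrative bookkeeping sketch: parse a job's stdout and record a summary
# row. sqlite3 stands in for the MySQL/ORACLE back end on the slide; the log
# format and table layout are invented.
import re
import sqlite3

def parse_log(stdout_path):
    """Extract (events, cpu_seconds) from a job log; the format is hypothetical."""
    events, cpu = 0, 0.0
    with open(stdout_path) as f:
        for line in f:
            if m := re.match(r"Processed event\s+(\d+)", line):
                events = int(m.group(1))
            elif m := re.match(r"Total CPU time:\s+([\d.]+)", line):
                cpu = float(m.group(1))
    return events, cpu

def record(db_path, job_id, site, stdout_path):
    """Store one summary row per job, keyed by job_id."""
    events, cpu = parse_log(stdout_path)
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS jobs
                   (job_id TEXT PRIMARY KEY, site TEXT,
                    events INTEGER, cpu_seconds REAL)""")
    con.execute("INSERT OR REPLACE INTO jobs VALUES (?, ?, ?, ?)",
                (job_id, site, events, cpu))
    con.commit()
    con.close()

if __name__ == "__main__":
    record("bookkeeping.db", "alice-000123", "Torino", "job_000123.out")
```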

11 ALICE distributed analysis model
[Diagram: a local selection (parameters and a procedure, Proc.C) is shipped via PROOF to remote CPUs sitting next to the tag DB, the RDB and the distributed data sets (DB1–DB6).]
Bring the KB to the PB and not the PB to the KB.

12 AliRoot – DataGRID Services
- Reconstruction & analysis for the ALICE Physics Performance Report will be the short-term use case.
- ALICE Data Catalog being developed for data selection (files, to become objects soon).
- TGlobus class almost completed: interface to access GLOBUS security from ROOT.
- TLDAP class being developed: interface to access IMS from ROOT.
- Parallel socket classes (TPServer, TPServerSocket), TFTP class, modified root daemon: parallel data transfer (illustrated below).
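The parallel data transfer idea behind these classes, splitting one payload across several TCP streams and reassembling it at the far end, can be illustrated with the localhost-only Python sketch below. It is not the ROOT protocol; the fixed-width offset header is invented for this sketch.

```python
# Localhost illustration of parallel-stream transfer (the idea behind ROOT's
# TPSocket/TFTP classes): one payload is split into N ranges, each sent over
# its own TCP connection and reassembled by offset. Not the ROOT protocol.
import socket
import threading

N_STREAMS = 4
PORT = 9911
DATA = bytes(range(256)) * 4096          # ~1 MB of test payload

ready = threading.Event()                # set once the receiver is listening
chunks = {}                              # offset -> received bytes
lock = threading.Lock()

def recv_exact(conn, n):
    buf = b""
    while len(buf) < n:
        buf += conn.recv(n - len(buf))
    return buf

def handle(conn):
    offset = int(recv_exact(conn, 10))   # fixed-width ASCII offset header
    data = b""
    while piece := conn.recv(65536):     # read until the sender closes
        data += piece
    with lock:
        chunks[offset] = data
    conn.close()

def receiver():
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(N_STREAMS)
    ready.set()
    handlers = []
    for _ in range(N_STREAMS):
        conn, _ = srv.accept()
        t = threading.Thread(target=handle, args=(conn,))
        t.start()
        handlers.append(t)
    for t in handlers:
        t.join()
    srv.close()

def send_range(offset, payload):
    s = socket.create_connection(("127.0.0.1", PORT))
    s.sendall(b"%10d" % offset)
    s.sendall(payload)
    s.close()

rx = threading.Thread(target=receiver)
rx.start()
ready.wait()
step = len(DATA) // N_STREAMS
senders = [threading.Thread(target=send_range,
                            args=(i * step, DATA[i * step:(i + 1) * step]))
           for i in range(N_STREAMS)]
for t in senders:
    t.start()
for t in senders:
    t.join()
rx.join()
assert b"".join(chunks[k] for k in sorted(chunks)) == DATA
print("reassembled", len(DATA), "bytes over", N_STREAMS, "streams")
```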

13 ATLAS Grid Activities - 1
- ATLAS physicists are involved, and play relevant roles, in the EU and US grid projects started in the last year (EU DataGrid, US GriPhyN and PPDG); Japan is also active.
- The computing model is based on Tier-1, Tier-2 and local Tier-3 centres connected using the grid; Data Challenges are planned for testing and tuning of the grid and of the model.
- Work done so far: requirements and use case collection, first tests of the available grid services, interaction with the different projects for cooperation and at least interoperability.

14 ATLAS Grid Activities - 2
- The ATLAS Grid Working Group (EU, US, Japan, etc.) meets regularly during the software weeks, and two ad hoc workshops have been held.
- Jobs have been run between different ATLAS sites using Globus plus additions.
- A software kit for ATLAS Geant3 simulation + reconstruction on any Linux 6.2 or 7.1 machine is available and installed at INFN sites, CERN, Glasgow and Grenoble; inclusion of Athena-Atlfast is being studied.
- Production-style jobs have been run between Milan, CERN and Rome; a run using the kit is planned for July, also involving Glasgow, Grenoble and possibly Lund.

15 ATLAS Grid Activities - 3
- Grid tests also using Objectivity federations are planned for summer-autumn.
- ATLAS is involved in the proposed EU DataTAG and US iVDGL projects; a transatlantic testbed is among the deliverables. This is very important for testing EU-US interoperability and for closer cooperation between grid projects.
- Links are needed at the level of the projects: at experiment level the links within ATLAS are excellent, but this is not enough.
- Tests are already going on between the US and the EU, but a lot of details (authorization, support) remain to be sorted out. More CERN involvement is requested.

16 CMS Issues
- CMS is a “single” collaboration, so coordination is required among DataGrid, GriPhyN, PPDG and also INFN-Grid.
- “Production” (Data Challenges) and the CPT project are the driving efforts: Computing & Core Software (CCS) + Physics Reconstruction & Selection (PRS) + Trigger and Data Acquisition Systems (TriDAS).
- Since the Grid is only one of the CMS computing activities, not all CMS sites and people are committed to the Grid; therefore (only) some of the CMS sites will be involved in the first testbed activities.
- The “grid” of sites naturally includes EU and US sites (plus other countries/continents).

17 Current CMS Grid Activities
- Definition of the CMS application requirements to the Grid.
- Planning and implementation of the first tests on “proto-Grid” tools:
  - use of GDMP in a “real production” distributed environment [done]
  - use of CA authorization in a distributed-sites scenario [tested]
  - use of some Condor distributed (remote) submission schemes [done] (a submission sketch follows below)
- Evaluate the PM9 deliverables of WP1 and WP2 (Grid Scheduler and enhanced Replica Management).
- Test dedicated resources to be provided by the participating sites, to provide a reasonably complex and powerful trial and to allow “quasi-real” environment testing for the applications.
- CERN has a central role in these tests, and adequate resource support has to be provided alongside the support needed for the experiments' Data Challenges.
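As an illustration of the Condor remote-submission item, the sketch below writes a Condor-G style submit description and hands it to condor_submit. The gatekeeper contact, executable and file names are placeholders, and the attribute names are assumed to follow the Condor-G syntax of that era rather than being taken from the CMS setup.

```python
# Illustrative only: remote submission through Condor-G by writing a submit
# description and calling condor_submit. Gatekeeper contact, executable and
# file names are placeholders; 'universe = globus' / 'globusscheduler'
# follow the Condor-G syntax of the time and may differ in other versions.
import subprocess
import textwrap

submit_description = textwrap.dedent("""\
    universe        = globus
    globusscheduler = testbed010.cern.ch/jobmanager-lsf
    executable      = cmsim_wrapper.sh
    arguments       = run42.cards
    output          = run42.out
    error           = run42.err
    log             = run42.log
    queue
""")

with open("run42.submit", "w") as f:
    f.write(submit_description)

subprocess.run(["condor_submit", "run42.submit"], check=True)
```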

18 Current LHCb Datagrid activities
- Distributed MC production at CERN, RAL, Liverpool and Lyon; extending to NIKHEF, Bologna and Edinburgh/Glasgow by end 2001.
- Current Testbed-0 tests using CERN and RAL (Globus problems encountered); will extend to other sites in the next few months.
- Using parts of the MC production system for the ‘short term use case’.
- Active participation in WP8 from the UK, France, Italy, NIKHEF and CERN (15 people part-time); more dedicated effort is needed.

19 LHCb Short term Use Case for GRID
1. Production is started by filling out a Web form specifying: version of the software, acceptance cuts, database, channel, number of events to generate, number of jobs to run, and the centre where the jobs should run.
2. The Web form calls a Java servlet that: creates a job script (one per job); creates a cards file (one to three per job) with random-number seeds and job options (the cards files need to be accessible by the running job); issues a job submit command to run the script in batch -> WP1.
3. The script does the following (a rough sketch follows below): copies the executable, detector database and cards files; executes the executable, which creates the output dataset; copies the output to the local mass store -> WP5; copies the log file to a web-browsable area; calls a Java program (see 4).
4. The Java program calls a servlet at CERN to: transfer the data back to CERN -> WP2; update the meta-database at CERN -> WP2.
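A rough Python transcription of step 3 is given below to show the shape of the wrapper. Every path, program name and URL is hypothetical; the real system used shell scripts and Java servlets, so this is only a sketch of the workflow, not the LHCb code.

```python
# Rough sketch of step 3 only. All paths, program names and the callback URL
# are hypothetical; the real LHCb system used shell scripts and Java servlets.
import os
import shutil
import subprocess
import urllib.request

JOB = "mc_job_0042"
WORKDIR = f"/tmp/{JOB}"
os.makedirs(WORKDIR, exist_ok=True)

# Copy executable, detector database and cards file into the work area.
for src in ("/prod/bin/sicbmc.exe", "/prod/db/detector.db",
            f"/prod/cards/{JOB}.cards"):
    shutil.copy(src, WORKDIR)

# Execute the executable; it creates the output dataset in the work area.
with open(f"{WORKDIR}/{JOB}.log", "w") as log:
    subprocess.run([f"{WORKDIR}/sicbmc.exe", f"{JOB}.cards"],
                   cwd=WORKDIR, stdout=log, stderr=log, check=True)

# Copy the output to the local mass store (-> WP5) and the log file to a
# web-browsable area.
shutil.copy(f"{WORKDIR}/{JOB}.dst", "/massstore/lhcb/prod/")
shutil.copy(f"{WORKDIR}/{JOB}.log", "/var/www/html/prod-logs/")

# Call the servlet at CERN so the data are transferred back and the
# meta-database is updated (-> WP2).
urllib.request.urlopen(
    f"http://lhcb-prod.cern.ch/servlet/register?job={JOB}&status=done")
```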

20 LHCb – Medium to long term planning
- Would like to work with the middleware WPs from now on, helping to test early prototype software with the use case (parts of the MC production system).
- More generally, would like to be involved in the development of the architecture for the long term.
- Writing a planning document based on ‘use scenarios’; will develop a library of use scenarios for testing DataGrid software.
- Starting a major project interfacing the LHCb OO software framework GAUDI to GRID services.
- Awaiting recruitment of ‘new blood’.

21 GAUDI and external services (some GRID based)
[Diagram of the GAUDI framework: an Application Manager steers algorithms that work on transient stores (transient event store, transient detector store, transient histogram store), each fed through its data service (event data service, detector data service, histogram service) and a persistency service with converters. Further services include the event selector, message service, JobOptions service, particle properties service and others. External components, some of them GRID-based, include the OS, mass storage, the event database, the PDG database, the dataset DB, a monitoring service, a histogram presenter and job/configuration services, used by an analysis program.]
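To make the separation in the diagram concrete, here is a toy sketch of the same pattern (it is not the GAUDI API): algorithms only see transient data handed to them by services owned by an application manager, so a persistency or data service can later be re-implemented on top of GRID middleware without touching algorithm code. All class and field names below are invented for illustration.

```python
# Toy illustration of the framework/services split in the diagram above.
# Not the GAUDI API; names and data layout are invented.

class LocalFilePersistency:
    """Stand-in back end; could later be replaced by a GRID-based one."""
    def __init__(self, n_events=3):
        self.n, self.i = n_events, 0
    def read_next(self):
        if self.i >= self.n:
            return None
        self.i += 1
        return {"event_number": self.i, "tracks": [1.0, 2.5, 3.7]}

class EventDataService:
    """Fills the transient event store from some persistency back end."""
    def __init__(self, persistency):
        self.persistency = persistency
    def next_event(self):
        return self.persistency.read_next()

class TrackCountAlgorithm:
    """An algorithm only sees the transient event it is given."""
    def execute(self, event, msg):
        msg(f"event {event['event_number']}: {len(event['tracks'])} tracks")

class ApplicationManager:
    """Owns the services and drives the algorithms over the event loop."""
    def __init__(self):
        self.event_svc = EventDataService(LocalFilePersistency())
        self.algorithms = [TrackCountAlgorithm()]
    def run(self):
        while (event := self.event_svc.next_event()) is not None:
            for alg in self.algorithms:
                alg.execute(event, msg=print)

ApplicationManager().run()
```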

22 Issues and concerns
- After a very difficult start, the project now seems to be picking up speed; interaction with the WPs and the ATF was difficult to establish.
- The experiments are very active and a large unfunded effort is ongoing (approximately 5-6 people per experiment), but funded effort is still very slow in coming (2 out of 5!).
- The CERN testbed is still insufficiently staffed & equipped.
- DataGrid should have a central role in the LHC computing project; this is not the case now. The GRID testbed and the LHC testbed should merge soon.
- Interoperability with the US Grids is very important: we have to keep talking with them and open the testbed to US colleagues.