13/3/02 M. Mazzucato – LCG – Launch Week – Slide 1

LCG Grid Deployment Session
Mirco Mazzucato, INFN-Padova
LCG Launch Week
Grid Deployment and Regional Centre Policies Area Manager
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 2

Grid Deployment area in LCG Workshop
- The Grid domain is split into three areas (see Les's introductory talk):
  - Grid Technology – Fabrizio Gagliardi: ensuring that the appropriate middleware is available; deals with Grid projects' architectures, developments and testbeds
  - Grid Deployment: Regional Centre & Grid Deployment Policy – Mirco Mazzucato: authentication, authorisation, formal agreements, computing rules, sharing, reporting, accounting, ...
  - Data Challenge & Grid Operation – open post: a stable, reliable, manageable Grid for Data Challenges and regular production work
- Approach to this presentation:
  - Provide input for a productive discussion on deployment activities
  - Some caveats: some experiments are still re-defining their plans, and some time is needed to make a detailed project workplan
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 3

Talk Outline
- Overview of the experiments' distributed production activities and DC plans
- Grid Deployment area role and scope
- Possible domains of common activities
- Candidate RTAGs for the SC2
- Short-term program
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 4

ALICE High-level Goals & Priorities for Data Challenges
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 5

ALICE Data & Physics Challenges – Objectives
- Performance milestones up to 2006 (focused on the CERN fabric):
  - Demonstrate increasing bandwidth over the complete chain up to Mass Storage
  - Demonstrate an increasing total amount of data written to Mass Storage
  - Final goal: 1.25 GB/s to mass storage
- Physics milestones:
  - Three large physics challenges up to 2006
  - Large distributed productions already done over AliEn
- Immediate needs for ADC IV (2002): hardware, Linux support, and CASTOR development to solve the problems seen in ADC III
- Data Challenges have to be seen as a priority by the LCG, and there must be a clear commitment to them
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 6

ALICE GRID plans
- Develop AliEn as a meta-GRID interface to different middleware (sketched below)
- Propose AliEn as a candidate common solution
- Actively participate in GRID testbeds
- Use the GRID infrastructure for physics and data challenges
(Diagram: the AliEn User Interface sits on top of the AliEn, iVDGL and EDG stacks)
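A minimal sketch of the "meta-grid" idea behind these plans: one user-level interface with pluggable back-ends for different middleware stacks. This is not ALICE/AliEn code; every class and method name here is a hypothetical illustration.

```python
# Illustration of a meta-grid adapter layer: a common job-submission interface
# with interchangeable back-ends for different middleware stacks.
from abc import ABC, abstractmethod


class GridBackend(ABC):
    """Common interface a meta-grid layer could expose over any middleware stack."""

    @abstractmethod
    def submit(self, job_description: dict) -> str:
        """Submit a job, return a backend-specific job identifier."""

    @abstractmethod
    def status(self, job_id: str) -> str:
        """Return a normalised status string (e.g. 'queued', 'running', 'done')."""


class EDGBackend(GridBackend):
    def submit(self, job_description: dict) -> str:
        # In reality this would translate the description to the broker's job
        # language and call the EDG services; here we only illustrate the adapter role.
        return f"edg-{hash(frozenset(job_description.items())) & 0xffff:04x}"

    def status(self, job_id: str) -> str:
        return "queued"


class AliEnBackend(GridBackend):
    def submit(self, job_description: dict) -> str:
        return f"alien-{hash(frozenset(job_description.items())) & 0xffff:04x}"

    def status(self, job_id: str) -> str:
        return "queued"


def submit_everywhere(job: dict, backends: list[GridBackend]) -> list[str]:
    """The user-facing call: one job description, any number of stacks."""
    return [backend.submit(job) for backend in backends]


if __name__ == "__main__":
    job = {"executable": "aliroot", "events": "1000"}
    print(submit_everywhere(job, [AliEnBackend(), EDGBackend()]))
```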
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 7

ALICE GRID resources
http://www.to.infn.it/activities/experiments/alice-grid
- 37 people, 21 institutions
- Sites: Yerevan, CERN, Saclay, Lyon, Dubna, Cape Town (ZA), Birmingham, Cagliari, NIKHEF, GSI, Catania, Bologna, Torino, Padova, IRB, Kolkata (India), OSU/OSC, LBL/NERSC, Merida, Bari, Nantes
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 8

GENIUS DataGRID portal
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 9

ATLAS Data Challenges – DC1 (2002)
- Goals:
  - Reconstruction & analysis on a large scale
  - Data management
  - Use (evaluate) database technology
  - Learn about distributed analysis
- Need to produce data for the High Level Trigger TDR, due by end 2002
- Involvement of CERN and outside-CERN sites
- Use of the GRID as and when possible and appropriate
- Due to 'conflicting' requirements, it was decided to split DC1 into two phases:
  - Phase I (April-July 2002): primary concern is the delivery of events to the HLT community
  - Phase II (September-December 2002):
    - Introduction & testing of the new Event Data Model (EDM)
    - Evaluation of 'persistency'
    - Intensive use of Geant4
    - Software ported to the GRID environment
    - Testing of the computing model
    - Testing of distributed analysis using AOD
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 10

ATLAS – Tools for DC1
- The plan is to use Grid tools as much as possible
- Job submission
- Data replication: GDMP
  - gdmp_stage for CASTOR at CERN, HPSS at Lyon, and the ATLAS datastore at RAL
- Data moving: globus-url-copy (sketched below)
- Catalogue: Magda (from PPDG)
- AMD (ATLAS Metadata Database) is being developed (Grenoble)
- Close collaboration between the ATLAS Grid and ATLAS DC communities
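A hedged sketch of the data-moving step: driving the globus-url-copy client from a production-style script. The GridFTP endpoints and file names are placeholders, and a valid grid proxy plus the Globus client tools are assumed to be in place.

```python
# Bulk file movement via the globus-url-copy client, roughly as a DC1
# production script might do it.  Hostnames and file names are placeholders.
import subprocess


def move_file(source_url: str, dest_url: str) -> None:
    """Copy one file between GridFTP endpoints, raising on failure."""
    subprocess.run(["globus-url-copy", source_url, dest_url], check=True)


if __name__ == "__main__":
    files = ["dc1.002000.simul._00001.root"]  # placeholder file names
    for name in files:
        move_file(
            f"gsiftp://source-se.example.org/atlas/dc1/{name}",
            f"gsiftp://dest-se.example.org/atlas/dc1/{name}",
        )
```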
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 11

Expressions of Interest (not exhaustive...)
- EDG & DC1 sites: CERN; INFN (CNAF, Milan, Roma1, Naples); CC-IN2P3 Lyon; IFIC Valencia; Karlsruhe; UK (RAL, Birmingham, Cambridge, Glasgow, Lancaster)
- Also: FCUL Lisboa, "Nordic" cluster, Prague
- BNL
- Russia (RIVK BAK): JINR Dubna, ITEP Moscow, SINP MSU Moscow, IHEP Protvino
- Alberta, Tokyo, Taiwan, Melbourne, ...
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 12

ATLAS Data Challenges – DC2 (2003)
- DC2: Spring-Autumn 2003
- The scope will depend on what has been achieved in DC1
- At this stage the goals include:
  - Use of the 'TestBed' that will be built in the context of Phase 1 of the LHC Computing Grid Project
    - Scale: a sample of 10^8 events
  - Extensive use of the GRID middleware
  - Provide data for the physics community
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 13

Grid tools for ATLAS DC2
- Distributed production, simulation, reconstruction and analysis
- Some maturity of 'grid' services expected:
  - Automatic 'splitting' and 'gathering' of long jobs (sketched below)
  - Selection of the best available site for each job
  - Monitoring on a 'gridified' logging and bookkeeping system
  - Interface to a full 'replica catalogue' system
  - Transparent access to data on different MSS systems
  - Grid accounting system
- We intend to follow what is going on in the various Grid projects (EDG and others)
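An illustrative sketch, not an ATLAS tool, of the splitting/gathering idea: one long request is cut into fixed-size event ranges, each range is submitted as an independent grid job, and the partial outputs are collected for a final merge.

```python
# Split a long simulation request into event ranges, gather the outputs.
from dataclasses import dataclass


@dataclass
class SubJob:
    first_event: int
    n_events: int
    output: str


def split(total_events: int, events_per_job: int) -> list[SubJob]:
    """Turn one long request into many short, independently schedulable jobs."""
    jobs = []
    first = 0
    while first < total_events:
        n = min(events_per_job, total_events - first)
        jobs.append(SubJob(first, n, f"part_{first:09d}.root"))
        first += n
    return jobs


def gather(jobs: list[SubJob]) -> list[str]:
    """Collect the partial output names for a final merge step."""
    return [job.output for job in jobs]


if __name__ == "__main__":
    # the 10^8-event scale quoted for DC2, with a made-up chunk size
    subjobs = split(total_events=100_000_000, events_per_job=500_000)
    print(len(subjobs), "sub-jobs; first:", subjobs[0])
    print("outputs to merge:", len(gather(subjobs)))
```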
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 14

CMS Data Production 2002
- Needed for the DAQ TDR – not a Data Challenge!
- 12 Regional Centres (24 sites); 9 have already started official production: Caltech, CERN, FNAL, IC, INFN, Moscow, UCSD, UFL, (IN2P3), plus Bristol, HIP, Wisconsin
- Use of common software: IMPALA, BOSS, RefDB, DAR executables + geometry file
- Common test assignment
- File transfer based on GDMP and bbftp
- Pilot sites in the USA will use IMPALA/MOP to submit production jobs to grid resources
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 15

CMS CPU resources
- As of 4/3/02: 700 active CPUs, plus 400 CPUs to come
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 16

CMS Grid-enabled test beds in 2002
- CMS-DataTAG test bed being set up now:
  - IMPALA/BOSS + DataGrid release 1.1.1 software
  - Test bed to be opened to US sites in a few weeks
- CMS US grid testbed already set up:
  - Productions with MOP
  - Develop the Virtual Data Grid System
- Single InterGrid testbed late this year:
  - Test the integration of EDG and the Virtual Data System
  - Increase the scale of the tests
- CMS Grid Integration Task started in December 2001; the Grid Implementation Plan is to be delivered soon
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 17

CMS: CPT Level 1 Milestones (V31, not including any LHC delays)
(Timeline chart: milestones plotted relative to LHC beam, with 3 yr, 2 yr and 1 yr markers; Physics TDR; D.P.S.)
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 18

CMS plans for the next years
- Change of baseline choice for persistency
- New framework to be delivered as soon as the DAQ TDR is released
- Need to re-think the 2003-and-later Data Challenge plans
- Need to get back on the doubling track (x2 each year in complexity)
- Analysis issues are the least well understood
- Grid tool tests will go on in parallel
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 19

LHCb – Data Challenge 1 (2002)
- Principal goal: to check the performance of the overall LHCb software and data processing model for large-scale data flows
  - Total: ~5x10^6 events/month (~20 TB/month in total; see the consistency check below)
- Other possible goals:
  - If stable Grid technology is available, LHCb will test its performance in large-scale tests
  - If the tests are very successful, LHCb will use the data as 'production' data
- LHCb plan and schedule:
  - Two periods of one month each (3-30 June and 26 August - 22 September)
  - 100-150 CPUs dedicated at CERN (total power ~3000 SI95)
  - ~600 CPUs possibly available outside (total power ~10000 SI95); availability still being negotiated
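A quick back-of-envelope check that the two quoted rates are mutually consistent; only the 20 TB/month and 5x10^6 events/month figures from the slide are used.

```python
# Consistency check of the quoted DC1 figures: 20 TB/month over 5e6 events/month.
events_per_month = 5e6
terabytes_per_month = 20.0

bytes_per_event = terabytes_per_month * 1e12 / events_per_month
print(f"average event size ~ {bytes_per_event / 1e6:.1f} MB")  # ~4 MB/event
```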
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 20

LHCb: Data Challenge 1 – the components to test
- Prototype of the new data management databases
- Configuration of the LHCb environment at remote sites, and installation kits
- Monitoring and control with PVSS
- Integrability of the EDG testbed into our production system
- Data quality checking; evaluate DaVinci-based tools (analysis)
- Use "push" MC job submission initially (sketched below); perhaps test a first version of the pull mechanism during the second half
- Executables to be tested in production: Sicbmc, Brunel (reconstruction), DaVinci (analysis)
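A minimal sketch of the "push" model mentioned above: a central production manager decides up front where each Monte Carlo job goes and pushes it there. The site names, free-CPU counts and assignment policy are illustrative assumptions, not LHCb tooling.

```python
# Push model: central assignment of jobs to sites before submission.
import itertools

SITES = {"CERN": 150, "SITE_A": 200, "SITE_B": 100}  # hypothetical free-CPU counts


def push_submit(jobs: list[dict]) -> dict[str, list[dict]]:
    """Central decision: spread jobs over sites in proportion to their capacity."""
    slots = list(itertools.chain.from_iterable([site] * n for site, n in SITES.items()))
    assignment: dict[str, list[dict]] = {site: [] for site in SITES}
    for job, site in zip(jobs, itertools.cycle(slots)):
        assignment[site].append(job)  # in reality: ship the job to the chosen site here
    return assignment


if __name__ == "__main__":
    jobs = [{"generator": "sicbmc", "events": 500} for _ in range(1000)]
    plan = push_submit(jobs)
    print({site: len(batch) for site, batch in plan.items()})
```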
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 21

LHCb: Data Challenge 1 – participating sites
- N.B. LHCb has been running distributed MC production for several years, so much of the basic organisation is in place; the need is to move to new software and technology without prejudicing production and physics
- CERN (Tier-0/1)
- Bologna, IN2P3, RAL, NIKHEF (Tier-1)
- Edinburgh/Glasgow, Liverpool, Oxford, Bristol (Tier-2/3)
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 22

LHCb: Data Challenge 2 (2003)
- Production version of the new data management databases
- Implementation of the "pull" MC job submission system, exploiting the EDG resource broker (sketched below)
  - Stress-test the Grid "philosophy" of submitting jobs without worrying where the job will run or where the output will be stored
- Monitoring and control: evaluate the EDG middleware – how does it work with PVSS? Can it replace it?
- Make job submission tools independent of the remote sites
- Executables to be tested: (sicbmc), Gauss, Brunel, DaVinci, Gaudi
- Test of the first Ganga prototype
- Test sites: same sites as DC1, plus Barcelona, Switzerland, Germany, Moscow, Poland, Brazil (a mixture of Tier-1/2)
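By contrast, a schematic sketch of the "pull" model planned for DC2: the submitter only states requirements, jobs wait in a shared queue, and each site takes work when it has free capacity. This is not EDG resource-broker code; names and numbers are made up for illustration.

```python
# Pull model: sites take matching work from a shared queue; no central placement.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Job:
    name: str
    min_si95: int  # required CPU power, in the SI95 units used in the slides


@dataclass
class Site:
    name: str
    si95_per_cpu: int
    free_cpus: int
    running: list[Job] = field(default_factory=list)

    def pull(self, queue: deque[Job]) -> None:
        """Take matching jobs from the shared queue while capacity remains."""
        while self.free_cpus > 0 and queue and queue[0].min_si95 <= self.si95_per_cpu:
            self.running.append(queue.popleft())
            self.free_cpus -= 1


if __name__ == "__main__":
    queue = deque(Job(f"gauss_{i:04d}", min_si95=20) for i in range(300))
    sites = [Site("CERN", 25, 150), Site("SITE_A", 30, 60), Site("SITE_B", 25, 80)]
    for site in sites:  # each site pulls independently
        site.pull(queue)
    print({s.name: len(s.running) for s in sites}, "left in queue:", len(queue))
```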
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 23

LHC experiment DCs: some conclusions
- Some experiment DC plans are still being re-defined (phase-out of Objectivity, new LHC schedule)
- All LHC experiments are interfacing/adapting their production environments to the available Grid services
- Quite a significant number of Tiers are already involved in large-scale distributed productions and Grid testbeds (G. Wormser)
- Experience is being gained in distributed collaborative computing efforts directed toward a common goal (CMS and LHCb world-wide productions, the world-wide ALICE Grid testbed)
- Centrally defined priorities, shared policies and centrally defined production environments are becoming common practice
- Many EU and US Tiers are keen to take part in experiment Data Challenges adopting common basic Grid components
- Selection of a basic set of grid services to be evaluated has started (independently) in almost all experiments
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 24

LCG Grid Deployment scope: activities (1)
- Here are some ideas to be discussed by the experiments and the Tiers
- LCG provides a framework for experiments and Tiers to:
  - Agree on a common set of basic Grid components to be adopted for the DCs
  - Evaluate/select the existing component flavours to be deployed, focusing on the LHC experiments' DC needs (robustness versus functionality)
  - Define Grid component deployment schedules, providing a solid foundation for the LHC experiments' DC plans
  - Create support practices and structures for the common Grid services, for the Tiers and the experiments' users
- Long-term planning: input from common Use Cases to steer the future deployment of LCG Grid services (SC2 RTAG)
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 25

LCG Grid Deployment scope: activities (2)
- Get input from the four LHC experiments and the Tiers to prepare and deploy the DC infrastructure world-wide
  - Connected and contiguous to the Grid project testbeds
  - An island of stability in the continuously moving sea of prototypes
- Involve the Tiers to define deployment plans and schedules
- Get input from the SC2 RTAG on:
  - common solutions for the production and analysis architecture
  - common interfaces to Grid services for production and analysis
- Support the integration of increasingly complex Grid architectures into the experiments' production and analysis environments
- Iterative deployment cycle focused on the DC schedules
  - For DC1, favour robustness over functionality and scalability?
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 26

LCG Grid Deployment: the social issue
- Grid deployment is not a purely technical issue; it has many political and social implications
- Need to define the roles of the experiments and the Tiers:
  - Normally the experiments define the priorities and central policies needed to reach their goals
  - The Tier centres must guarantee implementation and compatibility with local policies
  - Together they should define what is common (LCG) and what is experiment-specific
- Grid middleware technically enables a VO to share resources, but does not automatically permit it:
  - e.g. local authorisation must be granted by the owner of the resources, independently of the chosen technology
  - e.g. accounting policies for sharing must be agreed
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 27

LCG Grid Deployment scope: activities (3)
- Establish mutual confidence and a trust relationship between the experiments and the Regional Centre people
- Provide a solid foundation for the experiments' DCs by defining shared distributed management policies and agreements between Institutions/Tiers (an MoU?)
- Create specialised teams for the different topics:
  - how Security (Authentication, Authorization, Accounting) and acceptable sharing of resources can be handled in a way satisfactory for both experiments and Tiers
  - create a common vision and persuade sites to trust each other
- Establish common practices in Tier user management through exchanges of people and of tools
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 28

Security activities: Authentication
- All grid projects rely on the Globus Security Infrastructure (GSI), based on a Public Key Infrastructure (PKI) and X.509-format certificates (see the sketch below)
- The EDG WP6 Certificate Authorities group (D. Kelsey) has defined procedures for authentication and trust
  - Already used in TB1: CERN, Czech Republic, France, Ireland, Italy, Netherlands, Nordic, Portugal, Russia, Spain, UK
  - The US DOE CA is now in operation and being included; the same goes for Karlsruhe
- Is this good enough for the LCG DCs?
- The Tier centres need to assess/endorse the above procedures – formal agreements, MoUs?
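As a small illustration of the certificate-based identity that GSI authenticates, the sketch below reads the subject and issuer out of a PEM-encoded X.509 user certificate with the Python cryptography library (a reasonably recent version is assumed). Real GSI additionally handles proxy certificates, CA signature verification and revocation, none of which is shown here.

```python
# Inspect the identity fields of a PEM-encoded X.509 user certificate.
from cryptography import x509


def describe_certificate(pem_path: str) -> None:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    print("subject:", cert.subject.rfc4514_string())
    print("issuer :", cert.issuer.rfc4514_string())
    print("expires:", cert.not_valid_after.isoformat())


if __name__ == "__main__":
    describe_certificate("usercert.pem")  # conventional Globus user-certificate file name
```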
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 29

Security activities: Authorization
- In EDG TB1, each VO manages an LDAP directory:
  - members (ou=People); groups (e.g. ou=Testbed1)
- The grid-mapfiles used to authorize usage of a resource are generated on each site from the VO directories (sketched below):
  - looking up the members of the groups
  - according to the users' attributes (the Certificate Subject, for the moment)
  - according to the existence of an entry with the same Certificate Subject in an "Authorization Directory"
- Authorization tools are now evolving towards common Authorization Services
- How do these procedures match the RCs' current practices?
- What is the role of the experiments and the RCs in providing authorization?
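A hedged sketch of the grid-mapfile generation step described above: query the VO membership directory for certificate subjects and write the site's grid-mapfile mapping each subject to a local account. The LDAP URL, base DN and the attribute holding the subject DN are assumptions for illustration; the one-entry-per-line grid-mapfile format itself is the standard Globus one.

```python
# Build a site grid-mapfile from a VO membership directory (schematic).
import ldap  # python-ldap


def fetch_vo_subjects(url: str, base_dn: str) -> list[str]:
    conn = ldap.initialize(url)
    conn.simple_bind_s()  # anonymous bind, as the TB1 VO directories allowed
    results = conn.search_s(base_dn, ldap.SCOPE_SUBTREE, "(objectClass=*)", ["description"])
    subjects = []
    for _dn, attrs in results:
        for value in attrs.get("description", []):  # assumed attribute for the subject DN
            subjects.append(value.decode())
    return subjects


def write_gridmap(subjects: list[str], local_account: str, path: str = "grid-mapfile") -> None:
    """Write '"<subject DN>" local_account' lines, one VO member per line."""
    with open(path, "w") as f:
        for subject in sorted(set(subjects)):
            f.write(f'"{subject}" {local_account}\n')


if __name__ == "__main__":
    subs = fetch_vo_subjects("ldap://grid-vo.example.org", "ou=People,o=myvo,dc=example,dc=org")
    write_gridmap(subs, local_account="vo001")
```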
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 30

Security activities: Accounting
- General tools for accounting are not yet available (EDG release 2, SecureGrid..., US)
- HEP (at least CERN) has no tradition of billing, only of monitoring usage (illustrated below)
- The key issue is to establish a model for return on investment that favours both the deployment of local resources and their opening to international users
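A small illustration of "monitor usage rather than bill": aggregate CPU time per virtual organisation from job accounting records into a usage report. The record layout and the numbers are invented; a real accounting tool would collect such records from the batch systems at each site.

```python
# Aggregate per-VO CPU usage from (made-up) job accounting records.
from collections import defaultdict

records = [  # (VO, site, CPU seconds) - sample data for illustration only
    ("atlas", "CERN", 36_000),
    ("cms", "CERN", 54_000),
    ("atlas", "RAL", 18_000),
    ("lhcb", "IN2P3", 7_200),
]


def usage_by_vo(rows):
    totals = defaultdict(float)
    for vo, _site, cpu_seconds in rows:
        totals[vo] += cpu_seconds / 3600.0  # report in CPU hours
    return dict(totals)


if __name__ == "__main__":
    for vo, hours in sorted(usage_by_vo(records).items()):
        print(f"{vo:6s} {hours:8.1f} CPU hours")
```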
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 31

LCG Deployment Area: short-term goal
- Set up a working global grid that can be used for the data challenges
  - This is being done by the individual experiments; we need to do it
    - across US-Europe-Asia
    - and across experiments
- Take GLUE as a start:
  - it selects common components from existing Grid projects
  - it builds on the experience of the current testbeds
  - the European & US Grid projects will integrate the GLUE proposals
- Concentrate on productisation (= reliability, predictability)
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 32

Proposal – LCG deployment goals for the next 12 months
Three activities:
- Grid Services Integration – define the first "LCG Grid Services Package" (basic, robust, common):
  - start from GLUE as a basic set of common interoperable services
  - agree with the LHC experiments that these are sufficient for the short-term DCs (i.e. satisfy the RTAG requirements)
  - agree with the Regional Centres that these are compatible with their policies
  - make these services robust enough for the DCs
- Deployment:
  - agree which experiments and which regional centres are involved in the first deployment
  - set up the deployment plan in collaboration with the DC coordinators – WBS, resources, timescales
  - agree the plan with the SC2 (experiments, regional centres)
  - do it
- Integration of the LCG Grid Services Package into the experiments' production and analysis environments:
  - provide LCG support & expertise for the package
13/3/02 M. Mazzucato – LCG – Launch Week – Slide 33

Defining the "LCG Grid Services"
- Assume that the LCG Grid is deployed as a stable, robust, world-wide facility:
  - new functionality is added only after agreement by the experiments and the regional centres
  - at least 6 months between major releases
- Relation with the Grid technology projects:
  - LCG feeds back experience and suggestions
  - negotiate middleware support
  - negotiate the use of Grid project resources for the LCG production testbed
- The Grid projects define their own requirements (hopefully taking LCG into account) and run their own development testbeds
(SC2)