
1 Robin Middleton, RAL/PPD, DG Co-ordination, Rome, 23rd June 2001

2 GridPP Collaboration Meeting: 1st GridPP Collaboration Meeting, Cosener's House, 24-25 May 2001

3 GridPP History
Collaboration formed by all UK PP experimental groups in 1999 to submit a £5.9M JIF bid for a prototype Tier-1 centre at RAL (later superseded)
Added some Tier-2 support to become part of the PPARC LTSR, "The LHC Computing Challenge", as input to SR2000
Formed GridPP in Dec 2000, now including CERN, CLRC and the UK PP theory groups
From Jan 2001, handling PPARC's commitment to the EU DataGrid in the UK

4 Proposal Executive Summary
£40M 3-year programme
LHC Computing Challenge = Grid technology
Five components:
– Foundation
– Production
– Middleware
– Exploitation
– Value-added Exploitation
Emphasis on Grid services and core middleware
Integrated with EU DataGrid, PPDG and GriPhyN
Facilities at CERN, RAL and up to four UK Tier-2 sites
Centres = dissemination
LHC developments integrated into the current programme (BaBar, CDF, D0, ...)
Robust management structure
Deliverables in March 2002, 2003, 2004

5 GridPP Component Model
Component 1: Foundation - the key infrastructure at CERN and within the UK
Component 2: Production - built on Foundation to provide an environment for experiments to use
Component 3: Middleware - connecting the Foundation and the Production environments to create a functional Grid
Component 4: Exploitation - the applications necessary to deliver Grid-based particle physics
Component 5: Value-Added - full exploitation of Grid potential for particle physics
[Diagram: the five components stacked from Foundation to Value-Added, with figures of £21.0M and £25.9M marked]

6 Major Deliverables
Prototype I - March 2002: performance and scalability testing of components; testing of the job scheduling and data replication software from the first DataGrid release
Prototype II - March 2003: prototyping of the integrated local computing fabric, with emphasis on scaling, reliability and resilience to errors; performance testing of LHC applications; distributed HEP and other science application models using the second DataGrid release
Prototype III - March 2004: full-scale testing of the LHC computing model with fabric management and Grid management software for Tier-0 and Tier-1 centres, with some Tier-2 components

7 First Year Deliverables
Each workgroup has detailed deliverables. These will be refined each year and build on the successes of the previous year. The global objectives for the first year are:
Deliver EU DataGrid middleware (first prototype [M9])
Running experiments to integrate their data management systems into existing facilities (e.g. mass storage)
Assessment of technological and sociological Grid analysis needs
Experiments refine data models for analyses
Develop tools to allow bulk data transfer (a manifest/checksum sketch follows this list)
Assess and implement metadata definitions
Develop relationships across multi-Tier structures and countries
Integrate Monte Carlo production tools
Provide experimental software installation kits
LHC experiments start Data Challenges
Feedback assessment of middleware tools
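
To make the bulk data transfer item concrete, here is a minimal sketch of one piece such a tool might need: building a manifest of file sizes and checksums at the source so that the destination site can verify the copy. The function names and manifest layout are assumptions made for illustration, not the tools GridPP actually delivered.

```python
# Sketch only: build and check a transfer manifest so bulk copies can be verified.
# Names and the manifest layout are hypothetical, not GridPP deliverables.
import hashlib
from pathlib import Path

def build_manifest(directory):
    """Map each file path (relative to `directory`) to its (size, MD5 digest)."""
    manifest = {}
    for path in sorted(Path(directory).rglob("*")):
        if path.is_file():
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(directory))] = (path.stat().st_size, digest)
    return manifest

def verify_copy(source_manifest, destination):
    """Return the files that are missing or corrupted at the destination site."""
    bad = []
    for rel_path, (size, digest) in source_manifest.items():
        target = Path(destination) / rel_path
        if (not target.is_file()
                or target.stat().st_size != size
                or hashlib.md5(target.read_bytes()).hexdigest() != digest):
            bad.append(rel_path)
    return bad
```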

8 GridPP Organisation
Software development organised around a number of workgroups
Hardware development organised around a number of Regional Centres
Likely Tier-2 Regional Centres
Focus for dissemination and collaboration with other disciplines and industry
Clear mapping onto Core Regional e-Science Centres

9 GridPP Workgroups
Technical work is broken down into several workgroups, with broad overlap with the EU DataGrid:
A - Workload Management: provision of software that schedules application processing requests amongst resources (a toy scheduling sketch follows this list)
B - Information Services and Data Management: provision of software tools to provide flexible, transparent and reliable access to the data
C - Monitoring Services: all aspects of monitoring Grid services
D - Fabric Management and Mass Storage: integration of heterogeneous resources into a common Grid framework
E - Security: security mechanisms from Certification Authorities to low-level components
F - Networking: network fabric provision through to integration of network services into middleware
G - Prototype Grid: implementation of a UK Grid prototype tying together new and existing facilities
H - Software Support: services to enable the development, testing and deployment of middleware and applications at institutes
I - Experimental Objectives: responsible for ensuring that the development of GridPP is driven by the needs of UK PP experiments
J - Dissemination: ensure good dissemination of developments arising from GridPP into other communities, and vice versa
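
To make Workgroup A's remit concrete, the following toy matchmaking loop assigns each job request, with its CPU and disk requirements, to the least-loaded site that can satisfy them. Every name and number is hypothetical; the real workload management built on EU DataGrid and Globus components, not on anything like this sketch.

```python
# Toy workload management: greedy matchmaking of job requests to sites.
# Purely illustrative; sites, jobs and numbers are invented.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cpus: int
    disk_gb: int

@dataclass
class Site:
    name: str
    free_cpus: int
    free_disk_gb: int

def schedule(jobs, sites):
    """Send each job to the matching site with the most free CPUs, if any."""
    placements = {}
    for job in jobs:
        candidates = [s for s in sites
                      if s.free_cpus >= job.cpus and s.free_disk_gb >= job.disk_gb]
        if not candidates:
            placements[job.name] = None          # no site can run this job right now
            continue
        best = max(candidates, key=lambda s: s.free_cpus)
        best.free_cpus -= job.cpus               # reserve the resources
        best.free_disk_gb -= job.disk_gb
        placements[job.name] = best.name
    return placements

if __name__ == "__main__":
    sites = [Site("RAL", 300, 10_000), Site("Liverpool-MAP", 300, 3_000)]
    jobs = [Job("lhcb-sim-001", 50, 200), Job("babar-ana-042", 400, 500)]
    print(schedule(jobs, sites))   # second job is left unplaced: no site has 400 free CPUs
```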

10 GridPP and CERN
UK involvement through GridPP will boost CERN investment in key areas:
– Fabric management software
– Grid security
– Grid data management
– Networking
– Adaptation of physics applications
– Computer Centre fabric (Tier-0)
For the UK to exploit the LHC to the full requires substantial investment at CERN to support LHC computing

11 GridPP Management Structure

12 Management Status
The Project Management Board (PMB): the executive board, chaired by the Project Leader - Project Leader being appointed; shadow board in operation
The Collaboration Board (CB): the governing body of the project, consisting of the group leaders of all institutes - established, and Collaboration Board Chair elected
The Technical Board (TB): the main working forum, chaired by the Technical Board Chair - interim task force in place
The Experiments Board (EB): the forum for experimental input into the project - nominations from experiments underway
The Peer Review Selection Committee (PRSC): pending approval of the project
The Dissemination Board (DB): pending approval of the project

13 GridPP is not just LHC

14 Tier 1 and 2 Plans
RAL already has 300 CPUs, 10 TB of disk and an STK tape silo which can hold 330 TB
Install significant capacity at RAL this year to meet BaBar Tier-A Centre requirements
Integrate with worldwide BaBar work
Integrate with the DataGrid testbed
Integrate Tier 1 and 2 within GridPP
Upgrade Tier-2 centres through SRIF (UK university funding programme)

15 Tier 1 Integrated Resources

16 Liverpool
MAP - 300 CPUs + several TB of disk
– has delivered simulation for LHCb and others for several years
Upgrades of CPUs and storage planned for 2001 and 2002
– currently adding Globus
– will be developed to allow analysis work as well

17 Imperial College
Currently: 180 CPUs, 4 TB disk
2002: adding a new cluster, shared with Computational Engineering - 850 nodes, 20 TB disk, 24 TB tape
Experiments: CMS, BaBar, D0

18 Lancaster (not fully installed)
[Cluster diagram: 196 worker CPUs behind switches; controller nodes, each with a 500 GB bulkserver; 100 Mbit/s Ethernet to the workers and 1000 Mbit/s Ethernet fibre uplinks; tape library capacity ~30 TB at ~£11k per 30 TB]
Finalising installation of the mass storage system, ~2 months

19 Lancaster
Currently D0:
– analysis data from FNAL for the UK
– simulation
Future:
– upgrades planned
– Tier-2 Regional Centre
– ATLAS-specific

20 ScotGrid
Tendering now: 128 CPUs at Glasgow; 5 TB datastore + server at Edinburgh
ATLAS/LHCb
Plans for future upgrades to 2006
Linked with the UK Grid National Centre

21 Network
UK academic network, SuperJANET, entered phase 4 in 2001
2.5 Gbit/s backbone, December 2000 - April 2001
622 Mbit/s to RAL, April 2001
Most MANs have plans for 2.5 Gbit/s on their backbones
Peering with GEANT planned at 2.5 Gbit/s
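
For scale (an illustrative calculation, assuming a single transfer could use the full rate): at a sustained 622 Mbit/s, moving 1 TB takes about 8×10^12 bits / 6.22×10^8 bit/s ≈ 3.6 hours, while at 2.5 Gbit/s the same transfer would take under an hour.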

22 Wider UK Grid
Prof Tony Hey leading the Core Grid Programme
UK National Grid:
– National Centre
– 9 Regional Centres
Computer Science lead includes many sites with PP links
– Grid Support Centre (CLRC)
– Grid Starter Kit version 1, based on Globus, Condor, ...
Common software
e-Science Institute
Grid Network Team
Strong industrial links
All research areas have their own e-science plans

23 Summary
UK has plans for a national grid for particle physics
– to deliver the computing for several virtual organisations (LHC and non-LHC)
Collaboration established, proposal approved, plan in place
Will deliver:
– UK commitment to DataGrid
– prototype Tier 1 and 2
– UK commitment to US experiments
Work closely with other disciplines
Have been working towards this project for ~2 years, building up hardware
Funds installation and operation of experimental testbeds, key infrastructure, generic middleware, and making application code Grid-aware

24 The End

25 UK Strengths
Wish to build on UK strengths:
– Information Services
– Networking
– Security
– Mass Storage
UK major Grid leadership roles:
– Lead of the DataGrid Architecture Task Force (Steve Fisher)
– Lead of DataGrid WP3 Information Services (Robin Middleton)
– Lead of DataGrid WP5 Mass Storage (John Gordon)
– Strong networking role in WP7 (Peter Clarke, Robin Tasker)
– ATLAS Software Coordinator (Norman McCubbin)
– LHCb Grid Coordinator (Frank Harris)
Strong UK collaboration with Globus:
– Globus people gave a 2-day tutorial at RAL to the PP community
– Carl Kesselman attended a UK Grid technical meeting
– 3 UK people visited Globus at Argonne
Natural UK collaboration with US PPDG and GriPhyN

26 BaBar
BaBar is planning a distributed computing model with Tier-A centres
8 x 80-CPU farms; 10 sites with 12 TB of disk and Suns
Simulation
Data mirroring from SLAC - Kanga, Objectivity
Data movement and mirroring across the UK
Data location discovery across the UK - MySQL
Remote job submission - Globus and PBS
Common usernames across the UK - GSI grid-mapfiles
Find data - submit job to data - register output (sketched below)
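
The "find data - submit job to data - register output" chain can be sketched against a small replica catalogue. The slide names MySQL for data location discovery; the standard-library sqlite3 module stands in here so the sketch is self-contained, and the schema, dataset names, sites and paths are invented for illustration.

```python
# Sketch of find data -> submit job to data -> register output against a tiny
# replica catalogue. sqlite3 stands in for the MySQL catalogue; all entries
# below (schema, datasets, sites, paths) are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE replicas (dataset TEXT, site TEXT, path TEXT)")
conn.executemany(
    "INSERT INTO replicas VALUES (?, ?, ?)",
    [("run2-kanga-0001", "RAL",       "/store/babar/run2/0001"),
     ("run2-kanga-0001", "Edinburgh", "/scotgrid/babar/run2/0001"),
     ("run2-kanga-0002", "RAL",       "/store/babar/run2/0002")],
)

def find_data(dataset):
    """Data location discovery: return the (site, path) pairs holding a replica."""
    rows = conn.execute("SELECT site, path FROM replicas WHERE dataset = ?", (dataset,))
    return rows.fetchall()

def register_output(dataset, site, path):
    """Register a newly produced dataset so later jobs can discover it."""
    conn.execute("INSERT INTO replicas VALUES (?, ?, ?)", (dataset, site, path))

# Find the data, pick a site, "submit the job to the data", then register the output.
replicas = find_data("run2-kanga-0001")
site, path = replicas[0]
print(f"submit analysis job to {site}, reading {path}")
register_output("run2-kanga-0001-ntuple", site, path + "/ntuple.root")
```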

27 CDF
Similar model to BaBar, with disk and CPU resources at RAL and universities, plus a farm for simulation
Development of Grid access to CDF databases
Data replication from FNAL to the UK and around the UK
Data location discovery through metadata

28 D0
Large data centre at Lancaster:
– ship data from FNAL to the UK
– simulation in the UK, shipping the data back to FNAL
Gridify SAM access to data
– data at FNAL and Lancaster

