
1 Tony Doyle a.doyle@physics.gla.ac.uk “GridPP – Year 2 to Year 3”, Collaboration Meeting, Bristol, 22 September 2003

2 Tony Doyle - University of Glasgow Outline
– Challenges
– Management
– The Project Map
– GridPP Status: two years on..
– UK Grid Users
– Deployment – LCG and UK perspective
– Current Resources
– EDG 2.0/LCG 1.0 Deployment Status
– Accounting
– Today's Operations
– Future Operations Planning
– Middleware Status
– Middleware Evolution
– GridPP2 Planning Status
– Dissemination
– Summary

3 Tony Doyle - University of Glasgow The Challenges Ahead: Event Selection. [Figure: selecting the Higgs from all interactions requires rejection across 9 orders of magnitude.]

4 Tony Doyle - University of Glasgow The Challenges Ahead: Complexity
Understand/interpret data via numerically intensive simulations: e.g. ATLAS Monte Carlo (gg → H → bb), 182 sec/3.5 MB event on a 1 GHz Linux box.
Many events:
– ~10^9 events/experiment/year
– ~1 MB/event raw data
– several passes required
⇒ Worldwide Grid computing requirement (2007): ~300 TeraIPS (100,000 of today's fastest processors connected via a Grid).
[Diagram: trigger/DAQ chain: detectors (16 million channels, charge/time/pattern, energy/tracks, 40 MHz collision rate) → Level-1 trigger (100 kHz, 1 MB event data) → 500 readout memories and event builder (1 Terabit/s over ~50,000 data channels, 200 GB buffers, 3 Gigacell buffers) → event filter (~20 TeraIPS) → Gb/s service LAN → PetaByte archive → Grid Computing Service (~300 TeraIPS).]
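The arithmetic behind that requirement is worth making explicit. A minimal sketch (Python; the 182 s/event simulation time and ~10^9 events/year come from the slide, while the year length and the passes/experiments multipliers are assumptions for illustration):

```python
# Back-of-envelope check of the slide's figures (a sketch: the 182 s/event
# simulation time and ~10^9 events/year are from the slide; the year length
# and the passes/experiments multipliers are assumptions for illustration).
SIM_TIME_S = 182           # seconds per simulated event on a 1 GHz box
EVENTS_PER_YEAR = 1e9      # events per experiment per year
SECONDS_PER_YEAR = 3.15e7  # ~one calendar year

cpu_seconds = SIM_TIME_S * EVENTS_PER_YEAR       # ~1.8e11 CPU-seconds
cpus_needed = cpu_seconds / SECONDS_PER_YEAR     # ~5,800 1 GHz CPUs, one pass
print(f"~{cpus_needed:,.0f} 1 GHz CPUs per experiment for a single pass")

# Several reconstruction passes and four LHC experiments push this towards
# the ~100,000-processor (~300 TeraIPS) figure quoted on the slide
# (assuming ~4 passes here, where the slide says only "several").
print(f"~{cpus_needed * 4 * 4:,.0f} CPUs with ~4 passes and 4 experiments")
```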

5 Tony Doyle - University of Glasgow The Challenges Ahead: Experiment Requirements. [Charts: computing requirements per experiment, UK-only versus total requirement.]

6 Tony Doyle - University of Glasgow GridPP in Context. [Diagram, not to scale: GridPP within the Core e-Science Programme, linking the Institutes, CERN (LCG), the Tier-1/A, Tier-2s, Middleware, Experiments (applications development and integration), the Grid Support Centre and EGEE.]

7 Tony Doyle - University of Glasgow GridPP Management
– CB (20 members) meets half-yearly to provide the Institutes' overview
– PMB (12 members) meets weekly [via VC] to provide management of the project
– TB (10 members) meets as required in response to technical needs, and regularly via phone
– EB (14 members) meets quarterly to provide the experiments' input

8 Tony Doyle - University of Glasgow GridPP Project Overview

9 Tony Doyle - University of Glasgow GridPP Status: The Project Map

10 Tony Doyle - University of Glasgow Financial Breakdown
Five components:
– Tier-1/A = Hardware + 10 CLRC e-Science Staff
– DataGrid = 25 DataGrid Posts inc. CLRC PPD Staff
– Applications = 17 Experiments Posts (to interface middleware)
– Operations = Travel (~100 people) + Management + Early Investment
– CERN = 25 LCG posts + Tier-0 + LTA

11 Tony Doyle - University of Glasgow Quarterly Reporting. Quarterly reporting allows comparison of delivered effort with expected effort, giving a feedback loop as issues arise.

12 Tony Doyle - University of Glasgow Funded Effort Breakdown (Snapshot 2003Q3). LCG effort is the largest single area of GridPP. Future project priorities are focussed on LCG and EGEE.

13 Tony Doyle - University of Glasgow GridPP Status: Summary
GridPP1 has now completed 2 of its 3 years. All metrics are currently satisfied.
103 of 182 tasks are complete; 70 tasks are neither complete nor overdue; 9 tasks are overdue:
– 6 are associated with LCG (2 of these are trivial, being definitions of future milestones; 4 are related to the delay in LCG-1)
– 2 are associated with applications (CMS and D0)
– 1 is associated with the UK infrastructure (a test of a heterogeneous testbed)

14 Tony Doyle - University of Glasgow Risk Register (Status April 03)
– Scaling up to a production system (LCG-1 deployment)
– System management effort at UK Tier-2 sites (being addressed as part of GridPP2)

15 Tony Doyle - University of Glasgow UK Certificates and VO Membership
1. UK e-Science CA now used in the production EDG testbed
2. PP "users" engaged from many institutes
3. UK participating in 6 of 9 EDG VOs

16 Tony Doyle - University of Glasgow LHC Computing Grid Service
– Certification and distribution process established
– Middleware package: components from the European DataGrid (EDG 2.0) and the US (Globus, Condor, PPDG, GriPhyN ⇒ Virtual Data Toolkit, VDT 1.1)
– Agreement reached on principles for registration and security
– RAL to provide the initial grid operations centre; FZK to operate the call centre
– Initial service being deployed now to 10 centres in the US, Europe and Asia: Academia Sinica Taipei, BNL, CERN, CNAF, FNAL, FZK, IN2P3 Lyon, Moscow State Univ., RAL, Univ. Tokyo
– Expand to other centres as soon as the service is stable

17 Tony Doyle - University of Glasgow UK Deployment Overview
Significant resources within EDG, currently being upgraded to EDG2. Integrating EDG on farms has been repeated many times, but it is difficult. Sites are keen to take part in EDG2 now, with LCG1 deployment to follow; by the end of the year many HEP farms plan to be contributing resources to LCG1. This forms the basis of the deployment input to the LCG plan: input from the Tier-1 (~50%) initially and four distributed Tier-2s (~50%) on a ~Q1 2004 timescale.
LCG resources committed for 1Q04:
Site            CPU (kSI2K)  Disk (TB)  Support (FTE)  Tape (TB)
CERN                    700        160           10.0       1000
Czech Republic           60          5            2.5          5
France                  420         81           10.2        540
Germany                 207         40            9.0         62
Holland                 124          3            4.0         12
Italy                   507         60           16.0        100
Japan                   220         45            5.0        100
Poland                   86          9            5.0         28
Russia                  120         30           10.0         40
Taiwan                  220         30            4.0        120
Spain                   150         30            4.0        100
Sweden                  179         40            2.0         40
Switzerland              26          5            2.0         40
UK                     1656        226           17.3        295
USA                     801        176           15.5       1741
Total                  5600       1169          120.0       4223
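As a quick cross-check of the UK's weight in those commitments, the shares fall straight out of the table (a sketch; values copied directly from the table above):

```python
# Shares of the 1Q04 LCG commitments, computed from the table above
# (values copied directly from the table).
uk    = {"CPU (kSI2K)": 1656, "Disk (TB)": 226, "Tape (TB)": 295}
total = {"CPU (kSI2K)": 5600, "Disk (TB)": 1169, "Tape (TB)": 4223}
for resource in uk:
    print(f"UK share of {resource}: {uk[resource] / total[resource]:.0%}")
# -> CPU ~30%, disk ~19%, tape ~7%: the largest single national CPU
#    commitment in the table (USA is next at ~14%).
```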

18 Tony Doyle - University of Glasgow EDG 2.0 Deployment Status 12/9/03
– RAL (Tier1A): up and running with 2.0.1. UI gppui04 available (as part of CSF); offer to give access to an LCFGng node to help people compare with their own LCFGng setup.
– IC: existing WP3 testbed site is at 2.0.0. Standard 2.0 RB available.
– UCL: trying to go to 2.0; SE up so far.
– QMUL: 2.0 installation ongoing.
– RAL (PPD): 2.0.0 site up and running.
– Oxford: waiting until October for 2.0.
– Birmingham: working on getting a 2.0 site up next week.
– Bristol: WP3 testbed site at 2.0.0. Also doing a new 2.0 site install: UI and MON up, still doing CE, SE and WN. Cambridge to follow.
– Manchester: trying to get 2.0.1 set up.
– Glasgow: concentrating on commissioning new hardware during the next month, and will wait until then before going to 2.0. Edinburgh to follow.

19 Tony Doyle - University of Glasgow Tier-1 @ RAL
[Diagram: Tier-1/A (CE, 230×WN, LCG 1.0/EDG 2.0); LCG Testbed (CE, SE, 5×WN, LCG 1.0/EDG 2.0); WP3 Testbed (CE, SE, MON, EDG 2.0); EDG Dev Testbed (CE, SE, MON, 1×WN, EDG 2.0); LCG0 Testbed (CE, SE, 1×WN); ADS SE.]
UI within CSF. NM for EDG2. Top-level MDS for EDG. Various WP3 and WP5 dev nodes. VOMS for the dev TB. http://ganglia.gridpp.rl.ac.uk/

20 Tony Doyle - University of Glasgow London Grid: Imperial College
[Diagram: EDG Testbed (CE, SE, WNs, EDG 2.0); BaBar Farm (CE, WNs, EDG 2.0); CMS-LCG0 (CE, SE, WN); WP3 Testbed (CE, SE, MON, 1×WN, EDG 2.0).]
RB for EDG 2.0. Plan to be in LCG1 and other testbeds.

21 Tony Doyle - University of Glasgow London Grid: Queen Mary and UCL
Queen Mary: [Diagram: EDG Testbed (CE, SE, 1×WN, EDG 1.4) plus 32×WN.] The CE also feeds EDG jobs to a 32-node e-Science farm. Plan to have LCG1/EDG2 running by the end of the year. Expansion with SRIF grants (64 WN + 2 TB in Jan 2004, 100 WN + 8 TB in Dec 2004). http://194.36.10.1/ganglia-webfrontend
UCL: [Diagram: EDG Testbed (CE, SE, 1×WN, EDG 1.4).] Network monitors for WP7 development. SRIF bid in place for ~200 CPUs by the end of the year to join LCG1.

22 Tony Doyle - University of Glasgow Southern Grid: Bristol
[Diagram: EDG Testbed (CE, SE, 1×WN, EDG 2.0); WP3 Testbed (CE, SE, MON, 1×WN, EDG 2.0); CMS-LCG0 on the CMS/LHCb farm (CE, SE, 24×WN); BaBar Farm (CE, SE, 78×WN, EDG 1.4).]
GridPP RC. Plan to join LCG1.

23 Tony Doyle - University of Glasgow Southern Grid: Cambridge and Oxford
Cambridge: [Diagram: EDG Testbed (CE, SE, 15×WN, EDG 1.4).] Farm shared with local NA48 and GANGA users. Some RH73 WNs for the ongoing ATLAS challenge. 3 TB GridFTP SE. Plan to join LCG1/EDG2 later in the year with an extra 50 CPUs; EDG jobs will be fed into the local e-Science farm. http://farm002.hep.phy.cam.ac.uk/cavendish/
Oxford: [Diagram: EDG Testbed (CE, SE, 2×WN, EDG 1.4).] Plan to join EDG2/LCG1. Nagios monitoring has been set up (RAL is also evaluating Nagios). Planning to send EDG jobs into the 10-WN CDF farm. A 128-node cluster is being ordered now.

24 Tony Doyle - University of Glasgow Southern Grid: RAL PPD and Birmingham
RAL PPD: [Diagram: EDG Testbed (CE, SE, 9×WN, EDG 2.0); WP3 Testbed (CE, SE, MON, 1×WN, EDG 2.0); User Interface.] Part of the Southern Tier-2 Centre within LCG1. 50 CPUs and 5 TB of disk expected by the end of the year.
Birmingham: [Diagram: EDG Testbed (CE, SE, 1×WN, EDG 1.4).] Expansion to 60 CPUs and 4 TB. Expect to participate within LCG1/EDG2.

25 Tony Doyle - University of Glasgow NorthGrid: Manchester and Liverpool
Manchester: [Diagram: BaBar Farm (CE, SE 1.5 TB, 80×WN, EDG 1.4); DZero Farm (CE, SE 5 TB, 60×WN, EDG 1.4); EDG Testbed (CE, SE, 9×WN, EDG 1.4); GridPP and BaBar VO servers; User Interface.] Plan that the DZero farm will join LCG. SRIF bid in place for significant HEP resources.
Liverpool: [Diagram: EDG Testbed (CE, SE, 1×WN, EDG 1.4).] Plan to follow EDG 2, possibly integrating the newly installed Dell farm (funded by the NW Development Agency) and the BaBar farm: the largest single Tier-2 resource.

26 Tony Doyle - University of Glasgow ScotGrid: Glasgow, Edinburgh and Durham
Glasgow: [Diagram: ScotGRID (CE, SE, 59×WN, EDG 1.4).] WNs on a private network with outbound NAT in place. Various WP2 development boxes. 34 dual blade servers just arrived; a 5 TB FastT500 expected soon. Shared resources (CDF, LHC and Bioinformatics).
Edinburgh: [Diagram: WP3 Testbed (CE, SE, MON, EDG 2.0).] A 24 TB FastT700 and an 8-way server just arrived.
Durham: existing farm available. Plan to be part of LCG.

27 Tony Doyle - University of Glasgow Testbed Status Summer 2003 UK-wide development using EU-DataGrid tools (v1.47). Deployed during Sept 02-03. Currently being upgraded to v2.0. See http://www.gridpp.ac.uk/map/

28 Tony Doyle - University of Glasgow Meeting Current LHC Requirements: Experiment Accounting. Experiment-driven project; priorities determined by the Experiments Board.

29 Tony Doyle - University of Glasgow Tier-1/A Accounting
Monthly accounting: online Ganglia-based monitoring, see http://www.gridpp.ac.uk/tier1a/. Last month: CMS and BaBar jobs. Annual accounting: ATLAS, CMS and LHCb jobs; generally dominated by BaBar since January. [Charts: monthly and annual CPU usage by experiment — ATLAS, CMS, LHCb, BaBar.]

30 Tony Doyle - University of Glasgow Today's Operations
1. Support Team: built from sysadmins. 4 are funded by GridPP to work on EDG WP6; the rest are site sysadmins.
2. Methods: email list, phone meetings, personal visits, job-submission monitoring; RB, VO and RC for UK use, to support non-EDG use.
3. Rollout: experience from RAL in the EDG dev testbeds, and from IC and Bristol in the CMS testbeds. 10 sites have been part of the EDG application testbed at one time.

31 Tony Doyle - University of Glasgow GridPP2 Operations
To move from testbed to production, GridPP plans a bigger team with a full-time Operations Manager. Manpower will come from the Tier-1 and Tier-2 Centres, who will contribute to the Production Team. The team will run a UK Grid which will belong to various grids (EDG, LCG, ..) and also support other experiments.
[Diagram: EDG-style release flow. WPs add unit-tested code to the repository; the build system runs nightly builds and automatic tests; the integration team runs overall release tests on the development testbed (~15 CPU). Tagged release candidates are selected for grid certification on the certification testbed (~40 CPU), where the test group and application representatives fix problems and certify. Certified releases are selected for deployment to the production testbed (~1000 CPU) as public releases for use by applications, 24×7.]

32 Tony Doyle - University of Glasgow LCG Operations
RAL has led the project to develop an Operations Centre for LCG1:
– applied GridPP and MapCenter monitoring to LCG1
– dashboard combining several types of monitoring
– set up a web site with contact information
– developing a Security Plan
– accounting (the current priority, building upon resource centre and experiment accounting)
RAL is also leading the LCG Security Group:
– written 4 documents setting out procedures and User Rules
– working with the GOC task force on Security Policy
– risk analysis and further planning for LCG in 2004

33 Tony Doyle - University of Glasgow

34

35 EGEE Technical Annex: nearing completion

36 Tony Doyle - University of Glasgow EGEE
[Diagram: Tier-1 (16.5 FTE), UK Team (8 FTE), UK GSC (2 FTE + 2 FTE), EGEE ROC (5 FTE), EGEE CIC (4.5 FTE).]
The UK Production Team will be expanded as part of the EGEE ROC and CIC posts to meet EGEE requirements: to deliver an EGEE grid infrastructure that must also deliver to other communities and projects. We could do this just within PP (matching funding is available) but also want to engage fully with the UK Core programme. Ongoing discussions…

37 Tony Doyle - University of Glasgow Possibility for the future, on a ~1 year timescale (10/7/03, PC)
[Diagram: an L2 Grid application over GT2 and a PP Grid application over GT2 converging on EGEE-0 (GT2).]
If the UK is backing EGEE (which it is), then it probably makes sense to embrace EGEE-0, which will be based upon GT2. This way we would influence development and reap the benefit of leverage.

38 Tony Doyle - University of Glasgow "The Italian Job": management structure for the production Grid (~36 people)
[Diagram: Experiments and Grid Projects feed a Coordination Committee spanning management, operations and planning/deployment. Areas shown include resource policy and usage, management tools, a central team, monitoring, authorisation, site management, Grid service support, VO support, user support, application and Grid resource coordination, experiment/research-organisation support, release and configuration management, release distribution, documentation and porting, and EGEE/LCG coordination, with team sizes of 2–8 people per area.]

39 Tony Doyle - University of Glasgow Tier-1/A Services [FTE]
High quality data services; national and international role; UK focus for international Grid development. Highest single priority within GridPP2.
Current planning (FTE):
CPU             2.0
Disk            1.5
AFS             0.0
Tape            2.5
Core Services   2.0
Operations      2.5
Networking      0.5
Security        0.0
Deployment      2.0
Experiments     2.0
Management      1.5
Total          16.5

40 Tony Doyle - University of Glasgow Tier-2 Services [FTE]
Four Regional Tier-2 Centres:
– London: Brunel, Imperial College, QMUL, RHUL, UCL.
– SouthGrid: Birmingham, Bristol, Cambridge, Oxford, RAL PPD.
– NorthGrid: CLRC Daresbury, Lancaster, Liverpool, Manchester, Sheffield.
– ScotGrid: Durham, Edinburgh, Glasgow.
Hardware provided by the Institutes; GridPP provides added manpower.
Current planning (FTE):
                     Y1     Y2     Y3
Hardware Support    4.0    8.0    8.0
Core Services       4.0    4.0    4.0
User Support        1.0    2.0    2.0
Specialist Services:
  Security          1.0    1.0    1.0
  Resource Broker   1.0    1.0    1.0
  Network           0.5    0.5    0.5
  Data Management   2.0    2.0    2.0
  VO Management     0.5    0.5    0.5
Subtotal           14.0   19.0   19.0
Existing Staff     -4.0   -4.0   -4.0
GridPP2            10.0   15.0   15.0
Total SY: 40.0

41 Tony Doyle - University of Glasgow Operational Roles
Core Infrastructure Services (CIC), still to be defined fully in EGEE:
– Grid information services
– monitoring services
– resource brokering
– allocation and scheduling services
– replica data catalogues
– authorisation services
– accounting services
Core Operational Tasks (ROC):
– monitor infrastructure, components and services
– troubleshooting
– verification of new sites joining the Grid
– acceptance tests of new middleware releases
– verify suppliers are meeting SLAs
– performance tuning and optimisation
– publishing usage figures and accounts

42 Tony Doyle - University of Glasgow LCG Level 1 Milestones proposed to the LHCC
– M1.1 (June 03): First Global Grid Service (LCG-1) available; this milestone and M1.3 defined in detail by end 2002
– M1.2 (June 03): Hybrid Event Store (Persistency Framework) available for general users
– M1.3a (November 03): LCG-1 reliability and performance targets achieved
– M1.3b (November 03): Distributed batch production using grid services
– M1.4 (May 04): Distributed end-user interactive analysis; detailed definition of this milestone by November 03
– M1.5 (December 04): "50% prototype" (LCG-3) available; detailed definition of this milestone by June 04
– M1.6 (March 05): Full Persistency Framework
– M1.7 (June 05): LHC Global Grid TDR

43 Tony Doyle - University of Glasgow RB Stress Tests (9/9/03): Benchmark for LCG Deployment
Status (LCG report by Ian Bird):
– The RB never crashed; it ran without problems at a load of 8.0 for several days in a row with 20 streams of 100 jobs each (the typical ~2% error rate still present).
– RB stress test in a job storm of 50 streams of 20 jobs each:
  – 50% of the streams ran out of connections between UI and RB (a configuration parameter, but subject to machine constraints);
  – the remaining 50% of the streams finished normally (~2% error rate);
  – the time between job-submit and return of the command (acceptance by the RB) is 3.5 seconds, independent of the number of streams.
NOTE: the RB interrogates all suitable CEs, a wide-area delay-killer.
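Putting those figures together gives a rough throughput model (a sketch: the 3.5 s acceptance latency, the storm geometry and the ~2% error rate come from the slide; the assumption that jobs within a stream are submitted sequentially is mine):

```python
# Rough model of the job-storm figures above (a sketch: the 3.5 s acceptance
# latency, storm geometry and ~2% error rate are from the slide; sequential
# submission within a stream is an assumption).
SUBMIT_LATENCY_S = 3.5   # UI -> RB acceptance time per job
ERROR_RATE = 0.02        # typical job error rate

streams, jobs_per_stream = 50, 20
total_jobs = streams * jobs_per_stream              # 1000 jobs
wall_time_s = jobs_per_stream * SUBMIT_LATENCY_S    # ~70 s to submit a stream
expected_failures = total_jobs * ERROR_RATE         # ~20 jobs at baseline
print(f"{total_jobs} jobs: ~{wall_time_s:.0f} s of submission per stream, "
      f"~{expected_failures:.0f} failures expected before the UI-RB "
      f"connection limit is even considered")
```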

44 Tony Doyle - University of Glasgow Next m/w Steps (9/9/03)
Status (LCG report by Ian Bird). Next LCG-1 upgrades:
– the same software recompiled with gcc 3.2
– a new VDT? based on Globus 2.4? LCG will work on this issue
– add VOMS
– add RLI? (see the discussion of RLS)
– R-GMA now seems to be off the table *Corrective Action 16/9/03*
– working group to resolve data access issues: the components exist (SRM, GFAL, RLS, gridFTP); they need to be made coherent based on use-cases and integrated with MSSs
For further features we would like to apply the simple rule: add it if and only if
– it is proven to work (by EDG and LCG together)
– it adds some desirable feature, or a feature requested by users
– it makes the user's application setup significantly simpler or more practical
– it is required by new user applications
Bug fixing will always have the highest priority. Target release date for the 2004 system: November.

45 Tony Doyle - University of Glasgow R-GMA: Status (16/9/03)
– Running on the WP3, EDG-development and EDG-application testbeds
– Application deployment: 29 CEs, 11 SEs, 10 sites in 6 countries; R-GMA browser access in < 1 sec
– Monitoring scripts being run on the testbeds, with results linked from the WP3 web page: http://hepunx.rl.ac.uk/edg/wp3/
– Registry replication is being tested on the WP3 testbed (better performance and higher reliability required)
– Authentication successfully tested on the WP3 testbed
– Two known bugs remain:
  – excessive threads requiring GOUT machine restart: new code has been developed with extensive unit tests and is now being tested on the WP3 testbed; it will support at least 90 sites
  – the latest Producer-choosing algorithm fails to reject bad LPs, which shows up as intermittent absence of information; a revised algorithm needs coding (a localised change)
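For readers unfamiliar with R-GMA, the producer/registry/consumer pattern behind these status items can be illustrated with a small in-memory toy (my own sketch, not the real R-GMA API; the table name and values are hypothetical):

```python
# In-memory toy of the R-GMA producer/registry/consumer model (my own
# illustration, not the real R-GMA API): producers register the table they
# publish into; consumers ask the registry which producers hold a table
# and gather matching tuples from each of them.
from collections import defaultdict

class Registry:
    def __init__(self):
        self._producers = defaultdict(list)   # table name -> producers

    def register(self, table, producer):
        self._producers[table].append(producer)

    def producers_for(self, table):
        return self._producers[table]

class Producer:
    def __init__(self, registry, table):
        self.rows = []
        registry.register(table, self)

    def insert(self, row):
        self.rows.append(row)

def consume(registry, table, predicate=lambda row: True):
    """A consumer 'query': collect matching rows from every producer."""
    return [row for p in registry.producers_for(table)
                for row in p.rows if predicate(row)]

registry = Registry()
ce_monitor = Producer(registry, "GlueCE")          # hypothetical table name
ce_monitor.insert({"site": "RAL", "free_cpus": 42})
print(consume(registry, "GlueCE", lambda r: r["free_cpus"] > 0))
```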

46 Tony Doyle - University of Glasgow R-GMA: Users
Users and interfaces to other systems:
– Resource Broker
– CMS (BOSS)
– Service and service status for all EDG services
– Network monitoring & network cost function
– MapCenter
– Logging & Bookkeeping
– UK e-Science, CrossGrid and BaBar (evaluating)
– Replica Manager
– MDS (GIN/GOUT)
– Nagios
– Ganglia ("Ranglia")
Future: RB direct use of R-GMA (no GOUT), for better performance and reliability.

47 Tony Doyle - University of Glasgow SRB for CMS
UK e-Science has been interested in SRB for several years; CCLRC has gained expertise on other projects and is collaborating with SDSC. Now hosting the MCAT for worldwide CMS pre-DC04, interfaced to the RAL Datastore:
– service started 1 July 2003
– 183,000 files registered; 10 TB of data stored in the system
– used across 13 sites worldwide, including CERN and Fermilab
– 30 storage resources managed across the sites
[Diagram: an SRB client talking to SRB A and SRB B servers, mediated by the MCAT server and its database.]
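The value of the MCAT is the logical-to-physical indirection it provides. A toy illustration of the idea (my own sketch, not the SRB client API; the server names and paths are invented):

```python
# Toy of the indirection the MCAT provides (my own illustration, not the
# SRB client API): files are addressed by logical name, and the catalogue
# maps each logical name to the SRB server and physical path holding it,
# so clients never need to know where the data actually lives.
MCAT = {   # logical name -> (SRB server, physical path); entries invented
    "/cms/preDC04/run001.root": ("srb-a.example.org", "/vault/cms/run001.root"),
    "/cms/preDC04/run002.root": ("srb-b.example.org", "/tape/cms/run002.root"),
}

def resolve(logical_name):
    """MCAT-style lookup: logical name -> (server, physical path)."""
    return MCAT[logical_name]

server, path = resolve("/cms/preDC04/run001.root")
print(f"fetch {path} from {server}")
```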

48 Tony Doyle - University of Glasgow EDG StorageElement
– Not initially adopted by LCG1
– Since then, limited SRM functionality has been added to support GFAL (available for test by LCG)
– Full SRMv1 functionality has been developed and is currently being integrated on an internal testbed
– GACLs being integrated

49 Tony Doyle - University of Glasgow [Network diagram: 622 Mb/s link; UK internal 2×2.5 Gb/s.]

50 Tony Doyle - University of Glasgow OptorSim: File Replication Simulation
Test P2P file replication strategies, e.g. economic models:
1. Optimisation principles applied to the GridPP 2004 testbed with realistic PP use patterns/policies.
2. Job scheduling: queue access cost takes into account queue length and network connectivity; anticipate the replicas needed at "close" sites using three replication algorithms.
3. Build in realistic JANET background traffic.
4. Replication algorithms optimise CPU use/job time as replicas are built up on the Grid.
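The queue-access-cost idea in point 2 can be sketched concretely (a toy cost function of my own, not OptorSim's actual code; all numbers are illustrative):

```python
# Toy version of the queue-access-cost scheduling idea described above
# (my own cost function, not OptorSim's code): schedule each job at the
# site minimising expected queue wait plus the time to fetch files that
# are not yet replicated there.
def access_cost(site, job_files):
    queue_cost = site["queue_length"] * site["avg_job_time_s"]
    transfer_cost = sum(size_mb / site["bandwidth_mb_s"]
                        for name, size_mb in job_files.items()
                        if name not in site["replicas"])
    return queue_cost + transfer_cost

sites = [   # illustrative numbers only
    {"name": "busy site, data local", "queue_length": 10,
     "avg_job_time_s": 600, "bandwidth_mb_s": 50, "replicas": {"data.root"}},
    {"name": "idle site, data remote", "queue_length": 2,
     "avg_job_time_s": 600, "bandwidth_mb_s": 10, "replicas": set()},
]
job_files = {"data.root": 3500}   # file sizes in MB
best = min(sites, key=lambda s: access_cost(s, job_files))
print("schedule at:", best["name"])   # replication then fills in hot files
```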

51 Tony Doyle - University of Glasgow Middleware, Security and Network Service Evolution
Information Services [5+5 FTE] and Networking [3.0+1.5+1.5 FTE] have strategic roles within EGEE. Security expands to meet requirements. Data and Workload Management continue. No further configuration management.
Development programme defined by:
– mission criticality (experiment requirements driven)
– international/UK-wide lead
– leverage of EGEE, UK core and LCG developments
Redefine the (EDG-based) UK Work Packages into five Development Groups. Current planning (FTE):
Security          3.5
Info-Mon.         4.0
Data & Storage    4.0
Workload          1.5
Networking        3.0
TOTAL            16.0
[Diagram: overlap of Security, Middleware and Networking.]

52 Tony Doyle - University of Glasgow Application Interfaces: Service Evolution
Applications:
– 18 FTEs: the ongoing programme of work can continue
– Difficult to involve experiment activity not already engaged within GridPP; the project would need to build on cross-experiment collaboration
– GridPP1 already has experience: GANGA (ATLAS & LHCb), SAM (CDF & D0), Persistency (CMS & BaBar)
– Encourage new joint developments across experiments

53 Tony Doyle - University of Glasgow Current planning is based upon the £19.6m funding scenario.
PPARC review timeline:
– Projects Peer Review Panel (14–15/7/03)
– Grid Steering Committee (28–29/7/03)
– Science Committee (October 03)

54 Tony Doyle - University of Glasgow Dissemination: e-Science and Web
e-Science 'All Hands' Meeting held at Nottingham, 2–4 September 2003:
– ~500 people in total
– ~20 GridPP people attended
– ~17 GridPP abstracts accepted
– ~10 GridPP papers published, plus posters displayed
– 6 GridPP invited talks
– 3 GridPP demonstrations
Next steps: SC2003 (DC et al.) and a dissemination officer…
GridPP web page requests: currently ~35,000 per month.

55 Tony Doyle - University of Glasgow LCG Press Release…
LHC Computing Grid goes Online. CERN, Geneva, 29 September 2003. The world's particle physics community today announces the launch of the first phase of the LHC Computing Grid (LCG-1). Designed to handle the unprecedented quantities of data that will be produced at CERN's Large Hadron Collider (LHC) from 2007 onwards, the LCG will provide a vital test-bed for new Grid computing technologies. These are set to revolutionise the way we use the world's computing resources in areas ranging from fundamental research to medical diagnosis. "The Grid enables us to harness the power of scientific computing centres wherever they may be to provide the most powerful computing resource the world has to offer," said Les Robertson.

56 Tony Doyle - University of Glasgow Conclusions
– Visible progress this year in GridPP1; management via the Project Map and Project Plan; high-level tasks and metrics under control
– The major component is LCG: we contribute significantly to LCG, and our success depends critically on LCG; middleware components are on the critical path w.r.t. LCG adoption
– Deployment: high- and low-level perspectives merge via monitoring/accounting; resource centre and experiment accounting are both important
– Today's operations in the UK are built around a small team; future operations planning expands this team, with a Production Manager being appointed
– Middleware deployment focus on Information Service performance; security (deployment and policy) is emphasised
– A "Production Grid" will be difficult to realise: we need to start GridPP2 planning now (already underway)
– GridPP2 proposal: formal feedback in November
– Transition period for: Middleware/Security/Networking Groups; Experiments Phase II; Production Grid planning

57 Tony Doyle - University of Glasgow GridPP8 - Welcome
Monday September 22nd
10:30-11:00 Arrive - Coffee
Opening Session (Chair: Nick Brook)
11:00-11:30 Welcome and Introduction - Steve Lloyd
11:30-12:00 GridPP2 Next Stages - Tony Doyle
12:00-12:30 LCG Overview - Tony Cass
12:30-13:30 Lunch
Application Developments I: Grid User Interfaces (Chair: Roger Barlow)
13:30-14:00 The GANGA Interface for ATLAS/LHCb - Alexander Soroko
14:00-14:30 GANGA for BaBar - Janusz Martyniak
14:30-15:00 UKQCD Interfaces - James Perry
15:00-15:30 CMS Analysis Framework - Hugh Talini
15:30-16:00 Coffee
Application Developments II (Chair: Roger Jones)
16:00-16:25 Control/Monitoring for LHCb Production - Gennady Kuznetsov
16:25-16:50 BaBar Grid Status - Alessandra Forti
16:50-17:10 JIM and the Gridification of SAM - Morag Burgon-Lyon
17:10-17:30 UKQCD Progress - Lorna Smith
17:30-17:50 CMS Status - Peter Hobson
17:50-18:10 D0 Status - Rod Walker
18:10-18:30 ATLAS Status - Alvin Tan
18:30-18:35 Logistics - Dave Newbold
19:00 Collaboration Dinner at Goldney Hall
Tuesday September 23rd
Bristol e-Science, GSC Operation and Tier Centre Reports (Chair: David Britton)
09:00-09:15 Bristol e-Science - Tim Phillips, Deputy Director of Information Services
09:15-09:30 HP Grid Strategy - Paul Vickers
09:30-10:00 Grid Support and Operations Centre - John Gordon
10:00-10:30 Tier-1 Report - Andrew Sansum
10:30-10:45 London Tier-2 Report - Dave Colling
10:45-11:00 Southern Tier-2 Report - Jeff Tseng
11:00-11:30 Coffee
Tier-2 Centre Reports and Planning (Chair: John Gordon)
11:30-11:45 Northern Tier-2 Report - Andrew McNab
11:45-12:00 ScotGrid Tier-2 Report - Akram Khan
12:00-12:30 Tier-2 Centre Development Plans - Steve Lloyd
12:30-13:30 Lunch
Middleware Development (Chair: Steve Lloyd)
13:30-14:00 UK e-Science BOF Session Summary - Robin Middleton
14:00-14:20 Workload Management - Dave Colling
14:20-14:40 Data Management - Paul Millar
14:40-15:00 Information & Monitoring - Steve Fisher
15:00-15:20 Security - Dave Kelsey / Andrew McNab
15:20-15:40 Networking - Peter Clarke
15:40-16:00 Fabric Management - Lex Holt

