Slide 1: LCG Progress on Policies & Coming Challenges
Ian Bird, IT Division, CERN
LCG and EGEE
Workshop on eInfrastructures, Rome, 9 December 2003

Slide 2: The Large Hadron Collider Project
- 4 detectors: ATLAS, CMS, LHCb, ALICE
- Requirements for world-wide data analysis:
  - Storage: raw recording rate 0.1–1 GByte/sec; accumulating at 5–8 PetaBytes/year; 10 PetaBytes of disk (see the arithmetic check below)
  - Processing: ~100,000 of today's fastest PCs
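
A quick sanity check of these numbers, as a Python sketch. The figure of ~1e7 effective seconds of data-taking per year is an assumption (a common accelerator rule of thumb), not something stated on the slide:

```python
# Back-of-the-envelope check of the storage figures on this slide.
# Assumption (not from the slide): ~1e7 seconds of effective
# data-taking per year.
SECONDS_PER_YEAR_EFFECTIVE = 1e7
PETABYTE_IN_GBYTES = 1e6

for rate_gb_s in (0.1, 0.5, 1.0):
    pb_per_year = rate_gb_s * SECONDS_PER_YEAR_EFFECTIVE / PETABYTE_IN_GBYTES
    print(f"{rate_gb_s:4.1f} GB/s sustained -> {pb_per_year:4.1f} PB/year")

# 0.1 GB/s -> 1 PB/year; 1.0 GB/s -> 10 PB/year. The quoted
# 5-8 PB/year therefore corresponds to an effective sustained
# recording rate of roughly 0.5-0.8 GB/s.
```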

Slide 3: LHC Computing Hierarchy
Emerging vision: a richly structured, global dynamic system.
[Tier diagram; recoverable details below, with a schematic sketch after the list:]
- Experiment / online system feeds the CERN Tier 0+1 centre (~PByte/sec at the detector; ~100–1500 MBytes/sec recorded); CERN holds PBs of disk and a tape robot
- Tier 1 centres (FNAL, IN2P3, INFN, RAL), connected at 2.5–10 Gbps
- Tier 2 centres, connected at ~2.5–10 Gbps
- Tier 3/Tier 4: institute workstations and physics data caches, connected at 0.1 to 10 Gbps
- CERN/outside resource ratio ~1:2; Tier0 : ΣTier1 : ΣTier2 ~ 1:1:1
- Tens of Petabytes by 2007-8; an Exabyte ~5-7 years later
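
To make the recovered hierarchy concrete, here is the same structure as a small Python data structure. Centre names and link speeds come from the slide; which Tier 2 centres attach where is not recoverable from the transcript, so the nesting is an assumption:

```python
# Schematic of the recovered tier hierarchy as plain data. The
# attachment of Tier 2 centres to Tier 1 centres is illustrative.
hierarchy = {
    "name": "CERN Tier 0+1",
    "link": "~100-1500 MBytes/sec from the online system",
    "children": [
        {"name": tier1, "link": "2.5-10 Gbps", "children": [
            {"name": f"{tier1} Tier 2 centres", "link": "~2.5-10 Gbps",
             "children": []},
        ]}
        for tier1 in ("FNAL", "IN2P3", "INFN", "RAL")
    ],
}

def print_tree(node: dict, depth: int = 0) -> None:
    """Walk the hierarchy and print each centre with its uplink."""
    print("  " * depth + f"{node['name']} ({node['link']})")
    for child in node["children"]:
        print_tree(child, depth + 1)

print_tree(hierarchy)
```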

Slide 4: Introduction – the LCG Project
- The LHC Computing Grid (LCG) is a grid deployment project: a prototype computing environment for the LHC
- Focus on building a production-quality service:
  - Learn how to maintain and operate a global-scale production grid
  - Gain experience in close collaboration between regional (resource) centres
  - Understand how to integrate fully with existing computing services
- Building on the results of earlier research projects: learn how to move from test-beds to production services
- Address policy-like issues needing agreement between collaborating sites

Slide 5: The LCG Deployment Board
- The Grid Deployment Board (GDB) was set up to address policy issues requiring agreement and negotiation between resource centres
- Members: country representatives, applications, and the project
- Sets up working groups (short-term or ongoing) that bring in technical experts to focus on specific issues; the GDB approves recommendations from working groups
- Groups so far:
  - Several that outlined initial project directions (operations, security, resources, support)
  - Security – a standing group covering many policy issues
  - Storage management
  - Grid Operations Centre (GOC) task force
  - User support group

Slide 6: Policies and procedures
- Six documents approved to date:
  - Security and Availability Policy for LCG (prepared jointly with the GOC task force)
  - Approval of LCG-1 Certificate Authorities
  - Audit Requirements for LCG-1
  - Rules for Use of the LCG-1 Computing Resources
  - Agreement on Incident Response for LCG-1
  - User Registration and VO Management
- Four more being written (with the GOC group):
  - LCG Procedures for Resource Administrators
  - LCG Guide for Network Administrators
  - LCG Procedure for Site Self-Audit
  - LCG Service Level Agreement Guide

Slide 7: Security and Availability Policy
- Prepared jointly with the GOC group
- Objectives – an agreed set of statements covering:
  - The attitude of the project towards security and availability
  - Authority for defined actions
  - Responsibilities of individuals and bodies
- Promote the LHC science mission:
  - Control of resources and protection from abuse
  - Minimise disruption to science
  - Obligations to other network users (inter- and intra-nets)
- Broad scope, not just hacking: maximise availability and integrity of services and data
- Applies to resources, users, administrators, developers (systems and applications), and VOs
- Does NOT override local policies
- Procedures, rules, guides etc. are contained in separate documents

Slide 8: Policy: ownership, maintenance and review
- The Policy is:
  - Prepared and maintained by the Security Group and GOC
  - Approved by the GDB
  - Formally owned and adopted as policy by the SC2
- Technical documents implementing or expounding the policy (procedures, guides, rules, ...):
  - Owned by the Security Group and GOC, allowing timely and competent changes
  - GDB approval required for initial documents and significant revisions
  - Must address the objectives of the policy
- The top-level policy is reviewed at least every 2 years, with ratification by the SC2 via the GDB if major changes are required

Slide 9: User Registration & VO Management
- A user registers once with LCG (and not at individual sites):
  - Accepts the User Rules
  - Gives the agreed set of personal data (agreement on a minimal set was an important achievement)
  - Requests to join one VO/experiment
- Sites need robust VO Registration Authorities (RAs) to check that (see the sketch below):
  - The user actually made the request
  - The user is a valid member of the institute and experiment
  - All user data looks reasonable
- User data is distributed to all LCG sites
- Work is needed on more robust, scalable procedures for 2004
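
As a sketch of the checks an RA would perform, the following hypothetical Python captures the validation steps listed above. The field names and concrete checks are illustrative assumptions, not the actual LCG registration schema:

```python
# Hypothetical sketch of the RA validation described on this slide.
from dataclasses import dataclass

@dataclass
class RegistrationRequest:
    name: str
    email: str
    institute: str
    vo: str                     # the one VO/experiment being joined
    accepted_user_rules: bool

def ra_validate(req: RegistrationRequest,
                institute_members: set[str],
                known_vos: set[str]) -> list[str]:
    """Return a list of problems; an empty list means the RA can approve."""
    problems = []
    if not req.accepted_user_rules:
        problems.append("user has not accepted the User Rules")
    if req.vo not in known_vos:
        problems.append(f"unknown VO/experiment: {req.vo}")
    if req.name not in institute_members:
        problems.append("user is not a known member of the institute")
    if "@" not in req.email:
        problems.append("contact data looks unreasonable")
    return problems
```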

Slide 10: Approach to Service SLAs
- A formal contract with the GOC? No, because:
  - The GOC is not (likely to be) a legal body
  - The GOC will not (be likely to) have any formal powers over service providers
  - The GOC will not (be likely to) pay for any services
  - So it is difficult for the GOC to enforce a traditional SLA
- Instead, prefer a virtual contract between the service provider and the LCG grid community:
  - Any centre wishing to provide a service must publish its design levels for the specified service-level parameters of that service
  - LCG will then monitor the actual levels achieved and publish them so they may be compared with the design levels (see the sketch below)
  - Service providers (centres) will then compete on quality, or possibly quality/cost, either to attract work or to enhance their reputation
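
The "virtual contract" thus reduces to publishing design levels and comparing them against monitored ones. A minimal sketch, with invented parameter names and thresholds:

```python
# Illustrative sketch of the virtual-contract idea: compare the
# levels a centre published against what monitoring measured.
# Parameter names and values are assumptions, not an LCG format.
design = {"availability": 0.99, "reliability": 0.95}
measured = {"availability": 0.97, "reliability": 0.96}

for parameter, target in design.items():
    achieved = measured[parameter]
    status = "meets" if achieved >= target else "below"
    print(f"{parameter}: designed {target:.2%}, "
          f"achieved {achieved:.2%} ({status} design level)")
```

The design choice here is that enforcement is replaced by transparency: the published comparison, not a contract, is what creates the pressure to perform.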

Slide 11: Form of SLA
- One for each instance of an LCG service
- To be published on the GOC website in a standard format, exactly as provided by the service administrator
- The format is still to be agreed, but is likely to contain as a minimum (an illustrative example follows below):
  - Identification of the service (type, release, etc.)
  - A statement on compliance with the Security and Availability Policy (standard wording)
  - Limitations on use (if any)
  - Designed availability
  - Designed reliability
  - Designed performance (service-specific; to be defined for each type of service)
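
Purely as an illustration (the slide notes the format was still to be agreed), one published SLA entry covering the minimum fields above might look like this; every key name and value is an assumption:

```python
# Hypothetical SLA record built from the minimum fields on this slide.
sla = {
    "service": {"type": "compute-element", "release": "LCG-1"},
    "security_policy_compliance": "standard wording",
    "limitations_on_use": None,
    "designed_availability": 0.99,
    "designed_reliability": 0.95,
    # Service-specific, to be defined per service type:
    "designed_performance": {"job_throughput_per_day": 10_000},
}
```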

Slide 12: Sites in LCG-1 – 21 Nov

Slide 13: Future Challenges and Issues

Slide 14: Challenges – 1
- Authentication issues:
  - Must agree the future PMA bodies for CAs; EGEE is likely to take over this role for Europe, collaborating with GridPMA.org, TERENA and GGF
  - Online CA services and credential repositories (KCA, SLAC Virtual Smart Card, MyProxy, ...): best practice and minimum standards still need to be defined
- Authorization developments:
  - VOMS (from EDG) is to be implemented soon in LCG; it confirms membership of a VO, plus groups and roles
  - Local authorization (EDG LCAS/LCMAPS, US CMS VOX) and VOMS-aware services are needed to give the experiments the functionality they require (see the sketch below)
  - BUT this is an active research area – how VO-level membership maps to local infrastructures is still open
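
The local-authorization step the slide refers to can be illustrated with a toy lookup mapping a VO/group/role attribute (as VOMS would assert it) to a local account. The FQANs and pool account names here are invented, and real LCAS/LCMAPS configuration looks quite different:

```python
# Toy illustration of mapping VO-level attributes to local accounts.
VOMS_TO_LOCAL = {
    "/atlas/Role=production": "atlasprd",
    "/atlas":                 "atlas-pool",
    "/cms":                   "cms-pool",
}

def map_to_local_account(fqan: str) -> str:
    # Longest-prefix match, so a specific role wins over the bare VO.
    for pattern in sorted(VOMS_TO_LOCAL, key=len, reverse=True):
        if fqan.startswith(pattern):
            return VOMS_TO_LOCAL[pattern]
    raise PermissionError(f"no local mapping for {fqan}")

print(map_to_local_account("/atlas/Role=production"))  # -> atlasprd
print(map_to_local_account("/atlas"))                  # -> atlas-pool
```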

Slide 15: Challenges – 2
- Collaboration between resource providers:
  - There are real risks in opening resources to a wide community – it is essential to build and maintain trust
  - Policies must be complete and enforced, but the technical solutions to implement and enforce them are not yet there
- Must maintain open access for all collaborators:
  - Successful so far
  - A scalable solution for selective access needs tools and services that do not yet exist
- For LCG, issues of charging are not directly relevant, but accounting is still needed; charging will be important for EGEE

Slide 16: Challenges – 3
- Interoperability between grids (national, international, community, ...):
  - Must understand what this means at all levels (political, technical, ...)
  - Many very basic technical challenges to address
- Status today – interoperating grids need:
  - The same middleware
  - The same information schema
  - The same usage policies
  - Compatible ways of mapping users (illustrated below)
  - Agreement on security, access, etc.
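
A tiny illustration of why compatible user mapping matters: two grids that map the same certificate DN to different local identities disagree about who the user is. The DNs and account names here are invented:

```python
# Two grids' user mappings (certificate DN -> local identity).
grid_a = {"/C=CH/O=CERN/CN=Some User": "atlas001"}
grid_b = {"/C=CH/O=CERN/CN=Some User": "grid_user_42"}

# Flag every DN the two grids map inconsistently.
for dn in grid_a.keys() & grid_b.keys():
    if grid_a[dn] != grid_b[dn]:
        print(f"{dn}: inconsistent mapping "
              f"({grid_a[dn]} vs {grid_b[dn]})")
```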

Slide 17: Summary
- LCG has made significant progress in understanding the issues, particularly those related to security and access
- Much more remains to do:
  - Many things not needed within a single community will become important for EGEE – e.g. charging and the cost of services
  - Real SLAs – EGEE will address these; LCG will be a customer
  - Federating grids, in all its guises, is not really understood at any level
- It is essential to have a forum where these issues can be addressed

