Presentation transcript: "The Open Science Grid: Building an Open Cyberinfrastructure for Science." GRID'2006, Dubna, Russia, June 26, 2006. Rob Gardner, University of Chicago.

1 The Open Science Grid: Building an Open Cyberinfrastructure for Science GRID'2006 Dubna, Russia June 26, 2006 Rob Gardner University of Chicago (Russian-language title slide)

2 The Open Science Grid Building an Open Cyberinfrastructure for Science GRID’2006 Dubna, Russia June 26, 2006 Rob Gardner University of Chicago

3 Introduction What is the Open Science Grid? How do we build a sustainable facility? How is the OSG facility operated? What does the road ahead look like?

4 What…

5 Open Science Grid Evolved Grid3 to a production-scale infrastructure US-based infrastructure for many sciences… part of a Cyberinfrastructure Consortium framework joins efforts from disparate projects Technical principles guide deployment

6 OSG 0.4 Presently Deployed Production and Integration Grids Led by US LHC, but many VOs contributing > 50 sites at Universities and national labs Storage managed by VOs Using Virtual Data Toolkit (VDT) Contains Globus, Condor, and EGEE components Interoperability with LCG key goal

7 Another definition… Ian Foster

8 The OSG Consortium The OSG has taken an approach of developing an Open, sustainable Grid infrastructure through partnership of many stakeholders  Universities and National Laboratories  Virtual organizations, such as the experiments of the Large Hadron Collider Governance  OSG Council, Executive Board Management  Executive Team

9 OSG Project Execution (OSG Executive Board)  OSG PI: Miron Livny  Executive Director: Ruth Pordes  Deputy Executive Directors: Rob Gardner, Doug Olson  Resources Managers: Paul Avery, Albert Lazzarini  Applications Coordinators: Torre Wenaus, Frank Würthwein  Facility Coordinator: Miron Livny  Education, Training, Outreach Coordinator: Mike Wilde  External Projects Engagement Coordinator: Alan Blatecky  Operations Coordinator: Leigh Grundhoefer  Software Coordinator: Alain Roy  Security Officer: Don Petravick

10 Principles - high level Provide guaranteed and opportunistic access to shared resources Operate a heterogeneous environment, both in the services available at any site and across VOs, with multiple implementations behind common interfaces Support multiple software releases at any one time Interface to Campus and Regional Grids Federate with other national/international Grids

11 Principles - drivers Delivery to the schedule, capacity and capability of participating VOs  Support for/collaboration with other physics/non-physics communities  Partnerships with other Grids - especially EGEE and TeraGrid  Evolution by deployment of externally developed new services and technologies

12 OSG and the WLCG The WLCG spans both EGEE and OSG infrastructures, and as such the OSG facility forms a core piece of the WLCG OSG delivers accountable resources (compute cycles and storage facilities) for LHC experiments OSG also federates with other Grid infrastructures such as the TeraGrid in the US Goal is to provide a seamless computing facility for researchers

13 How…

14 Building the OSG Facility Architectural decisions driven by stakeholder requirements Core middleware provisioned by the VDT team New services enter through VO testbeds, the Integration Testbed, and the VDT Configured Grid releases go through three steps:  Integration (ITB)  Provisioning (deployment preparations including documentation)  Deployment via VO-managed processes

15 OSG Service Stack (top to bottom)  Applications: ATLAS Services, CMS Services, Other VO Services  Infrastructure - OSG Release Cache: VDT + Configuration, Validation, VO management  Virtual Data Toolkit (VDT) Common Services: NMI + VOMS, CEMon (common EGEE components), MonALISA, Clarens, AuthZ  NMI releases (Globus + Condor) Fig. from R. Pordes

16 OSG Service Overview Compute elements  GRAM, GridFTP, information services (GIP), monitoring, worker node client tools (e.g. srmcp) Storage elements  SRM-drm, SRM-dCache (provided by VOs), v1.1 Site level services  GUMS - for privilege (authorization) mappings VO level services  VOMS and user role assignments VO edge services  Semi-persistent services & agents as needed by applications Multi-VO, common services  Monitoring repositories, catalogs, BDII index services, etc.
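
To make the compute-element side of this list concrete, here is a minimal sketch (not from the slides) of how a user might exercise a CE: obtain a VOMS proxy and submit a test job through Condor-G against a pre-WS GRAM gatekeeper. The CE hostname and VO name are placeholders, and it assumes the VDT client tools (voms-proxy-init, condor_submit) are installed and on the PATH.

    import subprocess, textwrap

    CE = "ce.example.edu/jobmanager-condor"   # hypothetical GRAM gatekeeper contact string
    VO = "osg"                                # hypothetical VO name

    # 1. Get a VOMS proxy carrying the VO attributes used for authorization.
    subprocess.run(["voms-proxy-init", "-voms", VO], check=True)

    # 2. Write a Condor-G submit description targeting the pre-WS GRAM (gt2) CE.
    submit = textwrap.dedent(f"""\
        universe      = grid
        grid_resource = gt2 {CE}
        executable    = /bin/hostname
        output        = job.out
        error         = job.err
        log           = job.log
        queue
    """)
    with open("osg_test.sub", "w") as f:
        f.write(submit)

    # 3. Submit through Condor-G; condor_q can then be used to watch the job.
    subprocess.run(["condor_submit", "osg_test.sub"], check=True)
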

17 OSG Process Applications  Integration  Provision  Deploy Integration Testbed (ITB): 15-20 sites Production Grid (OSG): 50+ sites, including Sao Paulo, Taiwan, S. Korea

18 Authorization Services Site level services to support fine-grained, role-based access to resources:  GUMS - Grid User Management System - maps user proxy to local accounts based on role and group  Site admins grant access rights and privileges based on accounts  PRIMA callout from GRAM gatekeeper - assigns account based on GUMS mapping and submits to local scheduler  Roles (e.g. usatlas1=production; usatlas2=software; usatlas3=users)  GUMS receives updates on mappings from VOMS
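
As an illustration only, the toy mapping below mimics the kind of role-to-account decision GUMS makes for the example roles on the slide (usatlas1=production, usatlas2=software, usatlas3=users). Real GUMS policy lives in its own service configuration, not in Python; the FQAN patterns here are assumed examples.

    # Toy role-based mapping: most specific FQAN pattern first.
    ROLE_MAP = [
        ("/atlas/usatlas/Role=production", "usatlas1"),
        ("/atlas/usatlas/Role=software",   "usatlas2"),
        ("/atlas/usatlas",                 "usatlas3"),   # default: ordinary users
    ]

    def map_account(fqan: str) -> str:
        """Return the local account for a VOMS FQAN, or refuse access."""
        for pattern, account in ROLE_MAP:
            if fqan.startswith(pattern):
                return account
        raise LookupError(f"no mapping for {fqan!r}; access denied")

    if __name__ == "__main__":
        print(map_account("/atlas/usatlas/Role=production"))  # usatlas1
        print(map_account("/atlas/usatlas"))                  # usatlas3
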

19 Authorization Process Fig. from I. Fisk

20 Information Services GIP (Generic Information Provider)  An information service that aggregates static and dynamic resource information  Produces information for use with LDAP-based Grid information systems  Glue 1.2 schema GIP use cases  LCG-OSG interoperability  GridCat cross checks Site level BDII service  Scalability  Query by LCG RB
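
A minimal sketch of what "LDAP-based" means in practice: query a site BDII for the GlueCE entries published by the GIP using the standard OpenLDAP command-line client. The hostname is hypothetical; port 2170 and the base DN "o=grid" follow the usual Glue/BDII conventions.

    import subprocess

    BDII = "bdii.example.edu"   # hypothetical site-level BDII host

    result = subprocess.run(
        ["ldapsearch", "-x", "-LLL",
         "-H", f"ldap://{BDII}:2170",
         "-b", "o=grid",
         "(objectClass=GlueCE)",
         "GlueCEUniqueID", "GlueCEStateFreeCPUs"],
        capture_output=True, text=True, check=True)

    # Each GlueCE entry describes one queue on a compute element (Glue 1.2 schema).
    print(result.stdout)
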

21 Monitoring and Accounting MonALISA, site level accounting services (native tools), site verify checks & report, GridExerciser (OSG 0.4.1)
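
A rough sketch in the spirit of the "site verify" checks mentioned above: probe the standard service ports on a hypothetical compute element and report which respond. The real OSG verification scripts do far more (certificate checks, job submission tests, etc.); this only shows the idea.

    import socket

    HOST = "ce.example.edu"                    # hypothetical CE hostname
    PORTS = {"GRAM gatekeeper": 2119,          # pre-WS GRAM default port
             "GridFTP":         2811,          # GridFTP control channel
             "MDS/GIP (LDAP)":  2135}          # legacy MDS-2 information port

    for name, port in PORTS.items():
        try:
            with socket.create_connection((HOST, port), timeout=5):
                status = "open"
        except OSError:
            status = "unreachable"
        print(f"{name:20s} {HOST}:{port}  {status}")
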

22 Operations and User Support Virtual Organization (VO)  Group of one or more researchers Resource provider (RP)  Operates Compute Elements and Storage Elements Support Center (SC)  Provides support for one or more VOs and/or RPs VO support centers  Provide end user support, including triage of user-related trouble tickets Community Support  Volunteer effort to provide an SC for RPs and VOs without their own SC, plus a general help discussion mailing list

23 Operations Model Real support organizations often play multiple roles Lines represent communication paths and, in our model, agreements. We have not progressed very far with agreements yet. Gray shading indicates that OSG Operations is composed of effort from all the support centers.

24 Imagining the road ahead…

25 OSG Release Timeline (11/03 through 7/06) Integration releases: ITB 0.1.2, 0.1.6, 0.3.0, 0.3.4, 0.3.7, 0.5.0 Production releases: OSG 0.2.1, 0.4.0, 0.4.1, 0.6.0

26 Middleware Release Roadmap OSG 0.6.0 Fall 2006 Accounting; Squid (web caching in support of s/w distribution + database information); SRM V2 + AuthZ; CEMon ClassAd-based Resource Selection; support for MDS-4; possible requirement to use WS-GRAM; Edge Services. OSG 0.8.0 Spring 2007 Just-in-time job scheduling, pull-mode Condor-C, support for sites to run pilot jobs and/or glide-ins using glexec for identity changes. OSG 1.0 End of 2007

27 Edge Services Framework Goal is to support deployment of VO services. Based on Xen virtual machines and service images. Site supports a Xen server and the VO loads images. US-ATLAS has done a proof of concept. FNAL recently implemented their LCG CE using a virtual machine.
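
To give a feel for the mechanics, here is a heavily hedged sketch of the edge-services idea: the site runs a Xen host and a VO supplies a guest configuration/image that is started on demand. The configuration file name is a placeholder, and real ESF deployments used their own tooling rather than this direct call to the Xen 3.x "xm" CLI.

    import subprocess

    VO_CONFIG = "/etc/xen/cms-edge-service.cfg"   # hypothetical VO-provided guest config

    # Start the VO's edge-service virtual machine on the Xen host.
    subprocess.run(["xm", "create", VO_CONFIG], check=True)

    # List running domains to confirm the edge service came up.
    subprocess.run(["xm", "list"], check=True)
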

28 Storage Authorization Add storage authorization callout gPlazma for SRM-dCache SEs Fig. from I. Fisk

29 Accounting Fig. from I. Fisk

30 Concluding thoughts…

31 Conclusions & thoughts… The OSG is an open, collaborative production-scale Grid infrastructure for Science It has evolved from a set of simple principles stemming from hard, realistic requirements of the contributing stakeholders It is further evolving into a managed, core project that lives in a cyber-mesh of distributed services from large scale infrastructures and facilities Managing its change with increasing workload demands will be the real challenge!

32 Спасибо! (Thank you!) With acknowledgements to contributions from my colleagues: Jerome Lauret (BNL/STAR/OSG) - original speaker, Ruth Pordes (FNAL/OSG), Ian Fisk (FNAL/CMS/OSG), Doug Olson (LBNL/STAR/OSG)

