Slide 1: Open Science Grid Middleware at the WLCG LHCC Review
Ruth Pordes, Fermilab

Slide 2: Outline
- OSG Middleware
- The Virtual Data Toolkit
- Current Status
- Next Steps

Slide 3: Middleware on the OSG
The middleware running on the Open Science Grid includes both OSG-supported and VO-supported components. OSG "concern" covers both, with different degrees of attention and effort, in order to operate a viable, performant distributed facility. VOs are responsible for, and have control over, their end-to-end distributed systems built on the OSG infrastructure.
OSG middleware:
- includes the support and evolution of the Virtual Data Toolkit;
- provides common services and components for multiple VOs to use;
- is deployed on resources owned by members of the OSG Consortium and interfaces to the existing installations of operating systems, utilities and batch systems;
- has a non-root installation as well as a root installation for farms.
VO middleware:
- includes services that one VO uses (and/or shares through explicit agreement with other VOs);
- "on-the-fly" deployment is encouraged.

Slide 4: OSG Middleware Infrastructure
Layers of the stack:
- Applications: user science codes and interfaces.
- VO middleware: ATLAS (Panda, DQ, etc.), CMS (cmssw, LFC, etc.), Bio (BLAST, CHARMM, etc.), LIGO (LDR, catalogs, etc.).
- OSG Release Cache: VDT + OSG-specific configuration + utilities.
- Virtual Data Toolkit (VDT): core technologies plus software needed by stakeholders, e.g. VOMS, CEMon, VDS, MonALISA, VO Privilege.
- Core grid technology distributions: Condor, Globus, MyProxy.
- Existing operating systems, batch systems and utilities.

Slide 5: US LHC Contributions to OSG Middleware
- Make priorities and schedules clear for the common components and services, e.g. interoperable information publishing, Site Functional Tests.
- Participate in the testing of new OSG releases.
- Contribute middleware that they develop or adopt for use by multiple VOs, e.g. MonALISA, Panda?
- Use and harden new components before they become part of the common middleware, e.g. the gLite Resource Broker.
- Develop middleware and applications that can be installed and configured "on-the-fly" and without the need for root privilege.

Slide 6: Software Release Process
Flow, from Day 0 to Day ~180: gather requirements; build software; test; validation test bed; ITB release candidate; integration test bed (test services, system, applications, interoperability); OSG release.
Goals:
- Decrease errors in production.
- Make installations easy, quick and foolproof.
- Make releases and updates faster.
- Support different subsets of the software for different purposes.
Integration:
- Runs on a grid of ~15 sites.
- Includes application validation.
- Responsible for writing user and administrator documentation.
- An ongoing activity, e.g. for patches.

Slide 7: Integrated Security
Security of deployed software and of software deployment is achieved through:
- management controls, i.e. agreements of trust, knowing the software provider, etc.;
- operations controls, i.e. resource administrators watching and auditing their systems closely;
- technical controls, i.e. auditing the installations for versions and configuration information (a purely illustrative sketch follows below).
The urgency of a security patch determines the timeline for testing and deployment.
Resource and service owners have the authority and responsibility for security on their resources and services.
VO "owners" are expected to have a security process for their VO middleware.
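The "technical controls" item above amounts to keeping an inventory of what is installed where. The following is a minimal, purely illustrative sketch of such an audit, assuming a hypothetical per-installation version manifest (version.txt) and example install paths; it is not the VDT's or OSG's actual auditing tooling.

```python
#!/usr/bin/env python
# Illustrative sketch of a technical-control audit: record the software
# versions and configuration files found under a set of install directories.
# The "version.txt" manifest name and the install paths are hypothetical
# placeholders, not a real VDT convention.
import json
import os
import time

INSTALL_ROOTS = ["/opt/osg", os.path.expanduser("~/vdt")]  # example paths


def audit(roots):
    """Collect per-directory version manifests and config-file listings."""
    report = {"timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"), "installs": []}
    for root in roots:
        entry = {"root": root, "version": None, "configs": []}
        manifest = os.path.join(root, "version.txt")  # hypothetical manifest
        if os.path.isfile(manifest):
            with open(manifest) as f:
                entry["version"] = f.read().strip()
        for dirpath, _dirnames, filenames in os.walk(root):
            entry["configs"].extend(
                os.path.join(dirpath, name)
                for name in filenames if name.endswith(".conf"))
        report["installs"].append(entry)
    return report


if __name__ == "__main__":
    print(json.dumps(audit(INSTALL_ROOTS), indent=2))
```

In practice a site would feed a report like this into whatever monitoring or configuration-management system it already runs.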

Slide 8: Software Support
For the 5 years of the OSG project the OSG team will:
- provide front-line support for all OSG middleware;
- respond to security issues;
- work with the software providers to solve problems reported by administrators and users;
- communicate and coordinate with developers of new software.
Support issues:
- Some of the software is supported only on a best-effort basis.
- Some of the software has support funding for the short term only.
Current plans:
- "Open source"-style support is acceptable for some components.
- OSG Consortium members may step up to provide support for others.
- OSG will have to consider providing support within the project.

Slide 9: The Virtual Data Toolkit
OSG supports the VDT for the use of its own stakeholders as well as other projects. The VDT integrates, builds, tests and distributes binary releases of common middleware for use on a grid-based distributed facility.
Integration of the software packages includes:
- dealing with the dependencies, and differences in version dependencies, between software packages;
- providing easy installation using Pacman (which installs and configures) and RPM (which installs only); a hedged install sketch follows below;
- automating the installation and configuration scripts for defaults and site-specific needs;
- making different functional packages, i.e. Client, WN (worker node), Server, VOMS;
- testing everything on thirteen Linux platforms (and growing) using the NMI build and test facility.
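To make the Pacman path above concrete, here is a hedged sketch of scripting a VDT client install from Python. It assumes the usual pacman -get <cache>:<package> invocation; the cache URL, package name and install directory are placeholders to be replaced with those of the VDT release actually in use.

```python
#!/usr/bin/env python
# Minimal sketch of scripting a Pacman-based VDT install, assuming the
# usual "pacman -get <cache>:<package>" form. The cache URL, package name
# and install directory below are placeholders.
import os
import subprocess
import sys

VDT_CACHE = "http://vdt.cs.wisc.edu/vdt_cache"  # placeholder cache URL
PACKAGE = "Client"                              # e.g. Client, Server, WN
INSTALL_DIR = os.path.expanduser("~/vdt")       # non-root install location


def install(cache, package, install_dir):
    """Run pacman in the target directory to fetch and configure a package."""
    os.makedirs(install_dir, exist_ok=True)
    cmd = ["pacman", "-get", "%s:%s" % (cache, package)]
    result = subprocess.run(cmd, cwd=install_dir)
    return result.returncode


if __name__ == "__main__":
    sys.exit(install(VDT_CACHE, PACKAGE, INSTALL_DIR))
```

Because Pacman installs and configures within the directory it is run from, pointing it at a user-writable directory is what makes the non-root installation mentioned earlier practical.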

Slide 10: Software in the Current Version of the VDT
- Security: VOMS (VO membership), PRIMA/GUMS (local authorization), mkgridmap (local authorization), MyProxy (proxy management), GSI SSH, CA CRL updater.
- Monitoring: MonALISA, Site-Verify.
- Accounting: OSG Gratia.
- Support infrastructure: Apache, Tomcat, MySQL (with MyODBC), non-standard Perl modules, Wget, Squid, Logrotate, configuration scripts.
- Job management: Condor (including Condor-G and Condor-C), Globus GRAM; a hedged submission sketch follows below.
- Data management: GridFTP, RLS (replica location), DRM storage management, Globus RFT (reliable file transfer).
- Information services: Globus MDS, GLUE schema and providers, gLite CEMon.
- Client tools: Virtual Data System, SRM clients (V1 and V2), UberFTP (GridFTP client).
- Developer tools: PyGlobus, PyGridWare.
- Testing: NMI Build & Test, VDT tests.
Licences: all the software can be freely redistributed. All licences are BSD or GNU except the MonALISA kernel, the Pacman copyright (and, coming up, tclGlobus).
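As an illustration of the job-management layer (Condor-G on top of Globus GRAM), the sketch below writes a grid-universe submit description and hands it to condor_submit. The gatekeeper host and jobmanager name are invented placeholders, and the exact submit keywords should be checked against the Condor version deployed on the site.

```python
#!/usr/bin/env python
# Sketch of a Condor-G submission: write a grid-universe submit description
# and hand it to condor_submit. The gatekeeper host below is a placeholder.
import subprocess
import tempfile

SUBMIT_TEMPLATE = """\
universe      = grid
grid_resource = gt2 {gatekeeper}/jobmanager-condor
executable    = {executable}
arguments     = {arguments}
output        = job.out
error         = job.err
log           = job.log
queue
"""


def submit(gatekeeper, executable, arguments=""):
    """Write a submit file and call condor_submit; return its exit code."""
    submit_text = SUBMIT_TEMPLATE.format(
        gatekeeper=gatekeeper, executable=executable, arguments=arguments)
    with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
        f.write(submit_text)
        submit_file = f.name
    return subprocess.run(["condor_submit", submit_file]).returncode


if __name__ == "__main__":
    # Placeholder gatekeeper; replace with a real OSG compute element.
    submit("cms-gatekeeper.example.edu", "/bin/hostname")
```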

Slide 11: Support for Software in the VDT
Component | Owner | Contact
Apache | Apache Foundation | http://www.apache.org/
Berkeley Disk Resource Manager (DRM) | LBNL SDM Project | Arie Shoshani
CEMon | EGEE | Claudio Grandi/JRA1
Certificate Authorities in the VDT | OSG | Alain Roy
Certificate Scripts | LBNL | Doug Olson
Clarens & jClarens | Caltech/US-CMS | Conrad Steenberg
Condor | Condor Project | Miron Livny
EDG Make Gridmap | EGEE | Claudio Grandi/JRA1
Fault Tolerant Shell | Condor Project | Miron Livny
Fetch CRL | EUGridPMA | David Groep
Generic Information Provider (GIP) | University of Iowa | Shaowen Wang
Globus | Globus Project | Ian Foster
GLUE information | WLCG | Laurence Field
GLUE Schema | EGEE | Sergio Andreozzi/JRA1
Gratia | Fermilab | Philippe Canal
GUMS | BNL | John Hover

Slide 12: Support for Software in the VDT (continued)
Component | Owner | Contact
KX509 | NMI | University of Michigan
MonALISA | Caltech/US-CMS | Iosif Legrand
MyProxy | NCSA | myproxy-users@ncsa.uiuc.edu list
NetLogger | LBNL | Brian Tierney
Pacman | BU | Saul Youssef
PRIMA | Fermilab | Gabriele Garzoglio
PyGlobus | LBNL | Keith Jackson
RLS | Globus | Carl Kesselman
Squid | http://www.squid-cache.org | squid-bugs@squid-cache.org
SRMCP | Fermilab | Timur Perelmutov
Tcl | SourceForge | http://tcl.sourceforge.net
Tomcat | Apache Foundation | http://tomcat.apache.org
UberFTP | NCSA | gridftp@ncsa.uiuc.edu
VDT scripts and utilities | OSG | Alain Roy
Virtual Data System (VDS) | UofC, ISI | Ian Foster, Carl Kesselman
VOMS and VOMS Admin | EGEE | Vincenzo Ciaschini/JRA1

Slide 13: Current Status
- OSG 0.4.1 plus VO updates is supporting SC4, CMS CSA06 and the ATLAS data challenges.
  - Storage Elements are provided by the experiments.
  - Accounting (Gratia), Information Services and the gLite client are being installed on top of OSG 0.4.1.
- US LHC mainly uses Tier-1 and Tier-2 sites, with some sharing between the experiments.
  - Opportunistic use of other sites for simulations is ongoing.
- Deployment of the Globus security patch took longer than we are happy with and caused some problems; we are addressing this.

Slide 14: Next Steps - Main Deliverables
- Add dCache/SRM, packaged in RPMs and tested on 15 sites with shared use (by LHC and non-LHC VOs).
- Grid-wide use of Gratia accounting.
- VO-based resource selection based on CEMon.
- First version of Edge Services: a virtual-machine (Xen) sandbox for VO services at a site.
- First version of glexec integrated with the OSG security infrastructure (a joint project between EGEE and OSG).
- Provide smooth VDT updates to a running production service. Goals: support in-place vs. on-the-side updates (a generic illustration of the on-the-side pattern follows below); preserve the old configuration while making big changes; make updates an hour's work, not a day's.
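The "on-the-side" update style mentioned in the last item can be pictured as installing the new release next to the old one, carrying the configuration forward, and switching a pointer only after validation, so the old release remains available for rollback. The sketch below is a generic illustration of that pattern under assumed placeholder paths; it is not the VDT's actual update mechanism.

```python
#!/usr/bin/env python
# Generic illustration of an "on-the-side" update: install the new release
# in its own directory, copy the old configuration forward, validate, then
# atomically repoint a "current" symlink. Paths are placeholders.
import os
import shutil

BASE = "/opt/osg"                     # placeholder install root
CURRENT = os.path.join(BASE, "current")


def update_on_the_side(new_version, validate):
    """Prepare the new release beside the old one and switch when validated."""
    old_dir = os.path.realpath(CURRENT)
    new_dir = os.path.join(BASE, new_version)
    os.makedirs(new_dir, exist_ok=True)
    # ... install the new release into new_dir here (e.g. via pacman) ...
    old_conf = os.path.join(old_dir, "etc")
    if os.path.isdir(old_conf):
        # Preserve the old configuration while making big changes.
        shutil.copytree(old_conf, os.path.join(new_dir, "etc"),
                        dirs_exist_ok=True)
    if not validate(new_dir):
        raise RuntimeError("validation failed; old release left untouched")
    tmp_link = CURRENT + ".new"
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(new_dir, tmp_link)
    os.replace(tmp_link, CURRENT)     # atomic switch; rollback = repoint link
```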

Slide 15: OSG 0.6.0
Task | Priority
Gratia | 1
Improve ease of upgrade from one VDT version to another | 1
BDII | 2
About 50 small bug fixes and enhancements | 2
New GIPs | 2
gLite client tools for interoperability purposes | 2
New VDS | 2
dCache/SRM as "RPM" | 3
tclGlobus | 3
Update the platforms we build on | 3
New Globus: WS-GRAM and GT4 | 3
CEMon | 4
Squid | 4
Globus Virtual Workspaces, for the Edge Services Framework | 5
Job managers/GIPs for Sun Grid Engine | 5
Switch from MySQL to MySQL-Max | 5
Upgrade MonALISA to the latest version | 5
MDS 4 | 5
VOMRS | 5
glexec | 6
Grid Exerciser | 6
Production release of the VDT (1.4.0) | 6
Notes: the release has been delayed to February 2007 to allow for more improvements (and to complete the OSG project plans). We expect to rely on VDT 1.4.0 or 1.6.0 (the stable release series). Some components have early installs on OSG 0.4.1.

Slide 16: Summary
- OSG aims to meet the needs of the LHC experiments through a combination of middleware provided by the project, the Tier-1 centers and the experiments themselves.
- The OSG infrastructure lets the experiments deploy their own services and middleware over the common stack.
- OSG is responsible for the VDT, the core middleware, part of which is also supported for EGEE and the WLCG.
- In the next year, storage management, VO-sandboxed services and "just-in-time" workload management services will be added to the VDT, along with improvements in installation and configuration.
- An additional focus of the next year will be management and metrics.

