Role, Objectives and Migration Plans to the European Middleware Initiative (EMI)
Morris Riedel, Jülich Supercomputing Centre (JSC) & DEISA

2 Outline

3 Outline
- UNICORE 101 & Usage Examples
- Role as HPC-Driven Grid Middleware
  - Traditional role and emerging role in HTC
- Objectives and Migration Plans
  - Migration to Common Client API
  - Migration to Common EMI Security Infrastructure
  - Common Registry Service Objective
  - PGI Compliance for Compute and Data Objective
  - Common Attribute-Based Authorization
  - Moving towards a Potential EMI Architecture
  - Other Potential Objectives
- Summary

4 UNICORE 101

5 Guiding Principles, Implementation Strategies
- Open source under the BSD license, with software hosted on SourceForge
- Standards-based: OGSA-conformant, WS-RF 1.2 compliant
- Open, extensible Service-Oriented Architecture (SOA)
- Interoperable with other Grid technologies
- Seamless, secure and intuitive, following a vertical end-to-end approach
- Mature security: X.509, proxy and VO support
- Workflow support tightly integrated, while remaining extensible for different workflow languages and engines for domain-specific usage
- Application integration mechanisms at the client, service and resource levels
- Variety of clients: graphical, command-line, API, portal, etc.
- Quick and simple installation and configuration
- Support for many operating systems (Windows, MacOS, Linux, UNIX) and batch systems (LoadLeveler, Torque, SLURM, LSF, OpenCCS)
- Implemented in Java to achieve platform independence

6 Clients & APIs

7 Usage in Supercomputing. Slide courtesy of Alexander Moskovsky (Moscow State University)

8 Usage in National Grids. Slide courtesy of Michael Sattler, Alfred Geiger (T-Systems SfR); slide courtesy of André Höing (TU Berlin)

9 Usage in Commercial Areas. Slide courtesy of Alfred Geiger (T-Systems SfR); slide courtesy of Bastian Baranski (52° North & University Münster)

10 Role as HPC-Driven Grid Middleware

11 Grid Driving High Performance Computing (HPC)
- Used in:
  - DEISA (European Distributed Supercomputing Infrastructure)
  - National German Supercomputing Centre NIC
  - Gauss Centre for Supercomputing (alliance of the three German national HPC centres & official National Grid Initiative for Germany in the context of EGI)
  - PRACE (European PetaFlop HPC Infrastructure), currently starting up
- Traditionally takes up major requirements from, e.g.:
  - HPC users (e.g. MPI, OpenMP)
  - HPC user support teams
  - HPC operations teams
  - ...and via the SourceForge platform

12 UNICORE Architecture Overview

13 EMI and High Throughput Computing (HTC)
- UNICORE can be used in non-HPC-focused environments
  - German National Grid D-Grid and some of its communities
- High Throughput Computing (HTC) is possible with UNICORE
- EMI will possibly be deployed on many HTC-driven Grids
- Role towards the European Middleware Initiative (EMI):
  - Stronger support for distributed data and storage technologies
  - Aligning with the key features of other EMI middleware such as ARC & gLite (e.g. pool accounts)
  - Integrating requirements arising from HTC-driven environments

14 Objectives and Migration Plans

15 General Paradigm: Adopting Open Standards
- Adopt and drive the efforts of the OGF PGI-WG
- Data: Storage Resource Manager (SRM), WS-Database Access and Integration Service (DAIS), GridFTP
- Compute: OGSA Basic Execution Service (BES), Job Submission Description Language (JSDL) (a minimal example follows below)
- Security: Security Assertion Markup Language (SAML), X.509 Public Key Infrastructure (PKI)
- Information: Grid Laboratory Uniform Environment (GLUE)
- Production Grid Infrastructure (PGI) WG: interoperability by design & common interfaces, plus feedback and experience from implementations & production use
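To make the compute standards concrete, the sketch below prints a bare-bones JSDL 1.0 job description of the kind an OGSA-BES endpoint consumes. The element names and namespaces follow the published JSDL 1.0 schema; the job itself (running /bin/date) is a placeholder chosen purely for illustration.

    // Minimal JSDL 1.0 job description as consumed by an OGSA-BES
    // endpoint. Namespaces follow the JSDL 1.0 schema; the job
    // itself is an illustrative placeholder.
    public class JsdlExample {
        public static void main(String[] args) {
            String jsdl =
                "<jsdl:JobDefinition\n" +
                "    xmlns:jsdl=\"http://schemas.ggf.org/jsdl/2005/11/jsdl\"\n" +
                "    xmlns:posix=\"http://schemas.ggf.org/jsdl/2005/11/jsdl-posix\">\n" +
                "  <jsdl:JobDescription>\n" +
                "    <jsdl:Application>\n" +
                "      <posix:POSIXApplication>\n" +
                "        <posix:Executable>/bin/date</posix:Executable>\n" +
                "      </posix:POSIXApplication>\n" +
                "    </jsdl:Application>\n" +
                "  </jsdl:JobDescription>\n" +
                "</jsdl:JobDefinition>";
            System.out.println(jsdl);
        }
    }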

16 Migration to Common Client API
- Offer the High Level API (HiLA) as a potential common client API in EMI (see the sketch below)
- Easy programming API with non-UNICORE-specific Grid abstractions (e.g. Grid, Site)
- Potential integration of emerging standards of the OGF Production Grid Infrastructure (PGI) working group
- Access to all PGI-compliant Grid middlewares, and thus to ARC (e.g. A-REX) and gLite (e.g. the Computing Element), once PGI is adopted
- Potential access to PGI-compliant middleware (UNICORE, ARC, gLite, ...) from other available clients as well
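As a rough illustration of what programming against such abstractions could look like, here is a hypothetical sketch: Grid, Site and Task are invented stand-ins for the concepts named on the slide, not the actual HiLA types or signatures.

    // Hypothetical sketch of a HiLA-style client API. The slide names
    // "Grid" and "Site" abstractions; these interfaces and signatures
    // are invented for illustration, not the real HiLA types.
    import java.util.List;

    interface Grid {
        List<Site> getSites();              // discover reachable sites
    }

    interface Site {
        Task submit(String jobDescription); // e.g. a JSDL document
    }

    interface Task {
        String getStatus();                 // e.g. RUNNING, DONE, FAILED
    }

    class ClientSketch {
        // Submit the same job everywhere. With a PGI-compliant backend
        // this loop would look identical whether the sites run UNICORE,
        // ARC or gLite.
        static void runEverywhere(Grid grid, String jsdl) {
            for (Site site : grid.getSites()) {
                Task task = site.submit(jsdl);
                System.out.println(site + " -> " + task.getStatus());
            }
        }
    }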

17 Migration to Common EMI Security Infrastructure
- Take up a common EMI security infrastructure
  - Aligned with the efforts of the OGF PGI working group
- Move away from the Grid Security Infrastructure (GSI)
  - Enables broader access from non-Grid environments (i.e. the Web) & broader tooling support to satisfy industry needs (see the sketch below)
- Offer the Gateway as a common EMI authentication component
  - Potentially merging functionality with the gLite Trust Manager, etc.
- Explore the potential of Shibboleth-based EMI federations
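One practical consequence of dropping GSI proxies is that plain X.509 end-entity certificates can be handled with standard, industry-grade tooling. A minimal sketch using only JDK classes; the certificate path is a placeholder passed on the command line.

    // Minimal sketch: load and validity-check a plain X.509 certificate
    // using only JDK classes -- the kind of standard tooling that
    // becomes usable once proxy-based GSI is dropped.
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;

    public class CertCheck {
        public static void main(String[] args) throws Exception {
            try (InputStream in = new FileInputStream(args[0])) {
                CertificateFactory cf = CertificateFactory.getInstance("X.509");
                X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
                cert.checkValidity(); // throws if expired or not yet valid
                System.out.println("Subject: " + cert.getSubjectX500Principal());
                System.out.println("Issuer:  " + cert.getIssuerX500Principal());
            }
        }
    }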

18 Common Registry Service Objective
- Goal: a common registry service for UNICORE, ARC & gLite (a sketch of the contract follows below)
- Phasing out the WS-RF-based UNICORE Service Registry
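A common registry boils down to a small register/lookup contract. The interface below is a hypothetical sketch of such a contract, not an actual UNICORE, ARC or gLite API; all names are invented for illustration.

    // Hypothetical sketch of a common EMI registry contract: services
    // advertise an endpoint plus the open-standard interface they
    // speak, and clients look services up by that interface type.
    import java.net.URI;
    import java.util.List;

    interface ServiceRegistry {
        void register(ServiceEntry entry);               // advertise a service
        List<ServiceEntry> lookup(String interfaceType); // e.g. "OGSA-BES", "SRM"
    }

    final class ServiceEntry {
        final URI endpoint;
        final String interfaceType; // which standard interface the endpoint implements
        ServiceEntry(URI endpoint, String interfaceType) {
            this.endpoint = endpoint;
            this.interfaceType = interfaceType;
        }
    }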

19 PGI Compliance for Compute & Data Objective
- Take up emerging PGI standards, driven by EMI, for compute and data interfaces that also provide access to gLite & ARC
- PGI interfaces run in parallel to the proprietary UNICORE Atomic Services

20 Common Attribute-Based Authorization
- Take up a common EMI attribute-based authorization service, with support and open interfaces for Virtual Organizations (see the sketch below)
- Push the usage of the Security Assertion Markup Language (SAML)
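In essence, attribute-based authorization decides from attributes an attribute authority (e.g. a VO service, via SAML assertions) has vouched for, rather than from the user's identity alone. A deliberately simplified sketch, with invented attribute names:

    // Deliberately simplified sketch of an attribute-based decision:
    // the user's VO role, as asserted by an attribute authority, is
    // checked against the roles a resource permits. Attribute names
    // and the policy are invented for illustration.
    import java.util.Map;
    import java.util.Set;

    final class AttributeAuthz {
        static boolean permit(Map<String, String> assertedAttributes,
                              Set<String> permittedRoles) {
            String role = assertedAttributes.get("vo-role");
            return role != null && permittedRoles.contains(role);
        }
    }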

21 Moving towards a Potential EMI Architecture. Diagram showing: PGI, Common PGI API, Common VO Service, Common Service Registry, Common Gateway

22 Other Potential Objectives
- Workflow (maybe out of EMI scope, but important)
  - Workflow functionality makes job chains across multiple sites possible
  - The Workflow Engine & Service Orchestrator are a good base for EMI
- Strong execution backend: XNJS and TSI
  - Support for many operating systems and batch systems, continuously developed for ~10 years
- Strong MPI support may (will) become highly relevant for EMI in the "economies of scale": we have reached petascale already

23 Summary

24 Summary of Components of Interest for EMI
- All components are subject to harmonization
- Security
  - UNICORE Gateway (i.e. authentication)
  - UNICORE VO Service (UVOS) (i.e. attribute authority)
  - XACML entity (i.e. attribute-based authorization decisions)
- Compute
  - XNJS, UNICORE Atomic Services & OGSA-BES (i.e. execution)
  - Workflow Engine to be made compliant with the EMI execution interface
- Information
  - Service Registry (i.e. information about available Grid services)
- Data
  - UNICORE Atomic Services (i.e. data)

25 General Summary
- UNICORE is a ready-to-run European Grid technology, including client and server software, highly relevant for EMI
- Provides seamless, secure and intuitive access to different distributed computing and data resources
- All components are available as open source under the BSD license on SourceForge, with support for science and industry
- Traditional role as HPC-driven middleware; more recently also usable in HTC-driven Grid environments (i.e. High Throughput Computing)
- Commitment to open standards, to support a common set of interfaces and protocols across the emerging components of EMI

26 Software, source code, documentation, tutorials, mailing lists, community links, and more: