
1 Running applications on interoperable Grid infrastructures, focusing on OMII-UK supported software - HPC-BP
Interoperability Tutorial, OGF28, Munich
Steve Crouch, David Wallom, Matteo Turilli, Morris Riedel, Shahbaz Memon, Balazs Konya, Gabor Roczei, Peter Stefan, Andrew Grimshaw, Mark Morgan, Katzushige Saga, Justin Bradley, Richard Boardman

2 Objectives
To give participants practical experience of:
o Using individual middleware clients to submit jobs to HPC-BP compliant services
o Using the HPC-BP interop demo framework, used for previous HPC-BP demos, to submit jobs to HPC-BP compliant services
To give participants the opportunity (and a starting point) to learn about:
o Basic techniques and approaches for interoperability – what do I need, and how can I do this?
o Some of the limitations of standards support across middlewares – what can’t I do?

3 Tutorial Approach
‘Presentation-lite’: learn at your own pace via the online web tutorial… …or follow my lead
Pragmatic, and generous in terms of time
The tutorial remains available after OGF28
Ask for help!

4 Schedule
Session 1: Using individual clients to invoke HPC-BP services
o Overview of the demo + demo, Introduction to GridSAM
o Download, Install and Configure GridSAM
o Submit a Trivial Compute-only JSDL Job to HPC-BP Compliant Services
o Download, Build and Configure the BES++ Client
o Running the BES++ Client Against HPC-BP Compliant Services
Session 2: Using the HPC-BP demo framework to invoke multiple HPC-BP services simultaneously
o Download, Install and Configure the Demo Framework
o Running the Demo Against Multiple HPC-BP Compliant Services
o The Demo in Detail: Adding Another Endpoint to the Demo

5 OMII-UK & Open Standards

6 The Interoperability Demonstrator

7 Background
Motivation:
o Researchers are often reaching the limits of locally available resources to conduct research
o They are beginning to realise the potential of using much larger-scale resources
o Compute resources are becoming more numerous and available across Europe
However, using different Grid middleware deployments is traditionally difficult:
o Middleware clients for different deployments are not compatible
o Each requires different security policies/configuration

8 Background
Possible solutions:
o Maintain infrastructure that enables use of a different client for each middleware – interoperation
  Not scalable: user learning curve, operation and maintenance
o Each middleware supports a common service interface, enabled through adoption of accepted open standards – interoperability
  Need only learn, use and maintain a single client infrastructure
  Still leaves security!
What can practically be achieved, in terms of interoperability, with middlewares that adopt OGF compute-related standards?
o What is possible?
o What are the limitations?
Demonstrate this through a proof-of-concept, client-side, application-focused demo

9 History
Initiated by the UK National Grid Service, OMII-UK and FZJ
Initially shown at OGF27, Banff, Canada, Oct 09; then SuperComputing, Nov 09; ETSI Plugtests, FZJ; UK AHM, Dec 09; GIN-CG, OGF28, Mar 10
Demonstrators: David Wallom, Peter Stefan, Morris Riedel/Shahbaz Memon, Steve Crouch
Video available at

10 Compute-Related Standards - OGF
Job Management: OGSA-BES (GFD.108)
HPC Domain-Specific Profile: HPC Basic Profile (GFD.114)
Architecture: OGSA EMS Scenarios (GFD.106)
Use Cases: Grid Scheduling Use Cases (GFD.64)
Education: ISV Primer (GFD.141)
Agreement: WS-Agreement (GFD.107)
Programming Interface: DRMAA (GFD.22/133), SAGA (GFD.90)
Accounting: Usage Record (GFD.98)
Information: GLUE Schema 2.0 (GFD.147)
File Transfer: HPC File Staging (GFD.135)
Job Description: JSDL (GFD.56/136)
Application Description: HPC Application (GFD.111), SPMD Application (GFD.115)
Job Parameterization: Parameter Sweep (GFD.149)
(Diagram relationships between the standards: Extends, Uses, Produces, Describes, Supports, Profiles)

11 Standards/Data Protocols/Security
Supported standards:
o HPC Basic Profile v1.0, which profiles:
  OGSA-BES (Basic Execution Service) v1.0
  JSDL (Job Submission Description Language) v1.0
  HPC Profile Application Extension v1.0
o HPC File Staging Profile – UNICORE, GridSAM
Data protocols:
o UNICORE, ARC, BES++ – ftp
o GridSAM – GridFTP
Security:
o Direct middleware -> certificate CA trust (just import CAs)
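To make the job-description side of this stack concrete, the sketch below builds a minimal JSDL 1.0 document carrying an HPCProfileApplication section, the kind of document submitted to HPC-BP compliant services. The namespaces are those published in GFD.56 and GFD.111; the executable and arguments are illustrative, not the demo's actual job.

```python
# Minimal JSDL 1.0 job definition with an HPC Profile Application
# Extension section (a sketch; executable/arguments are illustrative).
import xml.etree.ElementTree as ET

JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
HPCPA = "http://schemas.ggf.org/jsdl/2006/07/jsdl-hpcpa"
ET.register_namespace("jsdl", JSDL)
ET.register_namespace("jsdl-hpcpa", HPCPA)

def make_jsdl(executable, *args):
    # JobDefinition > JobDescription > Application > HPCProfileApplication
    job = ET.Element(f"{{{JSDL}}}JobDefinition")
    desc = ET.SubElement(job, f"{{{JSDL}}}JobDescription")
    app = ET.SubElement(desc, f"{{{JSDL}}}Application")
    hpc = ET.SubElement(app, f"{{{HPCPA}}}HPCProfileApplication")
    ET.SubElement(hpc, f"{{{HPCPA}}}Executable").text = executable
    for a in args:
        ET.SubElement(hpc, f"{{{HPCPA}}}Argument").text = a
    return ET.tostring(job, encoding="unicode")

print(make_jsdl("/bin/echo", "hello"))
```

Each middleware in the demo (UNICORE, GridSAM, ARC, BES++) accepts a document of this shape through its HPC-BP endpoint, which is what makes a single client sufficient.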

12 Participation
Currently:
o DEISA/FZJ – UNICORE, SuSE, AMD 64-bit, 1 core
o NorduGrid/NIIF – ARC NOX Release, Debian Linux, i686, 16 core
o UK NGS/OMII-UK – GridSAM, Scientific Linux 4.7, AMD 64-bit, 256 core
o NAREGI-NII/Platform Computing – BES++, 2 nodes
Coming soon:
o University of Virginia Campus Grid – GENESIS2, Ubuntu Linux, i686, 8 core
o Poznan Supercomputing Centre – SMOA Computing
The Platform Computing BES++ Client is used as the interop client

13 Example Application: Plasma Charge Minimization
Provided by David Wallom, NGS; an undergraduate project
Total system energy minimization of point charges around the surface of a sphere
Three different applications:
o Pre-processing – generate input files
o Main processing – parallel distributed processing
o Post-processing – choose optimal solution
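The three-stage pipeline above can be sketched in miniature: generate candidate charge configurations (pre-processing), evaluate the electrostatic energy of each (main processing, which the real demo distributes across Grid endpoints), and pick the lowest-energy result (post-processing). This is a toy serial sketch, not the actual Minem code; the candidate counts and charge number are arbitrary.

```python
# Toy version of the Minem pipeline: pre-process, evaluate, post-process.
import math, random, itertools

def random_sphere_points(n, rng):
    # Uniformly sample n points on the unit sphere.
    pts = []
    for _ in range(n):
        z = rng.uniform(-1, 1)
        t = rng.uniform(0, 2 * math.pi)
        r = math.sqrt(1 - z * z)
        pts.append((r * math.cos(t), r * math.sin(t), z))
    return pts

def coulomb_energy(pts):
    # Total system energy: sum of 1/distance over all pairs of unit charges.
    return sum(1 / math.dist(p, q)
               for p, q in itertools.combinations(pts, 2))

rng = random.Random(42)
candidates = [random_sphere_points(8, rng) for _ in range(50)]  # pre-processing
energies = [coulomb_energy(c) for c in candidates]              # main processing
best = min(energies)                                            # post-processing
print(round(best, 3))
```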

14 System Requirements
o Linux – see the Linux client pre-requisites in OMII-UK Development Kit supported platforms
o Sun Java JDK 1.6 or above
o C compiler – gcc and related development libraries
o Lexical analyser – flex
o Parser generator – bison
Soon to appear on OGF Forge – hopefully by end of week


16 Endpoint Configuration
# UNICORE interop config file
endpoint_file=unicore.xml
application_type=HPCProfileApplication
application_type_namespace=http://sc hpcpa
working_dir=
data_mode=ftp
data_input_base=
data_output_base=
minem_install=/tmp/minem
myproxy=no
hpcfsp=yes
hpcfsp_username=interopdata
hpcfsp_password=89zukunft()
auth_utoken=yes
auth_x509=yes
auth_x509_credential=auth/client.pem
auth_x509_keypass=not_used
auth_x509_cert_dir=auth/certificates
auth_utoken_username=ogf
auth_utoken_password=ogf
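Reading the key=value format shown above is straightforward; a sketch of a parser follows. The key names come from the UNICORE example; the exact parsing rules (comment lines starting with '#', blank values allowed) are assumptions about the demo framework's format.

```python
# Sketch of a parser for the demo's endpoint config format.
def parse_endpoint_config(text):
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()  # blank values stay as ""
    return config

sample = """\
# UNICORE interop config file
endpoint_file=unicore.xml
data_mode=ftp
hpcfsp=yes
working_dir=
minem_install=/tmp/minem
"""
cfg = parse_endpoint_config(sample)
print(cfg["data_mode"])
```

One file per endpoint keeps middleware-specific settings (data protocol, credentials, working directory) out of the client logic, which is what lets the same demo script drive UNICORE, GridSAM, ARC and BES++ endpoints.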

17 How it Fits Together…
Components: the BES++ Client drives job services (UNICORE, GridSAM, ARC, BES++) and data services (FTP, GridFTP), with a MyProxy security service and the Minem application.
1. Create Minem input files
2. Generate JSDLs from template
3. Upload input files
4. Submit JSDLs across middlewares
5. Monitor jobs until completion
6. Download output files
7. Select best result
8. Generate/upload image to web server
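Step 2 above, generating JSDLs from a template, amounts to specialising one template document per input file before submission. A minimal sketch, where the template text and placeholder names are illustrative assumptions rather than the demo's actual template:

```python
# Sketch of "Generate JSDLs from template": one JSDL per input file.
from string import Template

JSDL_TEMPLATE = Template("""\
<jsdl:JobDefinition xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl">
  <jsdl:JobDescription>
    <jsdl:JobIdentification>
      <jsdl:JobName>minem-$job_id</jsdl:JobName>
    </jsdl:JobIdentification>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
""")

def generate_jsdls(input_files):
    # Substitute a distinct job id for each input file.
    return [JSDL_TEMPLATE.substitute(job_id=i)
            for i, _ in enumerate(input_files)]

jobs = generate_jsdls(["charges_0.in", "charges_1.in"])
print(len(jobs))
```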

18 The Demo…

19 Future Work
Standards integration:
o Integrate GENESIS II and SMOA Computing
o Replace the BES++ Client with SAGA – a SAGA BES adapter is currently in development!
o Schedule across BES/non-BES endpoints (e.g. Globus)
o GLUE2 (e.g. using OMII-UK Grimoires software) – service discovery (static), dynamic allocation (dynamic)
o Integrate CREAM-BES?
Security:
o ‘Static’ trust set-up of security; proper VO set-up?
o Middleware client ‘audit’ of interoperability? Leads to the ability to configure and use different middleware HPC-BP clients…
Use of HARC for advance reservation
Clean up the code, upload to OGF Forge within GIN-CG
Participation is very much an open process – if you wish to donate an HPC-BP compliant endpoint, please let me know!

20 Verified/Increasing Interoperability – Future Direction
Interface:
o Workflow engine integration – to replace/provide an alternative to the Perl script; Taverna2 is a good candidate
o Application abstraction
Use of endpoints:
o Utilise production-level deployments
o Utilise production-level security
(Chart: abstraction level against use of production-level deployments, moving from Now to Future)

21 Dissemination
Thanks to the OMII-UK publicity machine:
o HPCWire: Interoperability-Goes-Global-79343767.html
o SuperComputing Online: goes-global
o EGEE: [tt_news]=125&tx_ttnews[backPid]=65&cHash=90bb3f97cc
o emagazine/news
o + numerous OMII-UK website articles & UK NGS articles
Just type ‘European Interoperability Goes Global’ into Google…

22 GridSAM
OMII-UK
London e-Science Centre, Imperial College, London
Institute of Computing Technology, Chinese Academy of Sciences (Beijing)

23 GridSAM Overview
What is GridSAM to resource owners?
o A web service to uniformly expose a computational resource:
  Condor (via local or SSH submission)
  Portable Batch Scheduler (PBS) (via local or SSH submission)
  Globus
  Sun GridEngine
  Platform Load Sharing Facility (LSF)
  Single machine through Fork or SSH
o Acts as a client to these resources
What is GridSAM to end-users?
o A means to access computational resources in an open standards-based, uniform way
o A set of end-user command-line tools and client-side APIs to interact with GridSAM Web Services:
  Submit and monitor compute jobs
  Cross-protocol file transfer (gsiftp, ftp, sftp, WebDAV, http, https; soon SRB, iRODS) via Commons-VFS

24 Supported OGF Standards
OGSA Basic Execution Service (BES) v1.0
JSDL v1.0
HPC Basic Profile v1.0
HPC Profile Application Extension v1.0
HPC File Staging Profile v1.0
HPC Common Case Profile: Activity Credential v0.1
JSDL SPMD Application Extension v1.0

25 GridSAM – Publications & Enabled Activities
+ in 2009/2010 – ICHEC Bioinformatics Portal, eSysBio, NAREGI/RENKEI

26 For Resource Owners…
The GridSAM Service sits in front of a DRM – one of: PBS (Torque/OpenPBS/PBSPro), LSF, Condor, Sun GridEngine, Globus, Fork
Requires: Linux (many flavours: RHEL 3/4/5, Fedora 7/8, Scientific Linux 4) + Java JDK 1.5.0+; Tomcat (5.0.23, 5.0.28, 5.5.23) with Axis v1.2.1; an X509 certificate
Persistence provided by one of: Hypersonic, PostgreSQL, or an existing MySQL

27 For End-Users…
The GridSAM Client (Windows/Linux + Java, using Axis; many flavours: RHEL 3/4/5, Fedora 7/8, Debian, Ubuntu, Scientific Linux 4, Windows XP, Windows Vista; Java JDK 1.5.0+) submits JSDL to the GridSAM Service over HTTPS/HTTP
Service interface – any/all of: GridSAM native interface, OGSA-BES v1.0, HPC Basic Profile v1.0
Security: WS-Security (X509 or user/password); X509 certificate; Globus-style proxy certificate + MyProxy credentials (for Globus/GridFTP)
A generic BES/HPC Basic Profile client can also target the OGSA-BES and HPC Basic Profile interfaces

28 Open Community Development
GridSAM is Open Source, Open Community Development
GridSAM SourceForge project:
o 99.03% activity, 1 release/month
o SVN source code repository
o Developer & discuss mailing lists
/projects/gridsam/

29 GridSAM Architecture, e.g. with Condor
A staged event-driven architecture:
o The submission pipeline is constructed as a network of stages connected by event queues
o Each stage performs a specific action upon incoming events
Example pipeline: Condor
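The staged event-driven idea above can be sketched directly: stages joined by queues, each stage transforming the events it receives. This is a minimal serial illustration of the pattern, not GridSAM's implementation; the stage names are hypothetical placeholders for a Condor submission pipeline.

```python
# Minimal sketch of a staged event-driven pipeline: stages connected
# by event queues, each stage acting on incoming events.
from queue import Queue

def run_pipeline(stages, events):
    # One queue feeds each stage; the final queue collects results.
    queues = [Queue() for _ in range(len(stages) + 1)]
    for e in events:
        queues[0].put(e)
    for i, stage in enumerate(stages):
        while not queues[i].empty():
            queues[i + 1].put(stage(queues[i].get()))
    return [queues[-1].get() for _ in range(queues[-1].qsize())]

# Hypothetical stages of a Condor-style submission pipeline:
stages = [
    lambda job: {**job, "state": "staged-in"},   # file stage-in
    lambda job: {**job, "state": "submitted"},   # hand-off to the DRM
    lambda job: {**job, "state": "done"},        # stage-out / completion
]
result = run_pipeline(stages, [{"id": 1}, {"id": 2}])
print(result)
```

Decoupling stages through queues is what lets a real SEDA implementation run each stage with its own thread pool and absorb load bursts per stage.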

30 Planned Future Developments
For end-users:
o Full support for JSDL resource selection across PBS, Globus, Condor & Fork DRMs
o JSDL Parameter Sweep Extension
o Support for SRB and iRODS
For resource owners:
o LCAS/LCMAPS support
o Packaging option as a standalone, manually configurable web archive (WAR) file
o Direct PBS deployment throughout NGS sites

31 The tutorial begins… all you need is to go to:
