FermiCloud Project: Integration, Development, Futures
Gabriele Garzoglio, Associate Head, Grid & Cloud Computing Department, Fermilab
Work supported by the U.S. Department of Energy under contract No. DE-AC02-07CH11359

FermiCloud Development Goals
Goal: make virtual machine-based workflows practical for scientific users:
- Cloud bursting: send virtual machines from the private cloud to a commercial cloud if needed.
- Grid bursting: expand grid clusters to the cloud based on demand for batch jobs in the queue.
- Federation: let a set of users operate across different clouds.
- Portability: move virtual machines from desktop to FermiCloud to commercial cloud and back.
- Fabric studies: enable access to hardware capabilities via virtualization (100G, InfiniBand, …).

Overlapping Phases
- Phase 1: "Build and Deploy the Infrastructure"
- Phase 2: "Deploy Management Services, Extend the Infrastructure and Research Capabilities"
- Phase 3: "Establish Production Services and Evolve System Capabilities in Response to User Needs & Requests"
- Phase 4: "Expand the service capabilities to serve more of our user communities"
(Timeline diagram: the phases overlap in time, with a "Today" marker.)

FermiCloud Phase 4: "Expand the service capabilities to serve more of our user communities"
- Complete the deployment of the true multi-user filesystem on top of a distributed & replicated SAN.
- Demonstrate interoperability and federation:
  - Accepting VMs as batch jobs,
  - Interoperation with other Fermilab virtualization infrastructures (GPCF, VMware),
  - Interoperation with the KISTI cloud, Nimbus, Amazon EC2, other community and commercial clouds.
- Participate in the Fermilab 100 Gb/s network testbed:
  - Have just taken delivery of 10 Gbit/s cards.
- Perform more "virtualized MPI" benchmarks and run some real-world scientific MPI codes:
  - The priority of this work will depend on finding a scientific stakeholder that is interested in this capability.
- Reevaluate available open source cloud computing stacks:
  - Including OpenStack,
  - We will also reevaluate the latest versions of Eucalyptus, Nimbus and OpenNebula.
(Specifications being finalized.)

FermiCloud Phase 5
- See the program of work in the (draft) Fermilab-KISTI FermiCloud CRADA.
- This phase will also incorporate any work or changes that arise out of the Scientific Computing Division strategy on Clouds and Virtualization.
- Additional requests and specifications are also being gathered.
(Specifications under development.)

Proposed KISTI Joint Project (CRADA)
- Partners: the Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center (GSDC).
- Project title: Integration and Commissioning of a Prototype Federated Cloud for Scientific Workflows.
- Status: finalizing the CRADA for DOE and KISTI approval; the work is intertwined with all goals of Phase 4 (and potentially future phases).
- Three major work items:
  1. Virtual Infrastructure Automation and Provisioning,
  2. Interoperability and Federation of Cloud Resources,
  3. High-Throughput Fabric Virtualization.
- Note that this is potentially a multi-year program: the work during FY13 is a "proof of principle".

Virtual Infrastructure Automation and Provisioning [CRADA Work Item 1]
- Accepting virtual machines as batch jobs via cloud APIs.
- Grid and cloud bursting.
- Launch pre-defined user virtual machines based on workload demand.
- Completion of idle machine detection and resource reclamation:
  - Detect when a VM is idle – DONE Sep '12,
  - Convey VM status and apply policy to affect the state of the cloud – TODO.

Virtual Machines as Jobs
- OpenNebula (like all the other open-source IaaS stacks) provides an emulation of Amazon EC2.
- The Condor team has added code to their "Amazon EC2" universe to support the X.509-authenticated protocol.
- Planned use case: GlideinWMS running Monte Carlo on clouds, public and private.
- The feature already exists; this is a testing/integration task only (see the EC2-endpoint sketch below).
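A minimal sketch of what "driving a private cloud through its EC2 emulation" looks like from a client, here with the boto library pointed at an EC2-compatible endpoint such as OpenNebula's econe service. This is not the Condor or GlideinWMS integration code; the host name, port, AMI id and credentials are placeholders.

```python
# Launch one instance on an EC2-compatible private-cloud endpoint (sketch only).
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

def launch_worker_vm(access_key, secret_key, host, ami_id):
    """Start one instance via the EC2 Query API emulation of a private cloud."""
    region = RegionInfo(name="private-cloud", endpoint=host)
    conn = EC2Connection(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region=region,
        port=4567,          # typical econe-server port; site dependent
        path="/",
        is_secure=False,    # the X.509-authenticated setup would use HTTPS instead
    )
    reservation = conn.run_instances(ami_id, instance_type="m1.small")
    return reservation.instances[0].id

# Example with placeholder values:
# vm_id = launch_worker_vm("AKIA...", "secret", "cloud.example.org", "ami-00000001")
```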

Grid Bursting
- Seo-Young Noh (KISTI/FNAL) showed a proof-of-principle of "vCluster" in summer 2011:
  - Look ahead at the Condor batch queue (see the sketch below),
  - Submit worker-node virtual machines of various VOs to FermiCloud or Amazon EC2 based on user demand,
  - Machines join the grid cluster and run grid jobs from the matching VO.
- Need to strengthen the proof-of-principle, then make cloud slots available to FermiGrid.
- Several other institutions have expressed interest in extending vCluster to other batch systems such as Grid Engine.
- KISTI staff has a program of work for the development of vCluster.
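A hedged sketch of the vCluster-style "look ahead" step: poll the Condor queue for idle jobs and decide how many worker-node VMs to start. The condor_q options used are standard, but the AccountingGroup-based VO match, the thresholds and the sizing rule are illustrative assumptions, not the vCluster code itself.

```python
import subprocess

def count_idle_jobs(vo_name):
    """Count idle (JobStatus == 1) jobs whose AccountingGroup starts with the VO name
    (the attribute choice is an assumption about how jobs are tagged)."""
    constraint = 'JobStatus == 1 && regexp("^%s", AccountingGroup)' % vo_name
    out = subprocess.check_output(
        ["condor_q", "-constraint", constraint, "-format", "%d\n", "ClusterId"])
    return len(out.splitlines())

def vms_to_start(vo_name, jobs_per_vm=8, max_new_vms=10):
    """Decide how many worker-node VMs to submit for this VO's idle backlog."""
    idle = count_idle_jobs(vo_name)
    return min(max_new_vms, idle // jobs_per_vm)

# The actual submission step would then start that many worker-node images on
# FermiCloud or Amazon EC2, e.g. with an EC2-style call like the earlier sketch.
```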

vCluster at SC (figure slide: vCluster demonstration screenshot/diagram).

Cloud Bursting
- OpenNebula already has a built-in "cloud bursting" feature to send machines to Amazon EC2 if the OpenNebula private cloud is full.
- Need to evaluate/test it and see if it meets our technical and business requirements, or if something else is necessary.
- Need to test interoperability against other stacks.

True Idle VM Detection
- In times of resource need, we want the ability to suspend or "shelve" idle VMs in order to free up resources for higher-priority usage.
  - This is especially important in the event of constrained resources (e.g. during building or network failure).
- Shelving of "9x5" and "opportunistic" VMs allows us to use FermiCloud resources for Grid worker-node VMs during nights and weekends.
  - This is part of the draft economic model.
- Giovanni Franzini (an Italian co-op student) has written (extensible) code for an "Idle VM Probe" that can be used to detect idle virtual machines based on CPU, disk I/O and network I/O (see the sketch below).
- This is the biggest pure coding task left in the FermiCloud project; if the KISTI joint project is approved, it is a good candidate for a 3-month consultant.
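A minimal sketch of an idle probe in the spirit described above: sample CPU, disk I/O and network I/O over an interval and report whether the machine looks idle. This is not Giovanni Franzini's probe; the thresholds and the psutil dependency are illustrative assumptions.

```python
import psutil

def is_idle(sample_seconds=60, cpu_pct_max=5.0, disk_bytes_max=1 << 20, net_bytes_max=1 << 20):
    """Return True if CPU, disk and network activity all stay below the thresholds."""
    disk0, net0 = psutil.disk_io_counters(), psutil.net_io_counters()
    cpu_pct = psutil.cpu_percent(interval=sample_seconds)   # blocks while sampling
    disk1, net1 = psutil.disk_io_counters(), psutil.net_io_counters()

    disk_bytes = (disk1.read_bytes - disk0.read_bytes) + (disk1.write_bytes - disk0.write_bytes)
    net_bytes = (net1.bytes_recv - net0.bytes_recv) + (net1.bytes_sent - net0.bytes_sent)
    return cpu_pct < cpu_pct_max and disk_bytes < disk_bytes_max and net_bytes < net_bytes_max

if __name__ == "__main__":
    print("idle" if is_idle() else "busy")
```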

Idle VM Information Flow
(Diagram: VMs report raw state through an Idle VM Collector into a Raw VM State DB; the Idle VM Logic produces an Idle VM List; an Idle VM Trigger in the Idle VM Management Process initiates Idle VM Shutdown.)

Interoperability and Federation [CRADA Work Item 2]
- Driver: global scientific collaborations such as the LHC experiments will have to interoperate across facilities with heterogeneous cloud infrastructure.
- European efforts:
  - EGI Cloud Federation Task Force – several institutional clouds (OpenNebula, OpenStack, StratusLab),
  - HelixNebula – federation of commercial cloud providers.
- Our goals:
  - Show a proof of principle: federation including FermiCloud + KISTI "G Cloud" + one or more commercial cloud providers + other research-institution community clouds if possible,
  - Participate in existing federations if possible.
- Core competency: the FermiCloud project can contribute to these cloud federations given our expertise in X.509 authentication and authorization, and our long experience in grid federation.

Virtual Image Formats
- Different clouds have different virtual machine image formats: file system only, or with partition table, LVM volumes, kernel?
- Have to identify the differences and find tools to convert between them if necessary (see the conversion sketch below).
- This is an integration/testing issue, not expected to involve new coding.
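A hedged illustration of disk-image format conversion between clouds using qemu-img (a standard tool, though not named on the slide). File names and formats are placeholders; whether a given cloud also needs partition-table or kernel changes is exactly the open question above and is not addressed here.

```python
import subprocess

def convert_image(src_path, dst_path, src_fmt="qcow2", dst_fmt="raw"):
    """Convert a VM disk image from one format to another with qemu-img."""
    subprocess.check_call([
        "qemu-img", "convert",
        "-f", src_fmt,      # input format
        "-O", dst_fmt,      # output format
        src_path, dst_path,
    ])

# Example: a qcow2 image converted to a raw image suitable for another stack.
# convert_image("/var/lib/one/images/sl6-worker.qcow2", "/tmp/sl6-worker.img")
```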

Interoperability/Compatibility of APIs
- The Amazon EC2 API is not open source; it is a moving target that changes frequently.
- Open-source emulations have various feature levels and accuracy of implementation:
  - Compare and contrast OpenNebula, OpenStack, and commercial clouds,
  - Identify the lowest common denominator(s) that work on all (see the probe sketch below).
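A sketch of how one might survey which EC2 calls an emulation layer actually supports: issue a set of read-only API calls against each endpoint and record which succeed. The connection object would be built as in the earlier boto sketch; the call list here is only a sample, not a complete compatibility matrix.

```python
from boto.exception import EC2ResponseError

READ_ONLY_CALLS = {
    "DescribeInstances": lambda c: c.get_all_instances(),
    "DescribeImages":    lambda c: c.get_all_images(),
    "DescribeKeyPairs":  lambda c: c.get_all_key_pairs(),
}

def probe_endpoint(conn):
    """Return a dict mapping EC2 call name -> True/False for one EC2-style connection."""
    results = {}
    for name, call in READ_ONLY_CALLS.items():
        try:
            call(conn)
            results[name] = True
        except EC2ResponseError:
            results[name] = False
    return results
```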

VM Image Distribution
- Investigate existing image marketplaces (HEPiX, U. of Victoria).
- Investigate if we need an Amazon S3-like storage/distribution method for OS images:
  - OpenNebula doesn't have one at present,
  - A GridFTP "door" to the OpenNebula VM library is a possibility; this could be integrated with an automatic security-scan workflow using the existing Fermilab NESSUS infrastructure (a client-side sketch follows).
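A hedged sketch of what fetching an image through a GridFTP "door" could look like from the client side, using the standard globus-url-copy tool. The host name and library path are invented placeholders; the door itself is only a possibility discussed above, not an existing service.

```python
import subprocess

def fetch_image(gridftp_url, local_path):
    """Copy a VM image from a GridFTP server to local disk (requires a valid grid proxy)."""
    subprocess.check_call(["globus-url-copy", gridftp_url, "file://" + local_path])

# Example (placeholder URL and path):
# fetch_image("gsiftp://fcl-images.example.org/vmlib/sl6-worker.img", "/tmp/sl6-worker.img")
```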

High-Throughput Fabric Virtualization [CRADA Work Item 3]
- Follow up on the earlier virtualized MPI work:
  - Use it in real scientific workflows,
  - Example – simulation of data acquisition systems (the existing FermiCloud InfiniBand fabric has already been used for such).
- Will also use FermiCloud machines on the 100 Gbit Ethernet test bed:
  - Evaluate / optimize virtualization of the 10G NIC for the use case of HEP data management applications,
  - Compare and contrast against InfiniBand.

Long Term Vision [another look at some work in the FermiCloud Project phases]
- FY10-12: batch system support (CDF) at KISTI; proof-of-principle "Grid Bursting" via vCluster on FermiCloud; demonstration of virtualized InfiniBand.
- FY13: demonstration of VM image transfer across clouds; proof-of-principle launch of a VM as a job; investigation of options for federation interoperations.
- FY14: integration of vCluster for dynamic resource provisioning; infrastructure development for federation interoperations; broadening of supported fabric utilization.
- FY15: transition of the federated infrastructure to production; full support of established workflow fabric utilization; fully automated provisioning of resources for scientific workflows.
(Timeline diagram with a "Today" marker.)

High-level Architecture
(Diagram: workflows are submitted to a job/VM queue; vCluster (FY14) provisions through two cloud APIs behind a federated authorization layer; other components include an image repository, VM idle detection and resource reclamation, cloud federation, and VMs on virtualized InfiniBand and 100 Gbps networking.)

OpenNebula Authentication
- OpenNebula came with "pluggable" authentication, but few plugins were initially available.
- OpenNebula 2.0 web services by default used an "access key" / "secret key" mechanism similar to Amazon EC2; no HTTPS available.
- Four ways to access OpenNebula (see the XML-RPC sketch below):
  - Command line tools,
  - Sunstone web GUI,
  - "ECONE" web service emulation of the Amazon RESTful (Query) API,
  - OCCI web service.
- The FermiCloud project wrote X.509-based authentication plugins:
  - Patches to OpenNebula to support this were developed at Fermilab and submitted back to the OpenNebula project in Fall 2011 (generally available in OpenNebula V3.2 onwards),
  - X.509 plugins are available for command line and for web services authentication.
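A minimal sketch of talking to the OpenNebula XML-RPC core directly, which is what the command-line tools and the web-service layers do underneath. The endpoint URL is a placeholder; the assumption here is that with the X.509 driver the "password" part of the session string is a login token rather than a plain password (see the X.509 Authentication slide near the end).

```python
try:
    import xmlrpclib                    # Python 2, contemporary with the slides
except ImportError:
    import xmlrpc.client as xmlrpclib   # Python 3

def list_my_vms(user, token, endpoint="http://opennebula.example.org:2633/RPC2"):
    """Return the raw XML describing the calling user's VMs."""
    server = xmlrpclib.ServerProxy(endpoint)
    session = "%s:%s" % (user, token)
    # one.vmpool.info(session, filter, range_start, range_end, vm_state)
    # filter -3 = only the connected user's resources; -1/-1 = whole id range; -1 = any state
    ok, body, _errno = server.one.vmpool.info(session, -3, -1, -1, -1)
    if not ok:
        raise RuntimeError(body)
    return body
```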

Grid AuthZ Interoperability Profile
- Uses XACML 2.0 to specify DN, CA, hostname, FQAN, FQAN signing entity, and more (an illustrative request follows below).
- Developed in 2007; (still) used in the Open Science Grid and in EGEE.
- Currently in the final stage of OGF standardization.
- Java and C bindings available for authorization clients:
  - The most commonly used C binding is LCMAPS,
  - Used to talk to GUMS, SAZ, and SCAS.
- Allows one user to be part of different Virtual Organizations and have different groups and roles.
- For cloud authorization we will configure GUMS to map back to individual user names, one per person:
  - Each personal account in OpenNebula is created in advance.
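An illustrative (not profile-exact) XACML 2.0 request of the kind exchanged with GUMS/SCAS under the AuthZ Interoperability Profile: it carries a subject DN, an FQAN, a resource id and an action. The FQAN attribute id below is an assumed placeholder; the real profile defines its own attribute identifiers, and the DN, FQAN and resource values are invented examples.

```python
XACML_REQUEST_TEMPLATE = """\
<Request xmlns="urn:oasis:names:tc:xacml:2.0:context:schema:os">
  <Subject>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>{dn}</AttributeValue>
    </Attribute>
    <Attribute AttributeId="http://example.org/xacml/subject/voms-fqan"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>{fqan}</AttributeValue>
    </Attribute>
  </Subject>
  <Resource>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>{resource}</AttributeValue>
    </Attribute>
  </Resource>
  <Action>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>access</AttributeValue>
    </Attribute>
  </Action>
  <Environment/>
</Request>
"""

request_xml = XACML_REQUEST_TEMPLATE.format(
    dn="/DC=org/DC=example/OU=People/CN=Example User",
    fqan="/fermilab/Role=NULL/Capability=NULL",
    resource="https://fermicloud-head.example.org",
)
```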

X.509 Authorization
- The OpenNebula authorization plugins are written in Ruby.
- They use existing grid routines to call out to external GUMS and SAZ authorization servers:
  - Use a Ruby-C binding to call the C-based LCMAPS routines, or
  - Use a Ruby-Java bridge to call the Java-based routines from the Privilege project.
- GUMS returns a uid/gid, SAZ returns yes/no (see the flow sketch below).
- Works with the OpenNebula command line and non-interactive web services.
- Tricky part – how to load a user credential with extended attributes into a web browser? It can be done, but with a high degree of difficulty (PKCS12 conversion of a VOMS proxy with the full certificate chain included). We have a side project to make it easier/automated.
- Currently we have a proof-of-principle running on our demo system, command line only.
- In frequent communication with the LCMAPS developers at NIKHEF and the OpenNebula developers.
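A hedged sketch of the authorization decision described above, in Python rather than the project's Ruby plugins. query_saz() and query_gums() are hypothetical placeholders standing in for the real LCMAPS / Privilege-project calls; only the overall flow (SAZ yes/no gate, then GUMS DN+FQAN to local-account mapping) follows the slide.

```python
def query_saz(dn, fqan):
    """Placeholder for the real SAZ call (site-wide allow/deny)."""
    raise NotImplementedError("call SAZ via LCMAPS / the Privilege project here")

def query_gums(dn, fqan):
    """Placeholder for the real GUMS call (DN + FQAN -> local account)."""
    raise NotImplementedError("call GUMS via LCMAPS / the Privilege project here")

def authorize(dn, fqan):
    """Return the local account to use for this credential, or raise if access is denied."""
    if not query_saz(dn, fqan):
        raise RuntimeError("SAZ denied access for %s" % dn)
    account = query_gums(dn, fqan)
    if account is None:
        raise RuntimeError("GUMS has no mapping for %s with %s" % (dn, fqan))
    return account   # the matching OpenNebula account is created in advance, one per person
```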

Virtualized Storage Service Investigation
Motivation:
- General-purpose systems from various vendors are being used as file servers,
- Systems can have many more cores than needed to perform the file service,
- Cores go unused => inefficient power, space and cooling usage,
- Custom configurations => complicates sparing issues.
Question:
- Can virtualization help here? What (if any) is the virtualization penalty?

Virtualized Storage Server Test Procedure
Evaluation: use IOzone and real physics ROOT-based analysis code (an IOzone invocation sketch follows).
Phase 1:
- Install the candidate filesystem on a "bare metal" server,
- Evaluate performance using a combination of bare-metal and virtualized clients (varying the number),
- Also run client processes on the "bare metal" server,
- Determine the "bare metal" filesystem performance.
Phase 2:
- Install the candidate filesystem on a virtual machine server,
- Evaluate performance using a combination of bare-metal and virtualized clients (varying the number),
- Also use client virtual machines hosted on the same physical machine as the virtual machine server,
- Determine the virtual machine filesystem performance.
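A hedged sketch of driving one IOzone client in such a test. The flags shown (-i 0 write/rewrite, -i 1 read/reread, record and file sizes, -f target file) are standard IOzone options, but the sizes, mount point and client counts used in the actual FermiCloud evaluation are not reproduced here.

```python
import subprocess

def run_iozone(target_dir, file_size="4g", record_size="1m"):
    """Run a sequential write+read IOzone pass against a mounted filesystem."""
    cmd = [
        "iozone",
        "-i", "0",                 # test 0: write / rewrite
        "-i", "1",                 # test 1: read / reread
        "-s", file_size,           # per-file size; should exceed client RAM to defeat caching
        "-r", record_size,         # record (I/O block) size
        "-f", target_dir + "/iozone.tmp",
    ]
    return subprocess.check_output(cmd)

# Example: run against a Lustre (or BlueArc, Hadoop-fuse, OrangeFS) mount point.
# print(run_iozone("/mnt/lustre-test"))
```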

FermiCloud Test Bed – "Bare Metal" Server
(Test-bed diagram.)
Lustre server (bare metal):
- Lustre on SL; 3 OSS (different striping); FCL: 3 data & 1 name node; Dom0: 8 CPU, 24 GB RAM,
- CPU: dual, quad-core Xeon 2.67 GHz with 12 MB cache; 24 GB RAM,
- Disk: 6 SATA disks in RAID 5 for 2 TB, plus 2 system disks (hdparm: … MB/sec),
- 1 GbE + InfiniBand cards.
ITB / FCL external clients (7 nodes – 21 VMs):
- CPU: dual, quad-core Xeon 2.66 GHz with 4 MB cache; 16 GB RAM,
- 3 Xen SL5 VMs per machine; 2 cores / 2 GB RAM each,
- BlueArc mount.

FermiCloud Test Bed – Virtualized Server
(Test-bed diagram.)
Virtualized storage server host:
- Dom0: 8 CPU, 24 GB RAM; 2 TB on 6 disks; FCL: 3 data & 1 name node,
- 7 on-board client VMs plus an optional storage server VM,
- 8 KVM VMs per machine; 1 core / 2 GB RAM each,
- BlueArc mount.
ITB / FCL external clients (7 nodes – 21 VMs):
- CPU: dual, quad-core Xeon 2.66 GHz with 4 MB cache; 16 GB RAM,
- 3 Xen SL5 VMs per machine; 2 cores / 2 GB RAM each.

Virtualized File Service Results Summary
See the ISGC '11 talk and proceedings for the details.

Filesystem | Benchmark  | Read (MB/s) | "Bare Metal" Write (MB/s) | VM Write (MB/s) | Notes
Lustre     | IOzone     | …           | …                         | …               | Significant write penalty when the FS is on a VM
Lustre     | Root-based | …           | …                         | …               |
Hadoop     | IOzone     | …           | …                         | …               | Varies with number of replicas; fuse does not export a full POSIX FS
Hadoop     | Root-based | 7.9         | -                         | -               |
OrangeFS   | IOzone     | …           | …                         | …               | Varies with number of name nodes
OrangeFS   | Root-based | 8.1         | -                         | -               |
BlueArc    | IOzone     | 300         | 330                       | n/a             | Varies with system conditions
BlueArc    | Root-based | …           | …                         | …               |

Summary
- Collaboration:
  - Leveraging external work as much as possible,
  - Contributing our work back to external collaborations.
- Using (and if necessary extending) existing standards: AuthZ, OGF UR, Gratia, etc.
- Relatively small amount of development FTEs:
  - Has placed FermiCloud in a potential leadership position!
- CRADA with KISTI:
  - Could be the beginnings of a beautiful relationship…

Thank You! Any Questions?

X.509 Authentication – How It Works
- Command line:
  - The user creates an X.509-based token using the "oneuser login" command,
  - This makes a base64 hash of the user's proxy and certificate chain, combined with a username:expiration date, signed with the user's private key (see the sketch below).
- Web services:
  - The web services daemon contacts the OpenNebula XML-RPC core on the user's behalf, using the host certificate to sign the authentication token,
  - Use Apache mod_ssl or gLite's GridSite to pass the grid certificate DN (and optionally FQAN) to the web services.
- Limitations: with web services, one DN can map to only one user.
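A hedged illustration of the token construction the slide describes: sign a "username:expiration" string with the user's private key and base64-encode it together with the certificate chain. This mirrors the description above, not the exact byte layout OpenNebula's x509 driver expects; the key and certificate-chain paths are placeholders for a grid proxy and its chain.

```python
import base64
import subprocess
import time

def make_login_token(username, key_path, cert_chain_path, lifetime_s=3600):
    """Build a signed, base64-encoded login token in the spirit of 'oneuser login'."""
    expiration = int(time.time()) + lifetime_s
    text = "%s:%d" % (username, expiration)
    # Sign the text with the user's private key (standard openssl invocation).
    proc = subprocess.Popen(
        ["openssl", "dgst", "-sha256", "-sign", key_path],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    signature, _ = proc.communicate(text.encode())
    with open(cert_chain_path, "rb") as f:
        chain = f.read()
    # Bundle the signature and certificate chain, base64-encoded for transport.
    return base64.b64encode(signature + b"\n" + chain)
```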

"Authorization" in OpenNebula
- Note: OpenNebula has pluggable "authorization" modules as well.
- These control access ACLs – namely, which user can launch a virtual machine, create a network, store an image, etc.
- They focus on privilege management rather than the typical grid-based notion of access authorization.
- Therefore, we make our "authorization" additions to the authentication routines of OpenNebula.