Design Discussion Rain: Dynamically Provisioning Clouds within FutureGrid

Presentation transcript:

Design Discussion Rain: Dynamically Provisioning Clouds within FutureGrid

PI: Geoffrey Fox*; Co-PIs: Kate Keahey+, Warren Smith-, Jose Fortes#, Andrew Grimshaw$; Software Lead: Gregor von Laszewski*
* Indiana University, + University of Chicago, - Texas Advanced Computing Center, # University of Florida, $ University of Virginia; with the San Diego Supercomputer Center, Information Sciences Institute, Purdue University, University of Tennessee, and Technical University Dresden

Introduction

FutureGrid is an NSF-funded project that provides an experimental platform accommodating batch, grid, and cloud computing, allowing researchers to attack a range of research questions associated with optimizing, integrating, and scheduling the different service models. FutureGrid will provide a significant new experimental computing grid and cloud test-bed to the research community, together with user support for third-party researchers conducting experiments on FutureGrid. The test-bed includes a geographically distributed set of heterogeneous computing systems, a data management system that will hold both metadata and a growing library of software images, and a dedicated network allowing isolatable, secure experiments.

Dynamic Provisioning

This work presents a novel method for dynamically provisioning cloud services onto HPC resources provided through the FutureGrid project. Dynamic provisioning enables scientific researchers to build advanced platforms and services on FutureGrid infrastructure and to leverage the power of HPC in a way that was not possible with traditional grid infrastructure. The Runtime Adaptable INsertion (RAIN) service lets researchers use cloud infrastructure to deploy their own platform and operating environment with ease, something previously impossible on available HPC systems. Our tests show that an entire resource can be re-provisioned in a very short time, which, together with its added benefits and ease of use, argues for adopting RAIN. This advance may finally lower the entry barrier into HPC for the many scientists whose requirements were too large, and whose means too small, to take advantage of HPC resources.

RAIN Service Architecture

Dynamic provisioning is a key building block in the overall design of the FutureGrid (FG) project. Within FutureGrid, both infrastructure and platforms can be provisioned, or "rained", on demand onto commodity hardware, according to the software stack and operating environment that suit each user best. This capability is pivotal in making the FG deployment unique and desirable to scientific researchers who need high-performance computing today. In essence, we host both infrastructure and platforms within a service-oriented architecture.

We define raining infrastructure as the rapid deployment of Infrastructure-as-a-Service (IaaS): a service that lets users gain access to, and fully manage, a compute infrastructure suited to their needs. Most commonly this means managing a set of virtual machine images; virtualization allows the data center to host many virtualized servers on the same hardware. Examples of IaaS, particularly within FG, are the Nimbus and Eucalyptus clouds. Raining platforms through Platform-as-a-Service (PaaS) takes this a level higher: it delivers services that integrate a computing platform and/or a solution stack to support the development of cloud applications. The platform thus adds significant capability on top of the infrastructure, reducing the cost and complexity of developing software on a bare cloud. Examples of PaaS deployments that FG plans to support include Hadoop, Twister, and possibly other message-queue-based systems as demand grows.

To offer dynamic provisioning to users, not one but several different tools must be integrated to create a seamless experience. We have identified xCAT as the best fit for bare-metal OS deployment: with xCAT we can provision a wide array of operating systems onto the available resources, providing the environment a user wants with ease. While xCAT is well suited to OS deployment, an additional layer is needed to manage the provisioning of these resources and the scheduling of work onto them. Adaptive Computing's Moab suite provides an elevated queue that accepts tasks and drives xCAT to provision resources to meet the needs of the queue. Using Moab with xCAT and our own RAIN services can provide dynamic provisioning and adaptation of resources within a particular site-wide deployment on FutureGrid.
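The xCAT-plus-Moab workflow described above can be illustrated with a short script. The sketch below is not FutureGrid's actual RAIN code; it simply drives the standard xCAT client commands (nodeset, rpower, nodestat) from Python, and the node and image names are placeholders chosen for the example.

```python
#!/usr/bin/env python
"""Minimal sketch of xCAT-driven bare-metal provisioning.

Assumes the xCAT client commands (nodeset, rpower, nodestat) are
installed and configured to reach the xCAT management node.  The node
and osimage names are illustrative placeholders, not real FutureGrid
identifiers.
"""
import subprocess
import sys
import time


def run(cmd):
    """Run an xCAT CLI command, echoing it and failing on non-zero exit."""
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)


def provision(node, osimage, timeout=1800):
    """Point a node at an OS image, reboot it, and wait until it is up."""
    run(["nodeset", node, "osimage=" + osimage])   # select the image
    run(["rpower", node, "boot"])                  # network-boot into it
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = subprocess.check_output(["nodestat", node]).decode()
        if "sshd" in status:                       # node reports sshd running
            print(node + " is up: " + status.strip())
            return
        time.sleep(30)
    sys.exit("%s did not come up within %d seconds" % (node, timeout))


if __name__ == "__main__":
    provision("node042", "centos5-x86_64-netboot-compute")
```

In the architecture above, Moab rather than an operator would trigger steps like these whenever its queue indicates that a different operating environment is required.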
With the creation and use of a wide variety of UNIX-based operating systems, a configuration management system is needed to keep everything working properly. This includes managing and updating installed software, applying security patches, maintaining configuration files, and adding host keys and certificates on the fly to xCAT-provisioned nodes and newly created virtual machines. It provides a seamless environment for users as well as system administrators.

With the vast array of virtual clusters and private clouds, a number of head nodes are required to manage each system. While these head nodes are not computationally intensive, they do need dedicated resources of their own; this includes head nodes for Nimbus clouds, Eucalyptus clouds, a PBS queue, and any other user-determined distributed systems.

It is important to note that none of Moab, xCAT, BCFG2, or the other tools often referred to by members of the project can provide the functionality needed for FG on its own. From an implementation view, each supplies a portion of the functionality, and we will see how these tools can assist in building the FG architecture. Together, they form the building blocks of our RAIN service.

Implementation

The dynamic provisioning software architecture was deployed onto a FutureGrid test platform called Gravel as well as onto the production-level Sierra and India clusters. On Gravel the dynamic provisioning scenario was tested with VirtualBox virtual machines: the xCAT VirtualBox plugin managed the power attributes of the VMs, and the Moab service manager's xCAT plugin was modified to support VirtualBox VMs, emulating real FutureGrid infrastructure and providing an ideal development platform. The system was tested with various RHEL 5, CentOS 5, and Fedora images, using stateful and stateless installs of each, to obtain preliminary performance results.

In a stateless setup, the time to have a node provisioned and ready to accept jobs is affected by the time needed to transfer the root image over the network in addition to the boot time of the node with that image. When similar images were deployed in stateless and stateful modes, we found no consistent difference between the boot times, and the results varied between tests. The size of the image accounts for a large part of the boot time in both cases. We plan further tests with smaller satellite images, in which the core image is small and most tools and software are mounted read-only onto the image, to determine the best mode of deployment.
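To give a concrete sense of how the stateless-versus-stateful comparison above could be timed, the sketch below measures the interval from the provisioning request until the node answers on its SSH port. It is a simplified stand-in for the instrumentation actually used in these tests; the readiness probe, node name, and image names are assumptions made only for the example.

```python
"""Sketch: compare time-to-ready for stateful vs. stateless provisioning.

The osimage names and node name are placeholders; the SSH probe is an
assumed readiness criterion, not the check used in the reported tests.
"""
import socket
import subprocess
import time

NODE = "node042"
IMAGES = [
    ("stateful install", "centos5-x86_64-install-compute"),
    ("stateless netboot", "centos5-x86_64-netboot-compute"),
]


def ssh_reachable(host, port=22, timeout=5):
    """Crude readiness probe: can we open a TCP connection to sshd?"""
    try:
        socket.create_connection((host, port), timeout).close()
        return True
    except OSError:
        return False


def time_provision(node, osimage):
    """Return seconds from the provisioning request until sshd answers."""
    start = time.time()
    subprocess.check_call(["nodeset", node, "osimage=" + osimage])
    subprocess.check_call(["rpower", node, "boot"])
    # Wait for the node to go down first (the old OS may still answer
    # briefly), then wait until it answers again under the new image.
    # A real harness would bound both loops with a deadline.
    while ssh_reachable(node):
        time.sleep(5)
    while not ssh_reachable(node):
        time.sleep(10)
    return time.time() - start


if __name__ == "__main__":
    for label, osimage in IMAGES:
        elapsed = time_provision(NODE, osimage)
        print("%s: %.0f s until the node accepted connections" % (label, elapsed))
```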
Acknowledgment

This poster was created by the FutureGrid team. We would especially like to thank Andrew J. Younge, Archit Kulshrestha, Fugang Wang, Joe Rinkovsky, and Gregory Pike for contributing content. This poster was also developed with support from the National Science Foundation (NSF) under Grant No to Indiana University for "FutureGrid: An Experimental, High-Performance Grid Test-bed." Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

(Not reproduced from the poster: the "Sponsored by" logo strip and the "Process View" and "About FutureGrid" panels.)