FutureGrid Introduction, March 8, 2010, MAGIC Meeting. Gregor von Laszewski, Community Grids Laboratory, Digital Science Center.

Presentation transcript:

FutureGrid Introduction, March 8, 2010, MAGIC Meeting. Gregor von Laszewski, Community Grids Laboratory, Digital Science Center, Pervasive Technology Institute, Indiana University.

FutureGrid Goals
- Support research on the future of distributed, grid, and cloud computing.
- Build a robustly managed simulation environment, or testbed, to support the development and early use in science of new technologies at all levels of the software stack: from networking to middleware to scientific applications.
- Mimic TeraGrid and/or general parallel and distributed systems. FutureGrid is part of TeraGrid and one of two experimental TeraGrid systems (the other is GPU based).
- Enable major advances in science and engineering through collaborative development of science applications and related software using FutureGrid.
- FutureGrid is a (small, 5600 core) Science Cloud, but it is more accurately a virtual-machine-based simulation environment.

FutureGrid Hardware (diagram): the network fault generator and the other systems running FutureGrid.

FutureGrid Hardware
- FutureGrid has a dedicated network (except to TACC) and a network fault and delay generator.
- Experiments can be isolated on request; IU runs the network for NLR/Internet2.
- Many additional partner machines will run FutureGrid software and be supported (but allocated in specialized ways).

Storage Hardware

System Type                  Capacity (TB)   File System   Site   Status
DDN 9550 (Data Capacitor)    339             Lustre        IU     Existing system
DDN                          -               GPFS          UC     New system
SunFire x4170                72              Lustre/PVFS   SDSC   New system
Dell MD3000                  30              NFS           TACC   New system

Logical Diagram (figure)

FutureGrid Partners
- Indiana University (architecture, core software, support)
- Purdue University (HTC hardware)
- San Diego Supercomputer Center at the University of California San Diego (Inca, monitoring)
- University of Chicago / Argonne National Labs (Nimbus)
- University of Florida (ViNE, education and outreach)
- University of Southern California Information Sciences Institute (Pegasus, to manage experiments)
- University of Tennessee Knoxville (benchmarking)
- University of Texas at Austin / Texas Advanced Computing Center (portal)
- University of Virginia (OGF, advisory board, and allocation)
- Center for Information Services and GWT-TUD at Technische Universität Dresden (VAMPIR)
Partner institutions shown in blue on the slide have FutureGrid hardware.

Other Important Collaborators
- NSF
- Early users, from both an application and a computer science perspective, and from both research and education
- Grid'5000/Aladin and D-Grid in Europe
- Commercial partners such as Eucalyptus and Microsoft (Dryad + Azure); note that Azure is currently external to FutureGrid, as are the GPU systems
- Application partners
- TeraGrid
- Open Grid Forum
- OpenNebula, Open Cirrus Testbed, Open Cloud Consortium, Cloud Computing Interoperability Forum
- IBM-Google-NSF Cloud, UIUC Cloud

FutureGrid Usage Scenarios
- Developers of end-user applications who want to develop new applications in cloud or grid environments, including analogs of commercial cloud environments such as Amazon or Google. ("Is a Science Cloud for me?")
- Developers of end-user applications who want to experiment with multiple hardware environments.
- Grid/cloud middleware developers who want to evaluate new versions of middleware or new systems.
- Networking researchers who want to test and compare different networking solutions in support of grid and cloud applications and middleware. (Some types of networking research will likely best be done through the GENI program.)
- Education as well as research.
- Interest in performance requires close-to-OS support.

FutureGrid Users
- Application/scientific users
- System administrators
- Software developers
- Testbed users
- Performance modelers
- Educators
- Students
All supported by FutureGrid infrastructure and software offerings.

Management Structure (diagram)

FutureGrid Working Groups
- Systems Administration and Network Management Committee: responsible for all matters related to systems administration, network management, and security. David Hancock of IU will be the inaugural chair.
- Software Adaptation, Implementation, Hardening, and Maintenance Committee: responsible for all aspects of software creation and management; it should interface with TAIS in TeraGrid. Gregor von Laszewski of IU will chair this committee.
- Performance Analysis Committee: responsible for coordinating performance analysis activities. Shava Smallen of UCSD will be the inaugural chair.
- Training, Education, and Outreach Services Committee: coordinates training, education, and outreach service activities; chaired by Renato Figueiredo.
- User Support Committee: coordinates the management of online help information, telephone support, and advanced user support. Jonathan Bolte of IU will chair this committee.
- Operations and Change Management Committee (including the CCB): responsible for operational management of FutureGrid; it is the one committee that will always include at least one member from every participating institution, including those participating without funding. Led by Craig Stewart.

FutureGrid Architecture
- The open architecture allows resources to be configured based on images.
- Managed images make it possible to create similar experiment environments.
- Experiment management allows activities to be reproduced.
- The modular design allows different clouds and images to be "rained" onto the hardware.
- FutureGrid will be supported 24x7 at "TeraGrid production quality".
- It will support deployment of "important" middleware, including the TeraGrid stack, Condor, BOINC, gLite, Unicore, and Genesis II.

FutureGrid Architecture (diagram)

Development Phases
- Phase 0: Get the hardware to run
- Phase I: Get early users onto the system
- Phase II: Implement dynamic provisioning
- Phase III: Integrate with TeraGrid

Objectives: Software
- Extensions to existing software
- Existing open-source software
- An open-source, integrated suite of software to instantiate and execute grid and cloud experiments: perform an experiment and collect the results
- Tools for instantiating a test environment: Torque, Moab, xCAT, bcfg, Pegasus, Inca, ViNE, and a number of other tools from our partners and the open-source community
- A portal to interact with
- Benchmarking

FG Stratosphere
Objective:
- Operates at a level above any particular cloud
- Provides all the mechanisms to provision a cloud on given FG hardware
- Allows the management of reproducible experiments
- Allows monitoring of the environment and the results
Risks:
- Lots of software
- Possibly multiple paths to do the same thing
Good news:
- We know about the different solutions and have identified a very good plan with risk mitigation strategies

Rain: Runtime Adaptable InsertioN Service
Objective:
- Provide dynamic provisioning
- Run outside virtualization
- Cloud neutral: Nimbus, Eucalyptus, ...
- Future oriented: Dryad, ...
Risks:
- Some frameworks (e.g., Microsoft's) are more complex to provision

Dynamic Provisioning
- Change the underlying system to support current user demands: Linux, Windows, Xen, Nimbus, Eucalyptus, Hadoop, Dryad.
- Switching between Linux and Windows is possible!
- Stateless images: shorter boot times, easier to maintain.
- Stateful installs: Windows.
- Use Moab to trigger changes and xCAT to manage installs (see the sketch below).
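The slide describes this mechanism only at a high level; the following minimal Python sketch is illustrative rather than FutureGrid's actual provisioning code, and the image names and selection logic are assumptions. It shows the kind of mapping from a requested environment to a stateless or stateful image that Moab could trigger and xCAT could then install.

    # Illustrative sketch only: image names and this selection logic are assumptions.
    STATELESS_IMAGES = {   # netboot images: shorter boot times, easier to maintain
        "linux": "rhel5-stateless",
        "nimbus": "nimbus-stateless",
        "eucalyptus": "eucalyptus-stateless",
        "hadoop": "hadoop-stateless",
    }
    STATEFUL_IMAGES = {    # full installs, e.g. Windows
        "windows": "winhpc-stateful",
    }

    def choose_image(environment):
        """Return (image name, stateless flag) for a requested environment."""
        env = environment.lower()
        if env in STATELESS_IMAGES:
            return STATELESS_IMAGES[env], True
        if env in STATEFUL_IMAGES:
            return STATEFUL_IMAGES[env], False
        raise ValueError("unsupported environment: " + environment)

    # Moab would trigger this decision; xCAT would perform the actual install.
    image, stateless = choose_image("hadoop")
    print("provision %s (stateless=%s) via xCAT" % (image, stateless))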

Dynamic Provisioning (diagram)

Dynamic Provisioning Examples
- Give me a virtual cluster with 30 nodes.
- Give me a Eucalyptus environment with 10 nodes.
- Give me a Hadoop environment with x nodes.
- Run my application on Hadoop, Dryad, ... and compare the performance.

Command Line
fg-deploy-image (deploys an image on a host) takes:
- host name
- image name
- start time
- end time
- label name
fg-add (adds a feature to a deployed image) takes:
- label name
- framework, e.g. hadoop
- version, e.g. 1.0
An illustrative invocation is sketched below.
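The slide lists the option names but not the flag syntax, so the Python snippet below only sketches how the two commands might be invoked; the flag spellings, host, image, times, and label values are hypothetical.

    # Hypothetical invocation sketch: flag spellings and values are assumptions.
    deploy_cmd = [
        "fg-deploy-image",
        "--host", "india001",           # host name (example)
        "--image", "hadoop-cluster",    # image name (example)
        "--start", "2010-03-10T08:00",  # start time
        "--end", "2010-03-10T18:00",    # end time
        "--label", "hadoop-exp-1",      # label name
    ]
    add_cmd = [
        "fg-add",
        "--label", "hadoop-exp-1",      # label of the deployed image
        "--framework", "hadoop",        # framework to add
        "--version", "1.0",             # framework version
    ]
    print(" ".join(deploy_cmd))         # deploys an image on a host
    print(" ".join(add_cmd))            # adds a feature to that image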

xCAT and Moab
xCAT:
- uses the installation infrastructure to perform installs
- creates stateless Linux images
- changes the boot configuration of the nodes
- provides remote power control and console access (IPMI)
Moab:
- meta-schedules over resource managers (TORQUE and Windows HPC)
- controls nodes through xCAT: changing the OS, remote power control
A sketch of this flow follows below.
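As a rough illustration of that flow, the sketch below drives xCAT's nodeset and rpower commands from Python to point a node at a different image and reboot it. The node and image names are made up, and the exact nodeset syntax can differ between xCAT versions, so treat this as an assumption-laden sketch rather than a recipe.

    import subprocess

    def reprovision(node, osimage):
        """Point a node at a new OS image with xCAT, then reboot it over IPMI."""
        # nodeset records which image the node should boot next
        # (verify the osimage syntax against your xCAT version).
        subprocess.run(["nodeset", node, "osimage=" + osimage], check=True)
        # rpower power-cycles the node so it picks up the new image.
        subprocess.run(["rpower", node, "boot"], check=True)

    if __name__ == "__main__":
        reprovision("i101", "hadoop-stateless")   # hypothetical node and image names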

Experiment Manager
Objective:
- Manage the provisioning for reproducible experiments
- Coordinate the workflow of experiments
- Share workflows and experiment images
- Minimize space through reuse
Risks:
- Images are large
- Users have different requirements and need different images
A hypothetical experiment record is sketched below.
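The slides do not define how an experiment is described, so the snippet below is a purely hypothetical sketch of the kind of record such a manager might keep so that provisioning and workflow steps can be reproduced and shared images reused; every field name is an assumption.

    # Purely illustrative: every field name here is an assumption.
    experiment = {
        "name": "hadoop-wordcount-scaling",
        "image": "hadoop-stateless",   # shared image, reused to save space
        "nodes": 10,
        "workflow": ["deploy image", "start framework", "run application", "collect results"],
    }

    def rerun(exp):
        """Re-provision and re-run a stored experiment description."""
        print("provision %d nodes with image %s" % (exp["nodes"], exp["image"]))
        for step in exp["workflow"]:
            print("execute step: " + step)

    rerun(experiment)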

User Portal (screenshot)

FG Information Portal (screenshot)

FG Administration Portal (screenshot)

Integration within TeraGrid / TeraGrid XD
- FutureGrid is, of course, part of TeraGrid.
- Allocation: separate from TG processes for two years.
- It is a very exciting project, but it will take effort; TG may change, so it is good that we can wait.
- We are looking for early adopters!

Milestones
- Oct. 2009: Project start
- Nov. 2009: SC demo
- Mar. 2010: Network completed
- May 2010: Hardware available to early users
- Sept. 2010: Hardware available to general users
- Oct. 2011: Integration into TeraGrid
- Oct.: Project end

Questions
- Please contact Gregor via mail.
- We will update our web page soon.