FutureGrid NSF September 15 2010 Geoffrey Fox


FutureGrid NSF, September 15, 2010
Geoffrey Fox
Director, Digital Science Center, Pervasive Technology Institute
Associate Dean for Research and Graduate Studies, School of Informatics and Computing
Indiana University Bloomington

FutureGrid key Concepts I
FutureGrid provides a testbed with a wide variety of computing services to its users
– Supports users developing new applications and new middleware using cloud, grid, and parallel computing (hypervisors: Xen, KVM, ScaleMP; OSes: Linux, Windows; middleware: Nimbus, Eucalyptus, Hadoop, Globus, Unicore, MPI, OpenMP, ...)
– Software supported by FutureGrid or by users
– ~5000 dedicated cores distributed across the country
The FutureGrid testbed provides its users:
– A rich development and testing platform for middleware and application users looking at interoperability, functionality, and performance
– Reproducibility: each use of FutureGrid is an experiment that can be reproduced
– A rich education and teaching platform for advanced cyberinfrastructure classes
– The ability to collaborate with US industry on research projects

FutureGrid key Concepts II
Cloud infrastructure supports loading of general images on hypervisors like Xen; FutureGrid dynamically provisions software as needed onto bare metal using a Moab/xCAT-based environment (a sketch of this flow follows below)
Key early user-oriented milestones:
– June 2010: initial users
– November 2010 to September 2011: increasing number of users, allocated by FutureGrid
– October 2011: FutureGrid allocatable via the TeraGrid process
To apply for FutureGrid access or to get help, go to the FutureGrid homepage; alternatively, send email to the support address. You should receive an automated reply within minutes, and contact from a live human no later than the next (U.S.) business day after sending a message. Please email the PI if problems arise.
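To make the bare-metal provisioning step concrete, here is a minimal sketch, in Python, of the kind of wrapper a Moab job hook might run. The wrapper itself, the node name, and the image name are hypothetical illustrations; `nodeset` and `rpower` are real xCAT commands, but exact image names and flags vary by deployment.

```python
import subprocess

def provision_bare_metal(node: str, osimage: str) -> None:
    """Repurpose a node onto a new bare-metal image via xCAT.

    A minimal sketch: in a Moab/xCAT environment, the scheduler would
    trigger a step like this when a job requests an environment that is
    not currently deployed. `node` and `osimage` are example values.
    """
    # Point the node's next network boot at the requested xCAT image.
    subprocess.run(["nodeset", node, f"osimage={osimage}"], check=True)
    # Power-cycle the node so it PXE-boots into that image.
    subprocess.run(["rpower", node, "boot"], check=True)

if __name__ == "__main__":
    # Hypothetical node and image names, for illustration only.
    provision_bare_metal("i136", "rhels5.5-x86_64-statelite")
```

In FutureGrid's design this step sits behind a higher-level request ("give me 30 nodes with image X"), with the scheduler deciding when nodes may safely be reimaged.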

FutureGrid Partners
– Indiana University (architecture, core software, support); a collaboration between research and infrastructure groups
– Purdue University (HTC hardware)
– San Diego Supercomputer Center at the University of California San Diego (Inca, monitoring)
– University of Chicago / Argonne National Laboratory (Nimbus)
– University of Florida (ViNe, education and outreach)
– University of Southern California Information Sciences Institute (Pegasus, to manage experiments)
– University of Tennessee Knoxville (benchmarking)
– University of Texas at Austin / Texas Advanced Computing Center (portal)
– University of Virginia (OGF, advisory board, and allocation)
– Center for Information Services and GWT-TUD at Technische Universität Dresden (Vampir)
Institutions hosting FutureGrid hardware (shown in red on the slide): IU, Purdue, SDSC, UC, UF, and TACC

FutureGrid Organization
(Organization chart: PI with an Advisory Committee; an Executive Committee of the PI and co-PIs; a Project Manager and Software Architect; an Operations and Change Management Committee; and groups for Computers and Network, Software (Core, Performance, Images/Appliances, Portal/Web Site), User Support (Basic Support, Advanced User Support, Training/Education/Outreach), and Systems Management.)

Compute Hardware
Dynamically configurable systems:
– IBM iDataPlex (IU*), operational
– Dell PowerEdge (TACC), being installed
– IBM iDataPlex (UC), operational
– IBM iDataPlex (SDSC), operational
Systems not dynamically configurable:
– Cray XT5m (IU*), operational
– Shared-memory system, TBD (IU*), new system
– IBM iDataPlex (UF), operational
– High-throughput cluster (PU), not yet integrated

Storage Hardware
– DDN 9550 (Data Capacitor): 339 TB, Lustre, IU (existing system)
– DDN: GPFS, UC (new system)
– SunFire x4170: 96 TB, ZFS, SDSC (new system)
– Dell MD3000: 30 TB, NFS, TACC (new system)

Network & Internal Interconnects
FutureGrid has a dedicated network (except to TACC) and a network fault and delay generator
Can isolate experiments on request; IU runs the network for NLR/Internet2
(Many) additional partner machines will run FutureGrid software and be supported (but allocated in specialized ways)
Machines and their internal networks:
– IU Cray (xray): Cray 2D torus, SeaStar
– IU iDataPlex (india): DDR InfiniBand, QLogic switch with Mellanox ConnectX adapters; Blade Network Technologies and Force10 Ethernet switches
– SDSC iDataPlex (sierra): DDR InfiniBand, Cisco switch with Mellanox ConnectX adapters; Juniper Ethernet switches
– UC iDataPlex (hotel): DDR InfiniBand, QLogic switch with Mellanox ConnectX adapters; Blade Network Technologies and Juniper switches
– UF iDataPlex (foxtrot): Gigabit Ethernet only (Blade Network Technologies; Force10 switches)
– TACC Dell (alamo): QDR InfiniBand, Mellanox switches and adapters; Dell Ethernet switches

FutureGrid: a Grid/Cloud/HPC Testbed
– IU Cray operational
– IU, UCSD, UF, and UC IBM iDataPlex systems operational
– Network and NID (Network Impairment Device) operational
– TACC Dell finished acceptance tests
(Slide figure: private/public FG network diagram with Inca node operating-mode statistics.)

Network Impairment Device
Spirent XGEM network impairments simulator for jitter, errors, delay, etc.
– Full bidirectional 10G with 64-byte packets
– Up to 15 seconds of introduced delay (in 16 ns increments)
– 0-100% introduced packet loss, in 0.0001% increments
– Packet manipulation in the first 2000 bytes
– Up to 16k frame size
– TCL for scripting, HTML for manual configuration
Need more proposals to use it (have one from the University of Delaware); an analogous open-source sketch follows
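The XGEM itself is driven through its own TCL and HTML interfaces, but for readers who want to reproduce similar impairments on commodity hardware, here is a minimal sketch using Linux `tc netem`, an open-source stand-in rather than the FutureGrid NID; the interface name and impairment values are illustrative assumptions.

```python
import subprocess

def impair(interface: str, delay_ms: int, jitter_ms: int, loss_pct: float) -> None:
    """Apply netem delay, jitter, and packet loss to an interface.

    Illustrative stand-in for the Spirent XGEM's delay/loss knobs;
    requires root and the iproute2 `tc` tool.
    """
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", interface, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "loss", f"{loss_pct}%"],
        check=True,
    )

def clear(interface: str) -> None:
    """Remove the impairment qdisc, restoring normal behavior."""
    subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"], check=True)

if __name__ == "__main__":
    # Example values only: 100 ms delay with 10 ms jitter, 0.01% loss.
    impair("eth0", delay_ms=100, jitter_ms=10, loss_pct=0.01)
```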

FutureGrid Usage Model
The goal of FutureGrid is to support research on the future of distributed, grid, and cloud computing
FutureGrid will build a robustly managed simulation environment and testbed to support the development and early scientific use of new technologies at all levels of the software stack, from networking to middleware to scientific applications
The environment will mimic TeraGrid and/or general parallel and distributed systems
– FutureGrid is part of TeraGrid (but not part of the formal TeraGrid process for the first two years)
– Supports grids, clouds, and classic HPC
– It will mimic commercial clouds (initially IaaS, not PaaS)
– Expect FutureGrid PaaS to grow in importance
FutureGrid can be considered a (small, ~5000-core) science / computer-science cloud, but it is more accurately a virtual-machine- or bare-metal-based simulation environment
This testbed will succeed if it enables major advances in science and engineering through collaborative development of science applications and related software

Some Current FutureGrid Early Uses
– Investigate metascheduling approaches on the Cray and iDataPlex
– Deploy Genesis II and Unicore endpoints on the Cray and iDataPlex clusters
– Develop new Nimbus cloud capabilities
– Prototype applications (BLAST) across multiple FutureGrid clusters and Grid'5000
– Compare Amazon and Azure with FutureGrid hardware running Linux, Linux on Xen, or Windows for data-intensive applications
– Test ScaleMP software shared memory for genome assembly
– Develop genetic algorithms on Hadoop for optimization
– Attach power-monitoring equipment to iDataPlex nodes to study power use versus usage characteristics
– Industry (Columbus, IN) running CFD codes to study combustion strategies that maximize energy efficiency
– Support evaluation needed by the XD TIS and TAS services
– Investigate performance of the Kepler workflow engine
– Study scalability of SAGA in different latency scenarios
– Test and evaluate new algorithms for phylogenetics/systematics research in the CIPRES portal
– Investigate performance overheads of clouds in parallel and distributed environments
– Support tutorials and classes in cloud, grid, and parallel computing (IU, Florida, LSU)
~12 active/finished users out of ~32 early user applicants
Two recent projects:

Grid Interoperability (from Andrew Grimshaw)
Colleagues,
FutureGrid has, as two of its many goals, the creation of a grid-middleware testing and interoperability testbed, as well as the maintenance of standards-compliant endpoints against which experiments can be executed. We at the University of Virginia are tasked with bringing up three stacks as well as maintaining standards-compliant endpoints against which these experiments can be run.
We currently have UNICORE 6 and Genesis II endpoints functioning on X-Ray (a Cray). Over the next few weeks we expect to bring two additional resources, India and Sierra (essentially Linux clusters), online in a similar manner (Genesis II is already up on Sierra). As called for in the FutureGrid program execution plan, once those two stacks are operational we will begin work on gLite (with help, we may be able to accelerate that). Other standards-compliant endpoints are welcome in the future, but are not part of the current funding plan.
I'm writing the PGI and GIN working groups to see if there is interest in using these resources (endpoints) as part of either the GIN or PGI work, in particular in demonstrations or projects for OGF in October or SC in November. One of the key differences between these endpoints and others is that they can be expected to persist: these resources will not go away when a demo is done. They will be there as a testbed for future application and middleware development (e.g., a metascheduler that works across gLite and UNICORE 6).
Reply from the RENKEI/NAREGI project: We are interested in participating in interoperation demonstrations or projects for OGF and SC. We have prototype middleware that can submit and receive jobs using the HPCBP specification. Could we have more detailed information about your endpoints (authentication, data-staging method, and so on) and the conditions for participating in the demonstrations/projects?

OGF'10 Demo
(Slide figure: five Nimbus sites, SDSC, UF, and UC in the US plus Lille, Rennes, and Sophia on Grid'5000, connected across the Grid'5000 firewall.)
ViNe provided the necessary inter-cloud connectivity to deploy CloudBLAST across the 5 Nimbus sites, with a mix of public and private subnets.

Typical Performance Study
(Slide figure: bioinformatics performance comparison across Linux, Linux on VM, Windows, Azure, and Amazon.)

Education on FutureGrid
– Build up tutorials on supported software
– Support development of curricula requiring privileges and system-destruction capabilities that are hard to grant on conventional TeraGrid
– Offer a suite of appliances (customized VM-based images) supporting online laboratories
– Supporting ~200 students in the Virtual Summer School on "Big Data" in July 2010 with a set of certified images: the first offering of the FutureGrid 101 class
– TeraGrid '10 tutorial "Cloud technologies, data-intensive science and the TG"; CloudCom conference tutorials Nov 30-Dec
– Experimental class use in the fall semester at Indiana, Florida, and LSU

NCSA Summer School Workshop, July 26-30, 2010
Students learning about Twister & Hadoop MapReduce technologies, supported by FutureGrid.
Participating institutions: University of Arkansas, Indiana University, University of California at Los Angeles, Penn State, Iowa State, Univ. Illinois at Chicago, University of Minnesota, Michigan State, Notre Dame, University of Texas at El Paso, IBM Almaden Research Center, Washington University, San Diego Supercomputer Center, University of Florida, Johns Hopkins.
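For a flavor of what those students were writing, here is a minimal word-count example in the Hadoop Streaming style (a mapper and reducer reading stdin and writing tab-separated key/value pairs). This is a generic illustration, not course material from the workshop, and the jar name and paths in the comment are assumptions.

```python
#!/usr/bin/env python
"""Minimal Hadoop Streaming word count. Illustrative invocation:
  hadoop jar hadoop-streaming.jar -mapper 'wordcount.py map' \
      -reducer 'wordcount.py reduce' -input in/ -output out/
"""
import sys

def mapper():
    # Emit "<word>\t1" for every word on stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Streaming delivers keys sorted, so sum runs of equal words.
    current, count = None, 0
    for line in sys.stdin:
        word, _, n = line.rstrip("\n").partition("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```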

FutureGrid Layered Software Stack
(Slide figure: layered stack diagram; the top layer is user-supported software usable in experiments, e.g. OpenNebula, Charm++, other MPI implementations, BigTable.)

Software Components
– Portals, including "Support", "Use FutureGrid", and "Outreach"
– Monitoring: Inca, power (GreenIT)
– Experiment Manager: specify/workflow
– Image generation and repository
– Intercloud networking (ViNe)
– Virtual clusters built with virtual networks
– Performance library
– Rain, or Runtime Adaptable InsertioN service: schedules and deploys images
– Security (including use of an isolated network), authentication, authorization

FutureGrid Software Architecture
– A flexible architecture allows one to configure resources based on images
– Managed images allow the creation of similar experiment environments
– Experiment management allows reproducible activities
– Through our modular design, we allow different clouds and images to be "rained" onto hardware
– Note: will eventually be supported at "TeraGrid production quality"
– Will support deployment of "important" middleware including the TeraGrid stack, Condor, BOINC, gLite, Unicore, Genesis II, MapReduce, BigTable, ...; will accumulate more supported software as the system is used!
– Will support links to external clouds, GPU clusters, etc.; Grid'5000 was an initial highlight, with an OGF29 Hadoop deployment spanning Grid'5000 and FutureGrid; interested in more external system collaborators!

Dynamic Provisioning Examples
Need to provision:
– Linux or Windows OS
– Linux or Windows OS on hypervisors (KVM, Xen, ScaleMP)
– Appliances: OS plus application/middleware on bare metal or hypervisors
Example requests:
– Give me a virtual cluster with 30 nodes based on Xen
– Give me 15 KVM nodes each in Chicago and Texas, linked to Azure and Grid'5000
– Give me a Eucalyptus environment with 10 nodes (a sketch follows)
– Give me 32 MPI nodes running first Linux and then Windows, with Cray / iDataPlex / Dell comparisons
– Give me a Hadoop or Dryad environment with 160 nodes, compared with Amazon and Azure
– Give me 1000 BLAST instances linked to Grid'5000
– Give me two 8-node (64-core) ScaleMP instances, on Alamo and India
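Because Eucalyptus exposes an EC2-compatible API, a request like "a Eucalyptus environment with 10 nodes" can be sketched with the classic boto 2 library. The endpoint host, credentials, and image ID below are placeholders, not actual FutureGrid values, and the port/path shown are the conventional Eucalyptus defaults.

```python
import boto
from boto.ec2.regioninfo import RegionInfo

# Placeholder endpoint and credentials: substitute the values issued
# for your own Eucalyptus cloud. emi-XXXXXXXX is a hypothetical image ID.
region = RegionInfo(name="eucalyptus", endpoint="euca.example.org")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    is_secure=False,
    region=region,
    port=8773,
    path="/services/Eucalyptus",
)

# Ask for a 10-node environment built from one machine image.
reservation = conn.run_instances(
    "emi-XXXXXXXX", min_count=10, max_count=10, instance_type="m1.small"
)
for instance in reservation.instances:
    print(instance.id, instance.state)
```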

Dynamic Provisioning Experiment Logical View

Dynamic Provisioning Results
(Slide figure: time in minutes versus number of nodes.)
Time elapsed between requesting a job and the job's reported start time on the provisioned node. The numbers here are an average of two sets of experiments.

Provisioning Times for Nodes in a 32-Node Request
(Slide figure: per-node switching time in minutes.)
The nodes took an average of 3 minutes and 45 seconds to switch from the stateful to the stateless image, with a standard deviation of 14 seconds.
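As an aside on methodology, the mean and standard deviation reported above can be computed from per-node (request, reported-start) timestamp pairs; here is a minimal sketch, with invented timestamps purely for illustration.

```python
from datetime import datetime
from statistics import mean, stdev

def elapsed_minutes(requested: str, started: str) -> float:
    """Minutes between a job request and its reported start time."""
    fmt = "%Y-%m-%d %H:%M:%S"
    delta = datetime.strptime(started, fmt) - datetime.strptime(requested, fmt)
    return delta.total_seconds() / 60.0

# Invented example timestamps, one (request, start) pair per node.
samples = [
    ("2010-08-01 10:00:00", "2010-08-01 10:03:41"),
    ("2010-08-01 10:00:00", "2010-08-01 10:03:52"),
    ("2010-08-01 10:00:00", "2010-08-01 10:04:02"),
]
times = [elapsed_minutes(req, start) for req, start in samples]
print(f"mean = {mean(times):.2f} min, stdev = {stdev(times):.2f} min")
```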

Phase III Process View

Security Issues
Need to provide dynamic, flexible usability while preserving system security
Still an evolving process, but the initial approach involves:
– Encouraging use of the "as a Service" approach, e.g. "database as a service" rather than "database in your image"; clearly possible for some cases, as in "Hadoop as a service"
– Commercial clouds use aaS for databases, queues, tables, storage, ...
– This makes complexity linear in the number of features rather than exponential: with n optional features, offering each as a service means supporting n components, whereas baking every combination into images could require up to 2^n vetted images
– Have a suite of vetted images (here "images" includes customized appliances) that can be used by users with suitable roles
– Such images typically do not allow root access; they can be VM-based or not
– Users can create images and request that they be vetted
– "Privileged images" (e.g. allowing root access) use VMs and network isolation

FutureGrid Interaction with Commercial Clouds
– We support experiments that link commercial clouds and FutureGrid, with one or more workflow environments and portal technology installed to link components across these platforms
– We support environments on FutureGrid that are similar to commercial clouds and natural for performance and functionality comparisons; these can both be used to prepare for using commercial clouds and serve as the most likely starting point for porting to them. One example is support of MapReduce-like environments on FutureGrid, including Hadoop on Linux and Dryad on Windows HPCS, which are already part of the FutureGrid portfolio of supported software
– We develop expertise in, and support porting to, commercial clouds from other Windows or Linux environments
– We support comparisons between, and integration of, multiple commercial cloud environments, especially Amazon and Azure in the immediate future
– We develop tutorials and expertise to help users move to commercial clouds from other environments

FutureGrid Viral Growth Model
– Users apply for a project
– Users improve or develop some software in the project
– The project leads to new images, which are placed in the FutureGrid repository
– Project reports and other web pages document use of the new images
– The images are used by other users
– And so on, ad infinitum ...

200 papers submitted to main track; 4 days of tutorials