FutureGrid Venus-C, June 2, 2010, Geoffrey Fox

Similar presentations
FutureGrid and US Cyberinfrastructure Collaboration with EU Symposium on transatlantic EU-U.S. cooperation in the field of large scale research infrastructures.
FutureGrid Overview NSF PI Science of Cloud Workshop Washington DC March Geoffrey Fox
Cosmic Issues and Analysis of External Comments on FutureGrid TG11 Salt Lake City July Geoffrey Fox
Clouds from FutureGrid’s Perspective April Geoffrey Fox Director, Digital Science Center, Pervasive.
Future Grid Introduction March MAGIC Meeting Gregor von Laszewski Community Grids Laboratory, Digital Science.
Student Visits August Geoffrey Fox
Clouds Cyberinfrastructure and Collaboration CTS2010 Chicago IL May Geoffrey Fox
Overview Presented at OGF31 Salt Lake City, July 2011 Geoffrey Fox, Gregor von Laszewski, Renato Figueiredo Contact:
SALSA Digital Science Center June 25, 2010, IIT Geoffrey Fox Judy Qiu School.
FutureGrid Summary TG’10 Pittsburgh BOF on New Compute Systems in the TeraGrid Pipeline August Geoffrey Fox
SC2010 Gregor von Laszewski (*) (*) Assistant Director of Cloud Computing, CGL, Pervasive Technology Institute.
FutureGrid Summary FutureGrid User Advisory Board TG’10 Pittsburgh August Geoffrey Fox
SCSI: Platforms & Foundations: Cyberinfrastructure Socially Coupled Systems & Informatics: Science, Computing & Decision Making in a Complex Interdependent.
FutureGrid Overview David Hancock HPC Manger Indiana University.
FutureGrid: an experimental, high-performance grid testbed Craig Stewart Executive Director, Pervasive Technology Institute Indiana University
Clouds and FutureGrid MSI-CIEC All Hands Meeting SDSC January Geoffrey Fox
FutureGrid Overview CTS Conference 2011 Philadelphia May Geoffrey Fox
Cloud Data mining and FutureGrid SC10 New Orleans LA AIST Booth November Geoffrey Fox
Science in Clouds SALSA Team salsaweb/salsa Community Grids Laboratory, Digital Science Center Pervasive Technology Institute Indiana University.
Overview of Cyberinfrastructure Northeastern Illinois University Cyberinfrastructure Day August Geoffrey Fox
FutureGrid SOIC Lightning Talk February Geoffrey Fox
Science of Cloud Computing Panel Cloud2011 Washington DC July Geoffrey Fox
Experimenting with FutureGrid CloudCom 2010 Conference Indianapolis December Geoffrey Fox
Cloud Architecture for Earthquake Science 7 th ACES International Workshop 6th October 2010 Grand Park Otaru Otaru Japan Geoffrey Fox
Science Clouds and FutureGrid’s Perspective June Science Clouds Workshop HPDC 2012 Delft Geoffrey Fox
FutureGrid Overview Geoffrey Fox
FutureGrid TeraGrid Science Advisory Board San Diego CA July Geoffrey Fox
FutureGrid 100 and 101 (part one) Virtual School for Computational Science and Engineering July Geoffrey Fox
FutureGrid Design and Implementation of a National Grid Test-Bed David Hancock – HPC Manager - Indiana University Hardware & Network.
FutureGrid Overview Dr. Speaker. The goal of FutureGrid is to support the research on the future.
FutureGrid Dynamic Provisioning Experiments including Hadoop Fugang Wang, Archit Kulshrestha, Gregory G. Pike, Gregor von Laszewski, Geoffrey C. Fox.
What’s Hot in Clouds? Analyze (superficially) the ~140 Papers/Short papers/Workshops/Posters/Demos in CloudCom Each paper may fall in more than one category.
Future Grid FutureGrid Overview Geoffrey Fox SC09 November
FutureGrid SC10 New Orleans LA IU Booth November Geoffrey Fox
FutureGrid Connection to Comet Testbed and On Ramp as a Service Geoffrey Fox Indiana University Infra structure.
SALSA HPC Group School of Informatics and Computing Indiana University.
Future Grid Future Grid All Hands Meeting Introduction Indianapolis October Geoffrey Fox
FutureGrid Cyberinfrastructure for Computational Research.
Building Effective CyberGIS: FutureGrid Marlon Pierce, Geoffrey Fox Indiana University.
Cloud Technologies and Data Intensive Applications INGRID 2010 Workshop Poznan Poland May Geoffrey Fox
RAIN: A system to Dynamically Generate & Provision Images on Bare Metal by Application Users Presented by Gregor von Laszewski Authors: Javier Diaz, Gregor.
Research in Grids and Clouds and FutureGrid Melbourne University September Geoffrey Fox
Tutorial Presented at TG2011 Geoffrey Fox, Gregor von Laszewski, Renato Figueiredo, Kate Keahey, Andrew Younge Contact:
FutureGrid BOF Overview TG 11 Salt Lake City July Geoffrey Fox
FutureGrid NSF September Geoffrey Fox
Computing Research Testbeds as a Service: Supporting large scale Experiments and Testing SC12 Birds of a Feather November.
Recipes for Success with Big Data using FutureGrid Cloudmesh SDSC Exhibit Booth New Orleans Convention Center November Geoffrey Fox, Gregor von.
Cloud Computing Paradigms for Pleasingly Parallel Biomedical Applications Thilina Gunarathne, Tak-Lon Wu Judy Qiu, Geoffrey Fox School of Informatics,
FutureGrid Overview. The goal of FutureGrid is to support the research that will invent the future.
SALSA Digital Science Center February 12, 2010, Bloomington Geoffrey Fox Judy Qiu
SALSA Dynamic Virtual Cluster provisioning via XCAT on iDataPlex Supports both stateful and stateless OS images iDataplex Bare-metal Nodes Linux Bare-
Directions in eScience Interoperability and Science Clouds June Interoperability in Action – Standards Implementation.
SALSA HPC Group School of Informatics and Computing Indiana University Workshop on Petascale Data Analytics: Challenges, and.
Lizhe Wang, Gregor von Laszewski, Jai Dayal, Thomas R. Furlani
Private Public FG Network NID: Network Impairment Device
FutureGrid: a Grid Testbed
FutureGrid and Applications
PolarGrid and FutureGrid
Cloud versus Cloud: How Will Cloud Computing Shape Our World?
Presentation transcript:

FutureGrid Venus-C, June 2, 2010
Geoffrey Fox, Director, Digital Science Center, Pervasive Technology Institute; Associate Dean for Research and Graduate Studies, School of Informatics and Computing, Indiana University Bloomington

FutureGrid Concepts
Support development of new applications and new middleware using Cloud, Grid and Parallel computing (Nimbus, Eucalyptus, Hadoop, Globus, Unicore, MPI, OpenMP, Linux, Windows, …), looking at functionality, interoperability, performance, research and education.
Put the "science" back in the computer science of grid/cloud computing by enabling replicable experiments.
Open source software built around Moab/xCAT to support dynamic provisioning from a Cloud to an HPC environment, Linux to Windows, …, with monitoring, benchmarks and support of important existing middleware.
Timeline: June 2010, initial users; September 2010, all hardware (except the IU shared-memory system) accepted and major use starts; October 2011, FutureGrid allocatable via the TeraGrid process.

FutureGrid Hardware
FutureGrid has a dedicated network (except to TACC) and a network fault and delay generator.
Experiments can be isolated on request; IU runs the network for NLR/Internet2.
(Many) additional partner machines will run FutureGrid software and be supported (but allocated in specialized ways).
A sketch of this kind of fault/delay injection appears below.
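
The slide does not describe the impairment device's interface; as a rough, hypothetical stand-in for the kind of fault and delay injection it provides, the sketch below drives the standard Linux tc/netem facility from Python. The interface name and the delay/loss values are made up, and this is not the actual NID.

```python
# Hypothetical sketch: emulate network impairment (delay + loss) on a Linux
# host with tc/netem, in the spirit of FutureGrid's Network Impairment Device.
# Requires root and the iproute2 tools; "eth0", 100ms and 1% are assumptions.
import subprocess

def add_impairment(dev="eth0", delay="100ms", loss="1%"):
    """Attach a netem qdisc that delays packets and drops a fraction of them."""
    subprocess.run(
        ["tc", "qdisc", "add", "dev", dev, "root", "netem",
         "delay", delay, "loss", loss],
        check=True)

def clear_impairment(dev="eth0"):
    """Remove the netem qdisc, restoring normal behaviour."""
    subprocess.run(["tc", "qdisc", "del", "dev", dev, "root"], check=True)

if __name__ == "__main__":
    add_impairment()      # start impairing traffic on eth0
    # ... run the experiment under degraded network conditions ...
    clear_impairment()    # restore the link
```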

FutureGrid Partners
Indiana University (architecture, core software, support)
Purdue University (HTC hardware)
San Diego Supercomputer Center at University of California San Diego (INCA, monitoring)
University of Chicago/Argonne National Labs (Nimbus)
University of Florida (ViNe, education and outreach)
University of Southern California Information Sciences Institute (Pegasus to manage experiments)
University of Tennessee Knoxville (benchmarking)
University of Texas at Austin/Texas Advanced Computing Center (portal)
University of Virginia (OGF, advisory board and allocation)
Center for Information Services and GWT-TUD from Technische Universität Dresden (VAMPIR)
(On the original slide, institutions shown in blue host FutureGrid hardware.)

FutureGrid: a Grid Testbed
IU Cray operational; IU IBM (iDataPlex) completed stability test May 6.
UCSD IBM operational; UF IBM stability test completes ~June 7.
Network, NID and PU HTC system operational.
UC IBM stability test completes ~June 5; TACC Dell awaiting installation (components delivered).
NID: Network Impairment Device.
(Figure: FutureGrid network diagram showing the private and public FG networks.)

Dynamic Provisioning

Interactions with Venus-C
Running jobs on FutureGrid as well as Venus-C; compare performance; compare ease of use.
Running workflows linking processes on Azure and FutureGrid.
Using MapReduce on Azure/FutureGrid (a minimal MapReduce-style sketch follows this list).
Comparing Azure Table with HBase on Linux/Nimbus/Windows running on FutureGrid?
Make the Venus-C framework target FutureGrid.
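
The slide names MapReduce but not a specific job; purely as an illustration of the map and reduce stages such an Azure/FutureGrid comparison would exercise, here is a small local word-count sketch. The use of multiprocessing and the sample input are assumptions; on the real systems the same structure would run under Hadoop, DryadLINQ, or the Azure equivalents mentioned elsewhere in the talk.

```python
# Minimal local MapReduce-style word count, standing in for the kind of job
# one might run on both Azure and FutureGrid/Hadoop for comparison.
# The input lines and the use of multiprocessing.Pool are illustrative only.
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_phase(line):
    """Map: emit a count of 1 per word in one input line."""
    return Counter(line.split())

def reduce_phase(c1, c2):
    """Reduce: merge two partial word-count dictionaries."""
    c1.update(c2)
    return c1

if __name__ == "__main__":
    lines = ["futuregrid runs clouds and grids",
             "venus c runs azure clouds",
             "clouds and grids and clouds"]
    with Pool() as pool:
        partial_counts = pool.map(map_phase, lines)        # map stage in parallel
    totals = reduce(reduce_phase, partial_counts, Counter())  # reduce stage
    print(totals.most_common(3))
```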

Sequence Assembly in the Clouds
(Figures: Cap3 parallel efficiency, and Cap3 per-core, per-file time to process sequences, with 458 reads in each file. How parallel efficiency is computed is sketched below.)
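
The efficiency plot itself did not survive extraction; as a reminder of what is being plotted, parallel efficiency is conventionally defined as speedup divided by the number of cores. The sketch below just shows that arithmetic; the timings in it are hypothetical, not Cap3 measurements.

```python
# Parallel efficiency as conventionally defined: E(p) = T(1) / (p * T(p)),
# i.e. speedup over p cores divided by p. Times below are hypothetical,
# not measured Cap3 numbers.
def parallel_efficiency(t_serial, t_parallel, cores):
    speedup = t_serial / t_parallel
    return speedup / cores

# Example: a job taking 1000 s on 1 core and 40 s on 32 cores
print(parallel_efficiency(1000.0, 40.0, 32))   # 0.78125 -> ~78% efficiency
```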

Cost to assemble / process 4096 FASTA files, ~1 GB (458 reads × 4096 ≈ 1.9 million reads)
Amazon AWS total: 11.19 $
– Compute: 1 hour × 16 HCXL (0.68 $ × 16) = 10.88 $
– SQS messages = 0.01 $
– Storage per 1 GB per month = 0.15 $
– Data transfer out per 1 GB = 0.15 $
Azure total (sum of the items below): 15.62 $
– Compute: 1 hour × 128 small (0.12 $ × 128) = 15.36 $
– Queue messages = 0.01 $
– Storage per 1 GB per month = 0.15 $
– Data transfer in/out per 1 GB = 0.10 $
Tempest (amortized): 9.43 $
– 24 cores × 32 nodes, 48 GB per node
– Assumptions: 70% utilization, write-off over 3 years, support included
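
In the extracted slide the compute subtotals and the Azure total came through blank; the short Python check below re-derives them from the unit prices the slide does list (the AWS sum matches the slide's 11.19 $, which supports the reconstruction). The Tempest figure of 9.43 $ depends on purchase and support costs not shown, so it is not recomputed.

```python
# Re-derive the cloud cost figures from the unit prices listed on the slide.
# Only the listed line items are summed; the Tempest amortized cost (9.43 $)
# needs hardware purchase/support prices the slide does not give.
aws_items = {
    "compute (1 h x 16 HCXL @ 0.68 $)": 0.68 * 16,      # 10.88 $
    "SQS messages": 0.01,
    "storage per GB-month": 0.15,
    "data transfer out per GB": 0.15,
}
azure_items = {
    "compute (1 h x 128 small @ 0.12 $)": 0.12 * 128,   # 15.36 $
    "queue messages": 0.01,
    "storage per GB-month": 0.15,
    "data transfer in/out per GB": 0.10,
}
print(f"AWS total:   {sum(aws_items.values()):.2f} $")    # 11.19 $, as on the slide
print(f"Azure total: {sum(azure_items.values()):.2f} $")  # 15.62 $
```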

Dynamic Virtual Clusters
Switchable clusters on the same hardware (~5 minutes between different OS, such as Linux+Xen to Windows+HPCS).
Support for virtual clusters.
SW-G: Smith-Waterman-Gotoh dissimilarity computation, a pleasingly parallel problem suitable for MapReduce-style applications.
(Diagram: Dynamic Cluster Architecture – a pub/sub broker network with summarizer, switcher and monitoring interface controls two sets of 32 iDataPlex bare-metal nodes through XCAT infrastructure; nodes run Linux bare-system, Linux on Xen, or Windows Server 2008 bare-system, with SW-G using Hadoop or DryadLINQ, under a monitoring and control infrastructure.)
A hypothetical sketch of driving such an OS switch appears below.
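
The slide does not show how the switch is driven; as a loose, hypothetical sketch of the idea (point bare-metal nodes at a different OS image, then reboot them), the code below wraps xCAT-style nodeset and rpower commands from Python. The node range, image names, and the osimage attribute value are assumptions, not FutureGrid's actual configuration.

```python
# Hypothetical sketch of switching a set of bare-metal nodes between OS images,
# in the spirit of the dynamic virtual clusters slide. Command names follow
# xCAT (nodeset, rpower); the node range and image names are made up.
import subprocess

def switch_cluster(noderange, osimage):
    """Point the nodes at a new OS image and reboot them to pick it up."""
    subprocess.run(["nodeset", noderange, f"osimage={osimage}"], check=True)
    subprocess.run(["rpower", noderange, "boot"], check=True)

if __name__ == "__main__":
    # e.g. flip 32 iDataPlex nodes from a Linux image to a Windows HPCS image
    switch_cluster("idp[001-032]", "win2008-hpcs")
    # ... run Windows/DryadLINQ experiments, then switch back ...
    switch_cluster("idp[001-032]", "rhel-xen")
```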
