4 The Nimbus Workspace Service
[Diagram: the VWS Service managing a set of pool nodes]
5 The Nimbus Workspace Service
- The workspace service publishes information about each workspace
- Users can find out information about their workspace (e.g., what IP the workspace was bound to)
- Users can interact directly with their workspaces the same way they would with a physical machine
[Diagram: the VWS Service managing a set of pool nodes]
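The query-then-connect pattern described above can be sketched in a few lines. This is a minimal illustration only: `describe_workspace` is a hypothetical stand-in for a real client call (e.g., a WSRF query or an EC2-style DescribeInstances request), stubbed here with canned data so the sketch runs offline; the workspace ID and IP are invented example values.

```python
# Hypothetical stub standing in for a real workspace-service query;
# the IDs, states, and IP below are canned example data.
_FAKE_STATE = {"ws-042": ("running", "128.135.125.27")}

def describe_workspace(workspace_id):
    """Return published information about a workspace (stubbed)."""
    state, ip = _FAKE_STATE.get(workspace_id, ("pending", None))
    return {"id": workspace_id, "state": state, "ip": ip}

def wait_for_ip(workspace_id):
    """Return the bound IP once the workspace reaches the 'running' state."""
    info = describe_workspace(workspace_id)
    if info["state"] == "running":
        return info["ip"]
    return None  # still pending; a real client would poll again

print(wait_for_ip("ws-042"))  # prints the IP the workspace was bound to
```

Once the IP is known, the user can ssh to it exactly as they would to a physical machine.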
6 Nimbus: Cloud Computing for Science
- Allow providers to build clouds
  - Workspace Service: a service providing EC2-like functionality
  - WSRF and WS (EC2) interfaces
- Allow users to use cloud computing
  - Do whatever it takes to enable scientists to use IaaS
  - Context Broker: turnkey virtual clusters
  - Also: protocol adapters, account managers, and scaling tools
- Allow developers to experiment with Nimbus
  - For research or usability/performance improvements
  - Open source, extensible software
  - Community extensions and contributions: UVIC (monitoring), IU (EBS, research), Technical University of Vienna (privacy, research)
The Nimbus Toolkit: http://workspace.globus.org
7 Clouds for Science: a Personal Perspective
[Timeline, 2004-2010, with milestones:]
- "A Case for Grid Computing on VMs"; In-Vigo, VIOLIN, DVEs, dynamic accounts, policy-driven negotiation
- Xen released
- First Nimbus release
- EC2 released
- Science Clouds available
- Nimbus Context Broker release
- First STAR production run on EC2
- Experimental clouds for science
- OOI starts
8 The STAR Experiment
Work by Jerome Lauret, Levente Hajdu, Lidia Didenko (BNL), and Doug Olson (LBNL)
- STAR: a nuclear physics experiment at Brookhaven National Laboratory
- Studies fundamental properties of nuclear matter
- Problems: complexity, consistency, availability
9 STAR Virtual Clusters
- Virtual resources
  - A virtual OSG STAR cluster: OSG headnode (gridmapfiles, host certificates, NFS, Torque), worker nodes: SL4 + STAR
  - One-click virtual cluster deployment via Nimbus Context Broker
- From Science Clouds to EC2 runs
- Running production codes since 2007
- The Quark Matter run: producing just-in-time results for a conference
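The "one-click" idea is that a single cluster description expands into all the individual node launches and their dependencies. The sketch below illustrates that expansion step only; the dictionary keys and image names are illustrative assumptions, not the Context Broker's actual schema.

```python
# Illustrative cluster description: one OSG headnode providing shared
# services, plus SL4 worker nodes that depend on them. Field names and
# image names are invented for this sketch.
CLUSTER_SPEC = {
    "headnode": {"image": "osg-headnode", "count": 1,
                 "provides": ["nfs", "torque-server"]},
    "workers":  {"image": "sl4-star-worker", "count": 4,
                 "requires": ["nfs", "torque-server"]},
}

def expand_launch_requests(spec):
    """Turn the one cluster description into a flat list of per-node requests."""
    requests = []
    for role, cfg in spec.items():
        for i in range(cfg["count"]):
            requests.append({"role": role, "image": cfg["image"], "index": i})
    return requests

for req in expand_launch_requests(CLUSTER_SPEC):
    print(req["role"], req["image"], req["index"])
```

In the real system the broker additionally exchanges the "provides"/"requires" information among the booted VMs so each node can configure itself (e.g., point workers at the Torque server) before the cluster is handed to the user.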
10 Priceless?
- 300+ nodes over ~10 days; ~36,000 compute hours total
  - Instances: 32-bit, 1.7 GB memory
  - EC2 default: 1 EC2 CPU unit
  - High-CPU Medium Instances: 5 EC2 CPU units (2 cores)
- Compute costs: $5,630.30
- Data transfer costs: $136.38
  - Small I/O needs: moved <1 TB of data over the duration
- Storage costs: $4.69
  - Images only; all data transferred at run-time
- Producing the result before the deadline… $5,771.37
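The itemized figures on this slide do add up to the quoted total; a quick arithmetic check:

```python
# Cost components as quoted on the slide (USD).
compute  = 5630.30   # ~36,000 compute hours on EC2
transfer = 136.38    # <1 TB of data moved over the run
storage  = 4.69      # images only; data transferred at run-time

total = round(compute + transfer + storage, 2)
print(total)  # 5771.37
```

At ~36,000 compute hours, the compute line works out to roughly $0.16 per node-hour.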
11 Cloud Bursting
- ALICE: elastically extend a grid
- Elastically extend a cluster
- React to emergency
- OOI: provide a highly available service
12 Hadoop in the Science Clouds
- Hadoop cloud spanning U of Chicago, U of Florida, and Purdue
- Papers:
  - "CloudBLAST: Combining MapReduce and Virtualization on Distributed Resources for Bioinformatics Applications" by A. Matsunaga, M. Tsugawa, and J. Fortes. eScience 2008.
  - "Sky Computing" by K. Keahey, A. Matsunaga, M. Tsugawa, and J. Fortes, to appear in IEEE Internet Computing, September 2009.
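The MapReduce model that Hadoop implements (and that CloudBLAST applies to bioinformatics workloads) can be sketched in a few lines: map emits key/value pairs, the framework groups them by key, and reduce aggregates each group. This toy word count only illustrates the programming model, not Hadoop's distributed execution.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit (word, 1) for every word in every input record.
    for line in records:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key (done by the framework).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the values for each key.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["sky computing", "cloud computing"])))
print(counts)  # {'sky': 1, 'computing': 2, 'cloud': 1}
```

The appeal for science clouds is that this model parallelizes embarrassingly across whatever pool of virtual nodes the cloud provides.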
13 Genomics: Dramatic Growth in the Need for Processing
- Moving from big science (at centers) to many players
- Democratization of sequencing
- We currently have more data than we can handle
[Chart: data volume (GB) produced by 454, Solexa, and next-generation Solexa sequencers; from Folker Meyer, "The M5 Platform"]
14 Ocean Observatory Initiative
- CI: linking the marine infrastructure to science and users
15 Benefits and Concerns
- Benefits
  - Environment per user (group)
  - On-demand access
  - "We don't want to run datacenters!": capital expense -> operational expense
  - Growth and cost management
- Concerns
  - Performance: "Cloud computing offerings are good, but they do not cater to scientific needs"
  - Price and stability
  - Privacy
- Questions
  - What are we hearing from people today?
  - What prevents scientists from using clouds? Is cloud computing suitable for science? Ed Lazowska says 85% suitable.
  - What specific challenges make it hard?
  - What will be my "fusion slide" 5 years from now when I give a talk? What communities will have inspired the next advance in science?
16 Performance (Hardware)
From Walker, ;login: 2008
- Challenges
  - Big I/O degradation, small CPU degradation
  - Ethernet vs. InfiniBand: no OS-bypass drivers, no InfiniBand
- New development: OS-bypass drivers
- Performance is half the problem; variability is worse!
- Relatively small scale: 800 nodes
17 Performance (Configuration)
From Santos et al., Xen Summit 2007
- Trade-off: CPU vs. I/O
- VMM configuration; sharing between VMs; "VMM latency"
- Performance instability
- Multi-core opportunity: a price/performance trade-off
- Ultimately a change in mindset: "performance matters"!
18 From Performance… to Price-Performance
- "Instance" definitions for science: I/O-oriented, well-described
- Co-location of instances: to stack or not to stack
- Price point
  - CPU: on-demand, reserved, spot pricing
  - Data: high/low availability? Data access performance and availability
- Pricing: finer- and coarser-grained
19 Availability, Utilization, and Cost/Price
- Most of science today is done in batch
- The cost of on-demand: overprovisioning or request failure?
- Clouds + HTC = marriage made in heaven?
- Spot pricing
[Chart: example of WestGrid utilization, courtesy of Rob Simmonds]
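The overprovisioning question above can be made concrete with a toy cost model: an owned cluster must be sized (and paid for) at peak demand, while on-demand capacity is paid only for the hours actually used. All numbers below are illustrative assumptions, not measured prices.

```python
HOURS_PER_YEAR = 8760

def owned_cost(peak_nodes, cost_per_node_year):
    # Owned hardware: you provision for the peak regardless of utilization.
    return peak_nodes * cost_per_node_year

def cloud_cost(peak_nodes, utilization, price_per_node_hour):
    # On-demand: you pay only for node-hours actually consumed.
    return peak_nodes * utilization * HOURS_PER_YEAR * price_per_node_hour

# Assumed example: 100-node peak, 30% average utilization (typical of
# bursty batch workloads), $1500/node/year owned, $0.10/node-hour cloud.
owned = owned_cost(peak_nodes=100, cost_per_node_year=1500.0)
cloud = cloud_cost(peak_nodes=100, utilization=0.30, price_per_node_hour=0.10)
print(owned, cloud)
```

Under these assumptions on-demand wins comfortably at low utilization; at sustained high utilization the comparison flips, which is exactly why batch/HTC workloads and spot pricing are an interesting match.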
20 Data in the Cloud
- Storage clouds and SANs
  - AWS Simple Storage Service (S3)
  - AWS Elastic Block Store (EBS)
- Challenges
  - Bandwidth performance and sharing
  - Sharing data between users
  - Sharing storage between instances
  - Availability
  - Data privacy (the really hard issue)
    - Descher et al., "Retaining Data Control in Infrastructure Clouds", ARES (the International Dependability Conference), 2009.
21 What if my IaaS provider doubles their prices tomorrow?
- Cloud markets
- Is computing fungible? Can it be fungible?
- Diverse paradigms: IaaS, PaaS, SaaS, and other *aaS
- Interoperability
- Comparison basis
22 IaaS Cloud Interoperability
- Cloud standards: OCCI (OGF), OVF (DMTF), and many more… (see cloud-standards.org)
- Cloud abstractions: Deltacloud, jclouds, libcloud, and many more…
- Appliances, not images: rBuilder, BCFG2, CohesiveFT, Puppet, and many more…
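The "cloud abstractions" entry above names libraries that put one client interface in front of many providers. The sketch below shows that pattern in miniature; the driver classes and image names are illustrative stand-ins, not any real library's API.

```python
class Driver:
    """Abstract provider interface; concrete drivers fill in the details."""
    def list_images(self):
        raise NotImplementedError

class EC2Driver(Driver):
    def list_images(self):
        return ["ami-osg-headnode", "ami-sl4-worker"]  # canned example data

class NimbusDriver(Driver):
    def list_images(self):
        return ["osg-headnode", "sl4-worker"]  # canned example data

def image_count(driver):
    # Client code is written once, against the abstract interface,
    # so switching providers means swapping the driver, not the client.
    return len(driver.list_images())

print(image_count(EC2Driver()), image_count(NimbusDriver()))
```

This is what makes the provider less of a lock-in risk: if prices double tomorrow, the client code does not have to change, only the driver it is handed.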
23 Can you give us a better deal?
- In terms of… performance, deployment, data access, price
- Based on relevant scientific benchmarks: comprehensive, current, and public
[Chart: The Bitsource, CloudServers vs. EC2 LKC cost by instance]
24 Science Cloud Ecosystem
- Goal: drive the time to science in clouds toward zero
- Scientific appliances
- New tools: "turnkey clusters", cloud bursting, etc.
- Open source is important
- Change in the mindset
  - Education and adaptation
  - "Profiles"
  - New paradigm requires new approaches
  - Teach old dogs some new tricks
26 The FutureGrid Project
- NSF-funded experimental testbed (incl. clouds)
- ~6000 total cores connected by private network
- Apply at:
27 Parting Thoughts
- Opinions split into extremes: "Cloud computing is done!" vs. "Infrastructure for toy problems"; the truth is in the middle
- We know that IaaS represents a viable paradigm for a set of scientific applications…
- …it will take more work to enlarge that set
- Experimentation and open source are a catalyst
- Challenge: let's make it work for science!