
1 Big Data Workshop: Day 2, Part III – Hadoop Ecosystem & Tools
KSU Laboratory for Knowledge Discovery in Databases
Kansas State University Olathe, Thursday 14 August 2014
William H. Hsu (http://www.cis.ksu.edu/~bhsu)
Laboratory for Knowledge Discovery in Databases, Kansas State University (http://www.kddresearch.org)
Acknowledgements
K-State Manhattan: Majed Alsadhan, Scott Finkeldei, Kyle Hudson, Surya Teja Kallumadi
K-State Olathe: Dr. Prema Arasu, Dana Reinert, Paige Adams, Cathy Danahy, Angela Cummins, Emily Surdez, Quentin New, Amy Burgess

2 Workshop Overview: Topics Covered
Day 1: Overview & Tutorial
- Survey of Big Data: Data, Tools, Methods, & Applications
- Tutorial on MapReduce Algorithms & Tools
Day 2: Hands-On Tutorial & Real-World Examples
- Hadoop Stack in Detail: Hive, Pig, Solr & Lucene, Mesos
- Other Tools & Platforms: Scala, Python
Day 3: Data Mining & Visualization
- More Tools: Spark, Machine Learning (Mahout & Oryx)
- Graphs (Neo4j), Data/Info Visualization (Tableau)

3 Workshop Overview: Goals
Day 1: Survey Real-World Applications & Methods
- Present Apache Hadoop Stack & Its Uses
- Introduce MapReduce Using Hands-On Examples
Day 2: Delve into MapReduce Framework & Hadoop
- Understand Scalding, Python Streaming
- Go Over Basic Common Patterns, Dissect Code
Day 3: Review Tools & State of the Field
- Look at Data Mining & Visualization: Tasks, Methods
- Current Research and Development in Data Science

4 Review - Hadoop Stack: High-Level Overview (2011)
[Figure © 2011, R. Kalakota: http://bit.ly/hadoop-stack-kalakota]

5 What is Hadoop? Hadoop-Driven Digital Preservation
Clemens Neudecker, KB National Library of the Netherlands
SCAPE & OPF Hackathon, Vienna, 2 December 2013

6 Timeline
December 2004: Dean & Ghemawat (Google) publish the MapReduce paper
2005: Doug Cutting and Mike Cafarella create Hadoop, at first only to extend Nutch (the name is derived from Doug's son's toy elephant)
2006: Yahoo runs Hadoop on 5-20 nodes
(This work, slides 5-20, was partially supported by the SCAPE Project. The SCAPE project is co-funded by the European Union under FP7 ICT-2009.4.1, Grant Agreement number 270137.)

7 Timeline
March 2008: Cloudera founded
July 2008: Hadoop wins the terabyte sort benchmark (the first time a Java program won this competition)
April 2009: Amazon introduces "Elastic MapReduce" as a service on S3/EC2
June 2011: Hortonworks founded

8 Timeline
27 December 2011: Apache Hadoop release 1.0.0
June 2012: Facebook claims the "biggest Hadoop cluster", totaling more than 100 petabytes in HDFS
2013: Yahoo runs Hadoop on 42,000 nodes, computing about 500,000 MapReduce jobs per day
15 October 2013: Apache Hadoop release 2.2.0 (YARN)

9 Contributions 2006-2011
(Cf. http://hortonworks.com/blog/reality-check-contributions-to-apache-hadoop/)

10 "Core" Hadoop
- Hadoop Common (formerly Hadoop Core)
- Hadoop MapReduce
- Hadoop YARN (MapReduce 2.0)
- Hadoop Distributed File System (HDFS)

11 The wider Hadoop Ecosystem
- Ambari, ZooKeeper (managing & monitoring)
- HBase, Cassandra (database)
- Hive, Pig (data warehouse and query language)
- Mahout (machine learning)
- Chukwa, Avro, Oozie, Giraph, and many more

12 The wider Hadoop Ecosystem
http://www.slideshare.net/cloudera/the-hadoop-stack-then-now-and-in-the-future-eli-collins-charles-zedlewski-cloudera

13 "Hadoop is a hammer"
"Hadoop is a hammer. Start by figuring out what house you're gonna build." (Alistair Croll)
"If all you have is a hammer, throw away everything that is not a nail!" (Jimmy Lin)

14 MapReduce in 41 words (including "library")
Goal: count the number of books in the library.
Map: You count up shelf #1, I count up shelf #2. (The more people we get, the faster this part goes.)
Reduce: We all get together and add up our individual counts.
(Cf. http://www.chrisstucchio.com/blog/2011/mapreduce_explained.html)
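In code, the same pattern looks like the following minimal word-count mapper and reducer for Hadoop Streaming (one of the Day 2 topics). This is a sketch: the file names are illustrative, and the streaming jar path varies by Hadoop version and install.

    #!/usr/bin/env python
    # mapper.py: emit "word<TAB>1" for every word on stdin.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

    #!/usr/bin/env python
    # reducer.py: sum counts per word. The shuffle sorts lines by key
    # before they reach the reducer, so equal words arrive adjacent.
    import sys

    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").rsplit("\t", 1)
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, 0
        count += int(n)
    if current is not None:
        print("%s\t%d" % (current, count))

A local pipeline mimics the shuffle for testing: cat book.txt | python mapper.py | sort | python reducer.py. On a cluster the job would be submitted with hadoop jar <streaming jar> -input ... -output ... -mapper mapper.py -reducer reducer.py (the exact jar path depends on the installation).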

15 MapReduce in a nutshell
[Figure © Sven Schlarb: Tasks 1-3 run in parallel (map); their output data is combined into an aggregated result (reduce)]

16 MapReduce "v1" issues
- JobTracker as a single point of failure
- Deficiencies in scalability, memory consumption, threading model, reliability, and performance (https://issues.apache.org/jira/browse/MAPREDUCE-278)
- Aim to support programming paradigms other than MapReduce (e.g. BSP)

17 MapReduce vs YARN
(Cf. http://hortonworks.com/blog/office-hours-qa-on-yarn-in-hadoop-2/)

18 When to use Hadoop?
- Generally, when "standard tools" stop working because of sheer data size (rule of thumb: if your data fits on a regular hard drive, you're better off sticking to Python/SQL/Bash/etc.!)
- Aggregation across large data sets: use the power of Reducers! (see the sketch below)
- Large-scale ETL operations (extract, transform, load)
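To illustrate reducer-side aggregation beyond counting, here is a hedged sketch of a streaming reducer that averages a numeric value per key. The key<TAB>value input layout is an assumption; a real job's mapper would emit those pairs, and Hadoop's shuffle delivers them sorted by key.

    #!/usr/bin/env python
    # avg_reducer.py: average a numeric value per key.
    # Assumes lines "key<TAB>value", sorted by key (as the shuffle delivers).
    import sys

    def flush(key, total, n):
        if key is not None:
            print("%s\t%.3f" % (key, total / n))

    key, total, n = None, 0.0, 0
    for line in sys.stdin:
        k, v = line.rstrip("\n").rsplit("\t", 1)
        if k != key:
            flush(key, total, n)
            key, total, n = k, 0.0, 0
        total += float(v)
        n += 1
    flush(key, total, n)

The same skeleton handles any associative aggregate (sum, max, distinct count), which is what makes reducers the workhorse for large-scale aggregation.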

19 Reading
- Tom White: Hadoop: The Definitive Guide (get the 3rd edition for the extra YARN chapter)
- YARN explained (really quite well): http://blog.cloudera.com/blog/2012/02/mapreduce-2-0-in-hadoop-0-23/
- Jimmy Lin: Data-Intensive Text Processing with MapReduce: http://lintool.github.io/MapReduceAlgorithms/ed1n.html

20 Happy Hadooping!

21 Lucene/Solr Architecture
[Architecture diagram: the Apache Lucene core (IndexReader/Searcher for search, IndexWriter for indexing, text analysis) is wrapped by Solr's request handlers (/select, /spell, binary/admin), update handlers and update processors, and response writers (XML, CSV, JSON, binary). Around these sit query parsing, caching, faceting, highlighting, spelling, statistics, more-like-this, clustering, filtering, signature, logging, distributed search, the schema, and index replication, plus a Data Import Handler (SQL/RSS) and an Extracting Request Handler (PDF/Word) built on Apache Tika.]

22 Lucene/Solr plugins
- RequestHandlers – handle a request at a URL like /select
- SearchComponents – parts of a SearchHandler, a componentized request handler
  - Includes Query, Facet, Highlight, Debug, Stats
  - Distributed Search capable
- UpdateHandlers – handle an indexing request
- Update Processor Chains – per-handler componentized chains that handle updates
- Query Parser plugins
  - Mix and match query types in a single request
  - Function plugins for Function Query
- Text Analysis plugins: Analyzers, Tokenizers, TokenFilters
- ResponseWriters – serialize & stream the response to the client

23 Lucene/Solr Query Plugin Architecture
[Diagram: schema.xml declaratively defines field types and analyzers, e.g. a "title" field of type text1 whose analyzer chains a WhitespaceTokenizer, SynonymFilter, Porter stemmer, and a CustomFilter, while a "cust1" field uses a completely custom Analyzer class not built from tokenizer/filters. solrconfig.xml registers QParser plugins (Lucene, DisMax, XML, Function, Function Range, MyCustom) and functions for Function Query (sqrt, sum, pow, max, log, custom).]
Declarative analysis per field: a Tokenizer splits text, TokenFilters transform tokens, an Analyzer for completely custom handling, with separate query / index analyzers.
QParser plugins support different query syntaxes and different query execution; Function Query supports pluggable custom functions; excellent support for nesting/mixing different query types in the same request.
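To make the declarative analysis concrete, here is a sketch of a schema.xml field type following the chain in this slide (whitespace tokenizer, synonyms, Porter stemming). The type and field names are illustrative, the stock Solr factory classes stand in for the slide's CustomFilter, and synonyms.txt is assumed to exist in the core's conf directory.

    <fieldType name="text_sketch" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.PorterStemFilterFactory"/>
      </analyzer>
    </fieldType>

    <field name="title" type="text_sketch" indexed="true" stored="true"/>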

24 Lucene/Solr Request Plugins
[Diagram: a /select RequestHandler composed of Query, Facet, Highlight, Debug, and Distributed Search components, plus additional plug-and-play search components (MoreLikeThis, Statistics, Terms, Spellcheck, TermVector, QueryElevation, Clustering, custom). Response writers (XML, XSLT, JSON, binary, custom) serialize the result. Non-component request handlers such as /admin/luke and custom handlers at paths like /mypath sit alongside.]
Example: http://.../select?q=cheese&wt=json returns a JSON query response ({"response": {"docs": [...]}}).
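A minimal client-side sketch of that request in Python, standard library only. The host, port, and path are assumptions: Solr's bundled example server listens on localhost:8983, and the /select path may include a core name depending on the version.

    # query_solr.py: sketch of hitting Solr's /select handler (Python 3).
    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    params = urlencode({
        "q": "cheese",         # the query from the slide
        "wt": "json",          # pick the JSON response writer
        "facet": "true",       # enable the facet search component
        "facet.field": "cat",  # "cat" is an illustrative field name
    })
    url = "http://localhost:8983/solr/select?" + params
    with urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    for doc in data["response"]["docs"]:
        print(doc)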

25 Lucene/Solr Indexing
[Diagram: documents reach the Lucene index through update handlers: the XML Update Handler (/update), CSV Update Handler (/update/csv), XML update with a custom processor chain (/update/xml), the Extracting RequestHandler for PDF/Word via HTTP POST (/update/extract), and the Data Import Handler, which pulls from a SQL database or RSS feed with simple transforms. Each handler runs a per-handler Update Processor Chain (e.g. remove duplicates, logging, custom transform, index processor), and text fields pass through the schema's Analyzers on their way into the Lucene index.]
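And the indexing side, as a hedged sketch: posting one document to the XML update handler and committing. The URL and field names are illustrative, and the schema must define the fields used.

    # index_doc.py: sketch of posting a document to /update (Python 3).
    from urllib.request import Request, urlopen

    DOC = """<add>
      <doc>
        <field name="id">book-001</field>
        <field name="title">Hadoop: The Definitive Guide</field>
      </doc>
    </add>"""

    def post(xml):
        req = Request(
            "http://localhost:8983/solr/update",  # path assumed; varies by core
            data=xml.encode("utf-8"),
            headers={"Content-Type": "text/xml"},
        )
        return urlopen(req).read()

    post(DOC)
    post("<commit/>")  # make the new document visible to searchers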

26 Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center
Benjamin Hindman, Andy Konwinski, Matei Zaharia, Ali Ghodsi, Anthony Joseph, Randy Katz, Scott Shenker, Ion Stoica
University of California, Berkeley

27 Background
Rapid innovation in cluster computing frameworks: Dryad, Pregel, Percolator, CIEL

28 Problem
Rapid innovation in cluster computing frameworks
No single framework optimal for all applications
Want to run multiple frameworks in a single cluster
- ...to maximize utilization
- ...to share data between frameworks

29 Where We Want to Go
[Diagram: Hadoop, Pregel, and MPI on a shared cluster. Today: static partitioning. Mesos: dynamic sharing.]

30 Solution
Mesos is a common resource sharing layer over which diverse frameworks can run.
[Diagram: Hadoop and Pregel tasks interleaved across nodes on top of Mesos, instead of on dedicated per-framework nodes]

31 Other Benefits of Mesos
Run multiple instances of the same framework
- Isolate production and experimental jobs
- Run multiple versions of the framework concurrently
Build specialized frameworks targeting particular problem domains
- Better performance than general-purpose abstractions

32 Outline
- Mesos Goals and Architecture
- Implementation
- Results
- Related Work

33 Mesos Goals
- High utilization of resources
- Support diverse frameworks (current & future)
- Scalability to 10,000s of nodes
- Reliability in face of failures
Resulting design: small microkernel-like core that pushes scheduling logic to frameworks

34 Design Elements
Fine-grained sharing:
- Allocation at the level of tasks within a job
- Improves utilization, latency, and data locality
Resource offers:
- Simple, scalable application-controlled scheduling mechanism

35 Element 1: Fine-Grained Sharing
[Diagram: coarse-grained sharing (HPC) gives Frameworks 1-3 static blocks of nodes; fine-grained sharing (Mesos) interleaves tasks of all three frameworks across nodes, over a shared storage system (e.g. HDFS)]
+ Improved utilization, responsiveness, data locality

36 Element 2: Resource Offers
Option: global scheduler
- Frameworks express needs in a specification language; a global scheduler matches them to resources
+ Can make optimal decisions
– Complex: language must support all framework needs
– Difficult to scale and to make robust
– Future frameworks may have unanticipated needs

37 Element 2: Resource Offers
Mesos: resource offers
- Offer available resources to frameworks; let them pick which resources to use and which tasks to launch
+ Keeps Mesos simple, lets it support future frameworks
– Decentralized decisions might not be optimal

38 Mesos Architecture
[Diagram: MPI and Hadoop jobs submit to their framework schedulers; the Mesos master, via an allocation module, picks a framework to offer resources to; Mesos slaves run MPI executors and their tasks]

39 Mesos Architecture
[Diagram as before: the allocation module picks a framework to offer resources to]
Resource offer = list of (node, availableResources), e.g. { (node1, <CPUs, memory>), (node2, <CPUs, memory>) }

40 Mesos Architecture
[Diagram: the framework scheduler receives the resource offer, performs framework-specific scheduling, and replies with tasks; the Mesos slave launches and isolates executors (MPI and Hadoop executors running tasks)]

41 Optimization: Filters
Let frameworks short-circuit rejection by providing a predicate on resources to be offered
- E.g. "nodes from list L" or "nodes with > 8 GB RAM"
- Could generalize to other hints as well
Ability to reject still ensures correctness when needs cannot be expressed using filters

42 Implementation

43 Implementation Stats
- 20,000 lines of C++
- Master failover using ZooKeeper
- Frameworks ported: Hadoop, MPI, Torque
- New specialized framework: Spark, for iterative jobs (up to 20× faster than Hadoop)
- Open source in the Apache Incubator

44 Users
- Twitter uses Mesos on > 100 nodes to run ~12 production services (mostly stream processing)
- Berkeley machine learning researchers are running several algorithms at scale on Spark
- Conviva is using Spark for data analytics
- UCSF medical researchers are using Mesos to run Hadoop and eventually non-Hadoop apps

45 Results
- Utilization and performance vs static partitioning
- Framework placement goals: data locality
- Scalability
- Fault recovery

46 Dynamic Resource Sharing

47 Mesos vs Static Partitioning
Compared performance with a statically partitioned cluster where each framework gets 25% of the nodes:

Framework            Speedup on Mesos
Facebook Hadoop Mix  1.14×
Large Hadoop Mix     2.10×
Spark                1.26×
Torque / MPI         0.96×

48 Data Locality with Resource Offers
Ran 16 instances of Hadoop on a shared HDFS cluster
Used delay scheduling [EuroSys '10] in Hadoop to get locality (wait a short time to acquire data-local nodes)
Result: 1.7× improvement

49 Scalability
Mesos only performs inter-framework scheduling (e.g. fair sharing), which is easier than intra-framework scheduling
Result: scaled to 50,000 emulated slaves, 200 frameworks, 100,000 tasks (30 s task length)

50 Fault Tolerance
Mesos master has only soft state: the list of currently running frameworks and tasks
Rebuilt when frameworks and slaves re-register with the new master after a failure
Result: fault detection and recovery in ~10 sec

51 Related Work
HPC schedulers (e.g. Torque, LSF, Sun Grid Engine)
- Coarse-grained sharing for inelastic jobs (e.g. MPI)
Virtual machine clouds
- Coarse-grained sharing similar to HPC
Condor
- Centralized scheduler based on matchmaking
Parallel work: Next-Generation Hadoop
- Redesign of Hadoop to have per-application masters
- Also aims to support non-MapReduce jobs
- Based on a resource request language with locality preferences

52 Conclusion
Mesos shares clusters efficiently among diverse frameworks thanks to two design elements:
- Fine-grained sharing at the level of tasks
- Resource offers, a scalable mechanism for application-controlled scheduling
Enables co-existence of current frameworks and development of new specialized ones
In use at Twitter, UC Berkeley, Conviva, and UCSF

53 Backup Slides

54 Framework Isolation
Mesos uses OS isolation mechanisms, such as Linux containers and Solaris projects
Containers currently support CPU, memory, IO, and network bandwidth isolation
Not perfect, but much better than no isolation

55 Analysis
Resource offers work well when:
- Frameworks can scale up and down elastically
- Task durations are homogeneous
- Frameworks have many preferred nodes
These conditions hold in current data analytics frameworks (MapReduce, Dryad, ...)
- Work is divided into short tasks to facilitate load balancing and fault recovery
- Data is replicated across multiple nodes

56 Revocation
Mesos allocation modules can revoke (kill) tasks to meet organizational SLOs
The framework is given a grace period to clean up
A "guaranteed share" API lets frameworks avoid revocation by staying below a certain share

57 Mesos API
Scheduler Callbacks:
- resourceOffer(offerId, offers)
- offerRescinded(offerId)
- statusUpdate(taskId, status)
- slaveLost(slaveId)
Scheduler Actions:
- replyToOffer(offerId, tasks)
- setNeedsOffers(bool)
- setFilters(filters)
- getGuaranteedShare()
- killTask(taskId)
Executor Callbacks:
- launchTask(taskDescriptor)
- killTask(taskId)
Executor Actions:
- sendStatus(taskId, status)
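To show how a framework drives this API, below is an illustrative Python sketch of the offer/reply loop using the callback and action names from this slide. The data shapes (offers as (node, resources) pairs, task dicts) and the filter predicate are assumptions for illustration; the real Mesos bindings differ.

    # Sketch of a framework scheduler against the slide's API names.
    class MyScheduler:
        def __init__(self, driver, work_queue):
            self.driver = driver      # exposes the Scheduler Actions
            self.work = work_queue    # commands waiting to be launched
            self.launched = {}        # taskId -> command, for retries
            self.next_id = 0
            # Filter (see the Filters slide): only offer big-memory nodes.
            self.driver.setFilters(lambda res: res["mem_gb"] > 8)

        def resourceOffer(self, offerId, offers):
            # Callback: pick which offered resources to use, reply with tasks.
            tasks = []
            for node, res in offers:  # offer = list of (node, resources)
                if self.work and res["cpus"] >= 1:
                    cmd = self.work.pop(0)
                    task_id = "task-%d" % self.next_id
                    self.next_id += 1
                    self.launched[task_id] = cmd
                    tasks.append({"id": task_id, "node": node, "cmd": cmd})
            # An empty reply implicitly declines the offer.
            self.driver.replyToOffer(offerId, tasks)

        def statusUpdate(self, taskId, status):
            # Callback: naive retry on failure by re-queueing the command.
            if status == "FAILED":
                self.work.append(self.launched[taskId])

        def slaveLost(self, slaveId):
            print("lost slave %s; its tasks will fail and be retried" % slaveId)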

58 Hadoop Ecosystem
We covered these starting Day 1; today (Day 2) & next week (Day 3) we cover more of these.
Adapted from slide © 2013, M. Eltabakh, Worcester Polytechnic Institute: http://bit.ly/hadoop-ecosystem-pig-eltabakh

