
1 CS506/606: Problem Solving with Large Clusters
Zak Shafran, Richard Sproat
Spring 2011
Introduction
URL: http://www.csee.ogi.edu/~zak/cs506-pslc/

2 Purpose of Course
This course aims to provide theoretical foundations and practical experience in distributed algorithms. Examples will be drawn from speech and language processing, machine learning, optimization, and graph theory. Though we will make heavy use of MapReduce and Hadoop, this is not a course on Hadoop.

3 Structure of Course
– Introductory lectures
– Reading discussions: students will take turns presenting papers and will be responsible for up to 2 papers
– Homework assignments
– In-class discussion of assignment solutions by students, and in-class laboratory projects
– Course project: there will be no final exam. Instead, the course requires a final project of interest to the student, chosen in consultation with the instructor. The project requires a written report and a final presentation.

4 MapReduce
"How is Condor different from MapReduce?" Condor (and qsub, and their kin) is a system for parallelizing serial programs:
– It makes no assumptions about the input-output behavior of the programs, nor does it directly support combination of the outputs
– The user decides how to split up the task vis-à-vis the input data

5 MapReduce
MapReduce provides a framework whereby:
– data are first processed by multiple instances of a mapper; the system decides how data are assigned to mappers
– the output of a mapper is a set of key-value pairs, which are then passed to multiple instances of a reducer, which aggregate the results of the mappers

6 MapReduce Details
Note: unless otherwise noted, all figures are from Jimmy Lin & Chris Dyer, Data-Intensive Text Processing with MapReduce, Morgan & Claypool, 2010.

7 Working Assumptions
– Assume failures are common
– Move processing to the data
– Process data sequentially; avoid random access
– Hide system-level details from the application developer
– Seamless scalability

8 Functional Programming: Map and Fold
(figure)

9 Functional Programming in Lisp
map and fold:
> (defun square (n) (* n n))
SQUARE
> (defun sum (n1 n2) (+ n1 n2))
SUM
> (reduce 'sum (map 'list 'square '(1 2 3)))
14

10 MapReduce
Mapper and reducer have the signatures (in Lin & Dyer's notation):
map: (k1, v1) → [(k2, v2)]
reduce: (k2, [v2]) → [(k3, v3)]
– Mappers emit key-value pairs in parallel
– Output of the mappers is shuffled and sorted by key
– Tuples with the same key are passed to the same reducer
– Reducers output lists of key-value pairs

11 Simplified View of MapReduce
(figure)

12 Simple Word Counter
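The word-counter figure is not reproduced in this transcript. As a minimal sketch of the baseline algorithm it depicts, in Python rather than the book's pseudocode (all names here are illustrative):

from collections import defaultdict

def mapper(docid, text):
    # Emit (word, 1) for every token in the document.
    for word in text.split():
        yield (word, 1)

def reducer(word, counts):
    # All counts for a given word arrive at one reducer; sum them.
    yield (word, sum(counts))

# Simulate the framework on a toy corpus: shuffle/sort groups the mapper
# output by key, then each group is handed to a reducer.
docs = {"d1": "a rose is a rose", "d2": "is a rose a rose"}
groups = defaultdict(list)
for docid, text in docs.items():
    for key, value in mapper(docid, text):
        groups[key].append(value)
for word in sorted(groups):
    for pair in reducer(word, groups[word]):
        print(pair)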

13 Partitioners and Combiners
– Partitioners divide up the intermediate key space and assign keys to reducers. This is commonly done by hashing the key and assigning modulo the number of reducers (a sketch follows below)
– For many tasks some reducers may end up getting much more work than others. Why?
– Combiners are a further optimization that allows for local aggregation before shuffle/sort
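A minimal sketch of the hashing scheme just described (an illustration of the idea, not Hadoop's actual HashPartitioner source):

def partition(key, num_reducers):
    # Hash the key and take it modulo the number of reducers. Skewed key
    # distributions (e.g., very frequent words) can still send far more
    # values to one reducer than to the others.
    return hash(key) % num_reducers

Hadoop's default partitioner applies the same idea using the key's hashCode().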

14 Fuller View of MapReduce
(figure)

15 Important Points
– Key-value pairs with the same key will be sent to the same reducer, but there is no guarantee of which reducer will be assigned which key
– Combiners must accept and emit data in the same format as the output of the mapper
– There is no guarantee how many times a combiner will run, if at all

16 Programmer has little control over:
– Where a mapper or reducer runs (i.e., on which node in the cluster)
– When a mapper or reducer begins or finishes
– Which input key-value pairs are processed by a specific mapper
– Which intermediate key-value pairs are processed by a specific reducer
(Lin & Dyer, p. 37)

17 Programmer can control:
– The ability to construct complex data structures as keys and values to store and communicate partial results
– The ability to execute user-specified initialization code at the beginning of a map or reduce task, and user-specified termination code at the end of a map or reduce task
– The ability to preserve state in both mappers and reducers across multiple input or intermediate keys
– The ability to control the sort order of intermediate keys, and therefore the order in which a reducer will encounter particular keys
– The ability to control the partitioning of the key space, and therefore the set of keys that will be encountered by a particular reducer
(Lin & Dyer, p. 38)

18 Word Counting Again
Problem: each word encountered in the collection gets passed across the network to the reducers

19 Mapper-side Aggregation
(figure)

20 Mapper-side Aggregation Across Documents
(figure)

21 Issues with Mapper-side Aggregation
– Behavior may depend on the order in which key-value pairs are encountered
– There is a scalability bottleneck: one must have enough memory for the data structures that store the counts
  – Heaps' law predicts that vocabularies never stop growing
  – Common work-arounds include flushing the data structures when they grow too large

22 Example with Combiners
(figure)

23 Combiner Implementation: First Version
(figure)

24 Combiner Implementation: Correct Version
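Neither implementation figure survives in the transcript. In the book, the running example for this pattern is computing the mean of the values associated with each key; assuming that is what the figures show, here is a sketch of why the correct version works: the mapper emits partial (sum, count) pairs, so the combiner accepts and emits data in the same format as the mapper's output (cf. slide 15), and partial sums may be combined any number of times, or not at all.

def mapper(key, value):
    # Emit a partial (sum, count) pair rather than the raw value.
    yield (key, (value, 1))

def combiner(key, pairs):
    # Locally aggregate partial pairs; input and output formats match the
    # mapper's output, so running zero or more times is safe.
    total = count = 0
    for s, c in pairs:
        total += s
        count += c
    yield (key, (total, count))

def reducer(key, pairs):
    # Same aggregation as the combiner, but emit the final mean.
    total = count = 0
    for s, c in pairs:
        total += s
        count += c
    yield (key, total / count)

A version that computed means inside the combiner would be incorrect: the mean of means is not in general the overall mean, and a combiner may run any number of times, so the result would depend on the execution plan.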

25 In-Mapper Combining
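The figure is not reproduced; below is a minimal sketch of the in-mapper combining pattern applied to word count, preserving a dictionary of counts across input records and emitting only when the mapper finishes (class and method names are illustrative, not Hadoop's API):

from collections import defaultdict

class WordCountMapper:
    def __init__(self):
        # State preserved across calls to map(): per-word partial counts.
        self.counts = defaultdict(int)

    def map(self, docid, text):
        # Aggregate locally; emit nothing yet.
        for word in text.split():
            self.counts[word] += 1

    def close(self):
        # Emit one (word, count) pair per distinct word this mapper saw.
        for word, count in self.counts.items():
            yield (word, count)

As slide 21 notes, the dictionary must fit in memory; a common work-around is to flush and emit it whenever it grows past a threshold.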

26 Word Co-occurrences: Pairs
(figure)

27 Word Co-occurrences: Stripes
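Neither co-occurrence figure is reproduced; a compact sketch of the two mappers, treating every other word on the same line as a co-occurrence (the real algorithms use a neighborhood window; names are illustrative):

from collections import defaultdict

def pairs_mapper(docid, line):
    # Pairs: emit ((w, u), 1) for each co-occurring pair of words.
    words = line.split()
    for i, w in enumerate(words):
        for j, u in enumerate(words):
            if i != j:
                yield ((w, u), 1)

def stripes_mapper(docid, line):
    # Stripes: emit one associative array per word, mapping each
    # co-occurring word to its local count.
    words = line.split()
    for i, w in enumerate(words):
        stripe = defaultdict(int)
        for j, u in enumerate(words):
            if i != j:
                stripe[u] += 1
        yield (w, dict(stripe))

The pairs reducer simply sums counts; the stripes reducer sums the dictionaries element-wise. Stripes produce fewer but larger intermediate key-value pairs, which is the trade-off behind the efficiency discussion on slides 28-29.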

28 Efficiency Issues
(figure)

29 Efficiency Issues (continued)
(figure)

30 Relative Frequencies
– Advantage of the stripes approach: counts of all words co-occurring with each target word are in the stripes
– A special partitioner is needed for the pairs approach: it must ensure that all of the (w, x) pairs get sent to the same reducer (a sketch follows below)
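A minimal sketch of such a partitioner: hash only the left element of the pair, so every (w, x) key reaches the same reducer regardless of x.

def pair_partitioner(key, num_reducers):
    w, _ = key                       # key is a (w, x) pair
    return hash(w) % num_reducers    # partition on w alone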

31 The (w, *) Key: "Order Inversion"
Insight: convert the computation sequence into a sorting problem
(figure)
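A sketch of the trick in illustrative Python (it assumes the pair partitioner above and a sort order in which "*" precedes every real word): the mapper additionally emits a special (w, *) key carrying w's marginal count, so the reducer sees the denominator before any joint count arrives.

def mapper(docid, line):
    words = line.split()
    for i, w in enumerate(words):
        others = words[:i] + words[i + 1:]
        # Marginal first: total co-occurrences of w on this line.
        yield ((w, "*"), len(others))
        for u in others:
            yield ((w, u), 1)

class RelativeFrequencyReducer:
    def __init__(self):
        self.marginal = 0

    def reduce(self, key, counts):
        w, u = key
        if u == "*":
            # Arrives before any (w, u) key, thanks to the sort order.
            self.marginal = sum(counts)
            return []
        return [(key, sum(counts) / self.marginal)]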

32 Secondary Sorting
– Google's MapReduce implementation allows for a secondary sort on values; Hadoop doesn't
– Sensor data: emit the sensor id and time together as the key, with the reading as the value, plus a custom partitioner (sketched below)
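A sketch of this value-to-key conversion (names illustrative): the timestamp moves from the value into a composite key, the partitioner considers only the sensor id, and the framework's sort then delivers each sensor's readings to a single reducer in time order.

def mapper(record):
    sensor_id, timestamp, reading = record
    # Composite key: the framework sorts by (sensor_id, timestamp) ...
    yield ((sensor_id, timestamp), reading)

def partitioner(key, num_reducers):
    # ... but we partition on sensor_id alone, so all readings for a
    # sensor reach the same reducer.
    sensor_id, _ = key
    return hash(sensor_id) % num_reducers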

33 Relational Joins
Two relations, S and T:
(figure)

34 Reduce-side Join
– One-to-one join (figure)
– One-to-many join: do the sort and partition before passing to the reducer (figure)
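A sketch of the one-to-one case (illustrative): each mapper tags its tuples with the relation they came from, the join key becomes the intermediate key, and the reducer pairs up the one tuple it receives from each side.

def map_s(tup):
    join_key, rest = tup[0], tup[1:]
    yield (join_key, ("S", rest))

def map_t(tup):
    join_key, rest = tup[0], tup[1:]
    yield (join_key, ("T", rest))

def join_reducer(join_key, tagged):
    # One-to-one: at most one tuple arrives from each relation.
    s_rest = t_rest = None
    for tag, rest in tagged:
        if tag == "S":
            s_rest = rest
        else:
            t_rest = rest
    if s_rest is not None and t_rest is not None:
        yield (join_key, s_rest + t_rest)

For the one-to-many case the same idea applies, but the S tuple must be seen first; that is exactly the sort-and-partition arrangement (value-to-key conversion again) mentioned on the slide.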

35 Reduce-side Join
– Many-to-many join: the basic insight is to repartition on the join key
– Inefficient, since it requires shuffling both datasets across the network (Lin & Dyer, p. 62)

36 Map-side Join
Map over one of the datasets (the larger one) and, inside the mapper, read the corresponding part of the other dataset to perform the merge join (Lin & Dyer, p. 62). No reducer is needed.
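A sketch of the merge step (illustrative; it assumes both inputs are sorted by join key, partitioned identically, and that keys are unique on the smaller side; how the mapper opens the matching file split is elided):

def merge_join(larger, smaller):
    # Both arguments: iterators over (key, tuple) pairs sorted by key.
    it = iter(smaller)
    t_key, t_val = next(it, (None, None))
    for s_key, s_val in larger:
        # Advance the smaller side until it catches up with s_key.
        while t_key is not None and t_key < s_key:
            t_key, t_val = next(it, (None, None))
        if t_key == s_key:
            yield (s_key, (s_val, t_val))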

37 Inverted Indexing
Terms are associated with a list of documents and payloads: information about the occurrences of the term in each document.

38 Inverted Indexing
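The pseudocode figure is not reproduced; a minimal sketch of the baseline indexer, with term frequency standing in for the payload (names illustrative):

from collections import Counter

def mapper(docid, text):
    # One posting per distinct term in the document: (docid, term frequency).
    for term, tf in Counter(text.split()).items():
        yield (term, (docid, tf))

def reducer(term, postings):
    # Baseline assumption: the whole postings list fits in memory, so it
    # can be buffered and sorted by document id before being written out.
    yield (term, sorted(postings))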

39 Illustration of Baseline Algorithm
(figure)

40 Problems with Baseline
– The baseline algorithm assumes all postings associated with the same term can be held in memory; this is not going to work for large sets of documents (e.g., the Web)
– Instead of emitting postings keyed on the term alone, emit them keyed on (term, docid), letting the framework do the sorting
– This requires a custom partitioner to ensure that every key for a given term gets sent to the same reducer

41 Scalable Inverted Indexer
(figure)

42 Index Compression
– Naïve representation: [(5, 2), (7, 3), (12, 1), (49, 1), (51, 2), ...]
– First trick: encode differences (d-gaps): [(5, 2), (2, 3), (5, 1), (37, 1), (2, 2), ...]
– d-gaps could be as large as |D| - 1
– Need a method that encodes smaller numbers with less space
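A sketch of the differencing step on the slide's example (illustrative):

def to_dgaps(postings):
    # [(5, 2), (7, 3), (12, 1), ...] -> [(5, 2), (2, 3), (5, 1), ...]
    gaps, prev = [], 0
    for docid, tf in postings:
        gaps.append((docid - prev, tf))
        prev = docid
    return gaps

print(to_dgaps([(5, 2), (7, 3), (12, 1), (49, 1), (51, 2)]))
# [(5, 2), (2, 3), (5, 1), (37, 1), (2, 2)]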

43 Golomb and γ Codes
(figure: length in unary, remainder in binary)

44 Golomb Codes
(figure; Lin & Dyer, p. 78)
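A sketch of the two code families (illustrative; conventions for the unary part vary, and Golomb codes are shown only in their power-of-two special case, the Rice code, which avoids the truncated-binary remainder of the general case):

def unary(n):
    # n >= 1, encoded as n - 1 ones followed by a terminating zero.
    return "1" * (n - 1) + "0"

def gamma(n):
    # Elias gamma code for n >= 1: the length of n's binary representation
    # in unary, then that representation minus its leading 1 bit.
    b = bin(n)[2:]
    return unary(len(b)) + b[1:]

def rice(n, shift):
    # Golomb code with M = 2**shift for n >= 1: quotient in unary,
    # remainder as a fixed-width binary field.
    q, r = (n - 1) >> shift, (n - 1) & ((1 << shift) - 1)
    return unary(q + 1) + format(r, "0%db" % shift)

print(gamma(9))    # 1110001: length 4 in unary, then offset 001
print(rice(9, 2))  # 11000: quotient 2 in unary, remainder 0 in 2 bits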

45 Index Encoding
– d-gaps are encoded with Golomb codes
– Term frequencies are encoded with γ codes

46 Retrieval
MapReduce is a poor solution to retrieval: retrieval depends upon random access, exactly the opposite of the serial access model assumed by MapReduce. Two approaches:
– Term partitioning: each server is responsible for a subset of the terms
– Document partitioning: each server is responsible for a subset of the documents

47 Term vs. Document Partitioning
(figure)

48 Term vs. Document Partitioning
– Document partitioning requires a query broker
– Term partitioning: for a query containing three terms q1, q2, q3, the broker forwards the query to the server that holds the postings for q1. That server traverses the appropriate postings list and computes partial query-document scores, stored in accumulators. The accumulators are then passed to the server that holds the postings associated with q2 for additional processing, and so on (Lin & Dyer, p. 81)
– Google uses document partitioning

49 Hadoop
Hadoop Distributed File System (HDFS), a master-slave design:
– Namenode (master) manages metadata: directory structure, file-to-block mapping, block locations, permissions
– Datanodes (slaves) manage the actual data blocks
– A client contacts the namenode for a pointer to a block id and datanode, then contacts that datanode directly
– Multiple copies (typically 3) of the data are stored
There is a strong advantage to having a few big files rather than lots of little files:
– More efficient use of namenode memory
– One mapper per file, so lots of little files means lots of mappers
– Lots of little files also means a lot of across-the-network copies during the shuffle/sort phase

50 Hadoop Distributed File System (HDFS)
(figure)

51 Hadoop Architecture
(figure)

52 MapReduce Art
(figure)

53 Reading Assignments
– Lin & Dyer, chs. 1-4
– White, chs. 1-3

