MapReduce Theory, Implementation and Algorithms
Hongfei Yan, School of EECS, Peking University, 7/2/2009
Refer to Aaron Kimball’s slides.

Outline Functional Programming Recap MapReduce Theory & Implementation MapReduce Algorithms

What is Functional Programming? Opinions differ, and it is difficult to give a precise definition, but generally speaking: Functional programming is a style of programming in which the basic method of computation is the application of functions to arguments; A functional language is one that supports and encourages the functional style.

Example Summing the integers 1 to 10 in Java:
total = 0;
for (i = 1; i <= 10; ++i)
  total = total + i;
The computation method is variable assignment.

Example Summing the integers 1 to 10 in Haskell:
sum [1..10]
The computation method is function application.

Why is it Useful? Again, there are many possible answers to this question, but generally speaking: The abstract nature of functional programming leads to considerably simpler programs; It also supports a number of powerful new ways to structure and reason about programs.

What is Hugs? An interpreter for Haskell, and the most widely used implementation of the language; An interactive system, which is well-suited for teaching and prototyping purposes; Hugs is freely available from:

The Standard Prelude When Hugs is started it first loads the library file Prelude.hs, and then repeatedly prompts the user for an expression to be evaluated. For example:
> 2+3*4
14
> (2+3)*4
20

The standard prelude also provides many useful functions that operate on lists. For example:
> length [1,2,3,4]
4
> product [1,2,3,4]
24
> take 3 [1,2,3,4,5]
[1,2,3]

Function Application In mathematics, function application is denoted using parentheses, and multiplication is often denoted using juxtaposition or space:
f(a,b) + c d
Apply the function f to a and b, and add the result to the product of c and d.

In Haskell, function application is denoted using space, and multiplication is denoted using *:
f a b + c*d
As previously, but in Haskell syntax.

Functional Programming Review Functional operations do not modify data structures: –they always create new ones Original data still exists in unmodified form Data flows are implicit in program design Order of operations does not matter

Functional Programming Review
fun foo(l: int list) = sum(l) + mul(l) + length(l)
The order of sum(), mul(), etc. does not matter; they do not modify l.

Functional Updates Do Not Modify Structures
fun append(x, lst) = let val lst' = reverse lst in reverse (x :: lst') end
The append() function above reverses the list, conses the new element onto the front, and reverses the result, which effectively appends x to the end. But it never modifies lst!

Functions Can Be Used As Arguments fun DoDouble(f, x) = f (f x) It does not matter what f does to its argument; DoDouble() will do it twice. A function is called higher-order if it takes a function as an argument or returns a function as a result
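The same idea in Python, as a minimal sketch (the do_double name is ours, not from the slides):

def do_double(f, x):
    # Apply f twice; f can be any single-argument function.
    return f(f(x))

print(do_double(lambda n: n * 2, 5))   # 20
print(do_double(str.strip, " hi "))    # 'hi'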

Map
map f lst: ('a->'b) -> ('a list) -> ('b list)
Creates a new list by applying f to each element of the input list; returns output in order.

Fold
fold f x0 lst: ('a*'b->'b) -> 'b -> ('a list) -> 'b
Moves across a list, applying f to each element plus an accumulator. f returns the next accumulator value, which is combined with the next element of the list.

fold left vs. fold right
Order of list elements can be significant. Fold left moves left-to-right across the list; fold right moves from right-to-left.
SML implementation:
fun foldl f a []      = a
  | foldl f a (x::xs) = foldl f (f(x, a)) xs
fun foldr f a []      = a
  | foldr f a (x::xs) = f(x, (foldr f a xs))
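For readers more comfortable with Python, here is a rough sketch of the same two folds (iterative rather than recursive, an implementation choice of ours); string concatenation makes the direction visible:

def foldl(f, a, xs):
    # SML: foldl f a (x::xs) = foldl f (f(x, a)) xs
    for x in xs:
        a = f(x, a)
    return a

def foldr(f, a, xs):
    # SML: foldr f a (x::xs) = f(x, foldr f a xs)
    for x in reversed(xs):
        a = f(x, a)
    return a

print(foldl(lambda x, a: str(x) + a, "", [1, 2, 3]))  # '321'
print(foldr(lambda x, a: str(x) + a, "", [1, 2, 3]))  # '123'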

Example fun foo(l: int list) = sum(l) + mul(l) + length(l) How can we implement this?

Example (Solved)
fun foo(l: int list) = sum(l) + mul(l) + length(l)
fun sum(lst) = foldl (fn (x,a)=>x+a) 0 lst
fun mul(lst) = foldl (fn (x,a)=>x*a) 1 lst
fun length(lst) = foldl (fn (x,a)=>1+a) 0 lst
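The same three folds translate directly to Python; a hedged sketch using functools.reduce (note reduce passes the accumulator first, the opposite argument order from the SML above):

from functools import reduce

def sum_(lst):    return reduce(lambda a, x: a + x, lst, 0)
def mul(lst):     return reduce(lambda a, x: a * x, lst, 1)
def length(lst):  return reduce(lambda a, x: a + 1, lst, 0)

def foo(lst):
    return sum_(lst) + mul(lst) + length(lst)

print(foo([1, 2, 3, 4]))  # 10 + 24 + 4 = 38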

map Implementation
fun map f []      = []
  | map f (x::xs) = (f x) :: (map f xs)
This implementation moves left-to-right across the list, mapping elements one at a time… But does it need to?

Implicit Parallelism In map In a purely functional setting, elements of a list being computed by map cannot see the effects of the computations on other elements. If the order of application of f to the elements of the list is commutative, we can reorder or parallelize execution. This is the “secret” that MapReduce exploits.
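A tiny Python sketch of exploiting that freedom on one machine (illustrative only; this is a process pool, not MapReduce):

from multiprocessing import Pool

def square(x):
    # Pure function: no shared state, so evaluation order is irrelevant.
    return x * x

if __name__ == "__main__":
    with Pool(4) as pool:
        # Work runs in parallel, yet results come back in input order.
        print(pool.map(square, range(10)))  # [0, 1, 4, ..., 81]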

References
http://net.pku.edu.cn/~course/cs501/2008/resource/haskell/functional.ppt
http://net.pku.edu.cn/~course/cs501/2008/resource/haskell/

Outline Functional Programming Recap MapReduce Theory & Implementation MapReduce Algorithms

Motivation: Large Scale Data Processing Want to process lots of data ( > 1 TB) Want to parallelize across hundreds/thousands of CPUs … Want to make this easy

MapReduce Automatic parallelization & distribution Fault-tolerant Provides status and monitoring tools Clean abstraction for programmers

Programming Model Borrows from functional programming Users implement interface of two functions: –map (in_key, in_value) -> (out_key, intermediate_value) list –reduce (out_key, intermediate_value list) -> out_value list
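A sketch of that interface in Python type hints (names and types are illustrative; the real library is C++ and string-typed):

from typing import Iterable, Iterator, Tuple

def map_fn(in_key: str, in_value: str) -> Iterator[Tuple[str, str]]:
    # (in_key, in_value) -> (out_key, intermediate_value) list
    yield in_key, in_value          # placeholder: identity mapper

def reduce_fn(out_key: str, values: Iterable[str]) -> Iterator[str]:
    # (out_key, intermediate_value list) -> out_value list
    yield from values               # placeholder: identity reducer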

map Records from the data source (lines out of files, rows of a database, etc) are fed into the map function as key*value pairs: e.g., (filename, line). map() produces one or more intermediate values along with an output key from the input.

map (in_key, in_value) -> (out_key, intermediate_value) list
[diagram of the map stage]

reduce After the map phase is over, all the intermediate values for a given output key are combined together into a list reduce() combines those intermediate values into one or more final values for that same output key (in practice, usually only one final value per key)

reduce (out_key, intermediate_value list) -> out_value list
[diagram of the reduce stage]

Parallelism map() functions run in parallel, creating different intermediate values from different input data sets reduce() functions also run in parallel, each working on a different output key All values are processed independently Bottleneck: reduce phase can’t start until map phase is completely finished.

Example: Count word occurrences
map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):
  // output_key: a word
  // output_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));

Example vs. Actual Source Code Example is written in pseudo-code Actual implementation is in C++, using a MapReduce library Bindings for Python and Java exist via interfaces True code is somewhat more involved (defines how the input key/values are divided up and accessed, etc.)
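As a rough single-machine simulation in Python (our own sketch, not Google's library), the whole pipeline of map, shuffle, and reduce fits in a few lines:

from collections import defaultdict

def map_fn(doc_name, contents):
    # Emit (word, 1) for every word, mirroring the pseudo-code above.
    for word in contents.split():
        yield word, 1

def reduce_fn(word, counts):
    yield word, sum(counts)

def run_mapreduce(inputs, map_fn, reduce_fn):
    groups = defaultdict(list)
    # Map phase, then shuffle: group intermediate values by key.
    for key, value in inputs:
        for out_key, inter in map_fn(key, value):
            groups[out_key].append(inter)
    # Reduce phase: one call per distinct key.
    results = []
    for out_key, values in sorted(groups.items()):
        results.extend(reduce_fn(out_key, values))
    return results

docs = [("doc1", "the quick brown fox"), ("doc2", "the lazy dog the end")]
print(run_mapreduce(docs, map_fn, reduce_fn))
# [('brown', 1), ('dog', 1), ('end', 1), ('fox', 1), ('lazy', 1), ('quick', 1), ('the', 3)]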

Locality Master program divides up tasks based on location of data: tries to have map() tasks on same machine as physical file data, or at least same rack map() task inputs are divided into 64 MB blocks: same size as Google File System chunks

Fault Tolerance Master detects worker failures –Re-executes completed & in-progress map() tasks –Re-executes in-progress reduce() tasks Master notices particular input key/values cause crashes in map(), and skips those values on re-execution. –Effect: Can work around bugs in third-party libraries!

Optimizations No reduce can start until map is complete: –A single slow disk controller can rate-limit the whole process Master redundantly executes “slow- moving” map tasks; uses results of first copy to finish Why is it safe to redundantly execute map tasks? Wouldn’t this mess up the total computation?

Optimizations “Combiner” functions can run on same machine as a mapper Causes a mini-reduce phase to occur before the real reduce phase, to save bandwidth Under what conditions is it sound to use a combiner?
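A hedged Python sketch of a word-count combiner (illustrative, not a framework API): pre-sum counts on the mapper's machine so one (word, n) pair crosses the network instead of n (word, 1) pairs.

from collections import Counter

def map_fn(doc_name, contents):
    for word in contents.split():
        yield word, 1

def combine(mapper_output):
    # Mini-reduce run locally before the shuffle.
    counts = Counter()
    for word, n in mapper_output:
        counts[word] += n
    return counts.items()

print(sorted(combine(map_fn("doc", "to be or not to be"))))
# [('be', 2), ('not', 1), ('or', 1), ('to', 2)]

This is sound here because addition is associative and commutative; in general a combiner must be, since the framework may run it zero, one, or many times.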

Performance Evaluation
1800 machines
–Gigabit Ethernet links
–Two-level tree-shaped switched network, roughly 100-200 Gbps aggregate bandwidth at the root
–4GB RAM, dual 2GHz Intel processors
–Two 160GB drives
Grep: 10^10 100-byte records (~1 TB)
–Search for a relatively rare 3-character sequence
Sort: 10^10 100-byte records (~1 TB)

Grep Data Transfer Rate

Sort: Normal Execution

Sort: No Backup Tasks

MapReduce Conclusions MapReduce has proven to be a useful abstraction Greatly simplifies large-scale computations at Google Functional programming paradigm can be applied to large-scale applications Focus on problem, let library deal with messy details

References
[Dean and Ghemawat, 2004] J. Dean and S. Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters," presented at OSDI'04: Sixth Symposium on Operating System Design and Implementation, San Francisco, CA, 2004.

Outline Functional Programming Recap MapReduce Theory & Implementation MapReduce Algorithms

Algorithms for MapReduce Sorting Searching Indexing Classification TF-IDF Breadth-First Search / SSSP PageRank Clustering

MapReduce Jobs Tend to be very short, code-wise –IdentityReducer is very common “Utility” jobs can be composed Represent a data flow, more so than a procedure

Sort: Inputs A set of files, one value per line. Mapper key is file name, line number Mapper value is the contents of the line

Sort Algorithm
Takes advantage of reducer properties: (key, value) pairs are processed in order by key; reducers are themselves ordered
Mapper: Identity function for value (k, v) -> (v, _)
Reducer: Identity function (k', _) -> (k', "")

Sort: The Trick
(key, value) pairs from mappers are sent to a particular reducer based on hash(key)
Must pick the hash function for your data such that k1 < k2 => hash(k1) < hash(k2)
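A sketch of such an order-preserving partitioner in Python (assuming keys uniform in [0, 1); a real sort job would sample the key distribution first):

def partition(key, num_reducers):
    # Range partitioning: equal-width buckets over [0, 1), so
    # k1 < k2 implies partition(k1) <= partition(k2), and the
    # concatenated reducer outputs are globally sorted.
    return min(int(key * num_reducers), num_reducers - 1)

print([partition(k, 4) for k in [0.91, 0.07, 0.42, 0.55]])  # [3, 0, 1, 2]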

Final Thoughts on Sort Used as a test of Hadoop’s raw speed Essentially “IO drag race” Highlights utility of GFS

Search: Inputs A set of files containing lines of text A search pattern to find Mapper key is file name, line number Mapper value is the contents of the line Search pattern sent as special parameter

Search Algorithm Mapper: –Given (filename, some text) and “pattern”, if “text” matches “pattern” output (filename, _) Reducer: –Identity function
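In Python, a minimal sketch (our names, not a real framework API):

def search_map(filename, line, pattern):
    # Emit the filename once per matching line; the value is unused.
    if pattern in line:
        yield filename, None

def identity_reduce(filename, values):
    yield filename

print(list(search_map("log.txt", "error: disk failure", "error")))
# [('log.txt', None)]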

Search: An Optimization Once a file is found to be interesting, we only need to mark it that way once Use Combiner function to fold redundant (filename, _) pairs into a single one –Reduces network I/O

Indexing: Inputs A set of files containing lines of text Mapper key is file name, line number Mapper value is the contents of the line

Inverted Index Algorithm Mapper: For each word in (file, words), map to (word, file) Reducer: Identity function
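A single-machine Python sketch of the algorithm (illustrative; sets stand in for the grouped value lists):

from collections import defaultdict

def index_map(filename, line):
    # (file, words) -> (word, file) for each word.
    for word in line.split():
        yield word, filename

def build_index(files):
    # The grouped (word, [files]) pairs are the index itself,
    # which is why the reducer can be the identity function.
    index = defaultdict(set)
    for filename, text in files:
        for word, fname in index_map(filename, text):
            index[word].add(fname)
    return dict(index)

print(build_index([("a.txt", "hello world"), ("b.txt", "hello mapreduce")]))
# {'hello': {'a.txt', 'b.txt'}, 'world': {'a.txt'}, 'mapreduce': {'b.txt'}}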

Inverted Index: Data flow

An Aside: Word Count Word count was described in codelab I Mapper for Word Count is (word, 1) for each word in input line –Strikingly similar to inverted index –Common theme: reuse/modify existing mappers

Homework
Lab 2 – Inverted Index HW
Reading 2 – J. Dean and S. Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters," presented at OSDI'04: Sixth Symposium on Operating System Design and Implementation, San Francisco, CA, 2004.