
Slide 1: A Model of Computation for MapReduce
Karloff, Suri, and Vassilvitskii (SODA '10)
Presented by Ning Xie

Slide 2: Why MapReduce
- Tera- and petabyte data sets (search engines, internet traffic, bioinformatics, etc.)
- Need parallel computing
- Requirements: easy to program, reliable, distributed

Slide 3: What is MapReduce
- A framework for parallel computing originally developed at Google (before '04)
- Widely adopted; it has become a standard for large-scale data analysis
- Hadoop (the open-source version) is used by Yahoo, Facebook, Adobe, IBM, Amazon, and many academic institutions

Slide 4: What is MapReduce (cont.)
Three-stage operation:
- Map stage: each mapper operates on a single ⟨key; value⟩ pair and outputs any number of new ⟨key; value⟩ pairs
- Shuffle stage: all values associated with an individual key are sent to a single machine (done by the system)
- Reduce stage: each reducer operates on all the values associated with one key and outputs a multiset of ⟨key; value⟩ pairs
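The three stages can be sketched as a one-round simulation in plain Python (a toy sketch, not the distributed runtime; the word-count mapper and reducer are illustrative choices, not from the slides):

```python
from collections import defaultdict

def map_reduce_round(pairs, mapper, reducer):
    """Simulate one MapReduce round on a multiset of (key, value) pairs."""
    # Map stage: each mapper call sees a single pair and may emit any
    # number of new pairs.
    mapped = [kv for pair in pairs for kv in mapper(pair)]
    # Shuffle stage: group all values associated with each key (done
    # automatically by the system in a real deployment).
    groups = defaultdict(list)
    for k, v in mapped:
        groups[k].append(v)
    # Reduce stage: each reducer instance sees one key and all its values.
    return [kv for k, vs in groups.items() for kv in reducer(k, vs)]

# Illustrative use: word count over numbered lines of text.
def wc_mapper(pair):
    _, line = pair
    return [(w, 1) for w in line.split()]

def wc_reducer(key, values):
    return [(key, sum(values))]

lines = [(0, "to be or not"), (1, "to be")]
counts = dict(map_reduce_round(lines, wc_mapper, wc_reducer))
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```

The mapper is stateless, so in a real deployment the map stage runs fully in parallel; only the grouping step needs coordination.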

Slide 5: What is MapReduce (cont.)
- The Map operation is stateless, so it can run in parallel
- The Shuffle stage is done automatically by the underlying system
- The Reduce stage can only start when all Map operations are done (interleaving between sequential and parallel execution)

Slide 6: An example: the k-th frequency moment of a large data set
- Input: x ∈ Σ^n, where Σ is a finite set of symbols
- Let f(σ) be the frequency of symbol σ in x
- Note: Σ_σ f(σ) = n
- Want to compute Σ_σ f(σ)^k

Slide 7: An example (cont.)
- Input to each mapper: ⟨i; x_i⟩, where i is the index
  M1(⟨i; x_i⟩) = ⟨x_i; i⟩
- Input to each reducer: ⟨σ; {i_1, ..., i_{f(σ)}}⟩
  R1(⟨σ; {i_1, ..., i_{f(σ)}}⟩) = ⟨σ; f(σ)^k⟩
- Each second-round mapper: M2(⟨σ; f(σ)^k⟩) = ⟨$; f(σ)^k⟩
- A single reducer: R2(⟨$; {f(σ)^k}_σ⟩) = ⟨$; Σ_σ f(σ)^k⟩
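The two-round computation described on this slide can be simulated directly (a sketch; the shuffle is emulated with a dictionary, and "$" is a sentinel key used to funnel every partial result to a single reducer):

```python
from collections import defaultdict

def shuffle(pairs):
    """Group all values associated with each key."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def kth_frequency_moment(x, k):
    """Compute sum over symbols sigma of f(sigma)**k in two rounds."""
    # Round 1: M1 maps <i; x_i> to <x_i; i>; R1 receives a symbol and
    # all its indices and outputs <sigma; f(sigma)**k>.
    u1 = [(xi, i) for i, xi in enumerate(x)]                           # M1
    r1 = [(sym, len(idxs) ** k) for sym, idxs in shuffle(u1).items()]  # R1
    # Round 2: M2 sends every partial result to the single key "$";
    # the lone reducer R2 sums them up.
    u2 = [("$", fk) for _, fk in r1]                                   # M2
    (_, values), = shuffle(u2).items()
    return sum(values)                                                 # R2

# For x = "aab": f(a) = 2, f(b) = 1, so the 2nd moment is 4 + 1 = 5.
print(kth_frequency_moment("aab", 2))  # -> 5
```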

Slide 8: Formal Definitions
- A MapReduce program consists of a sequence ⟨M_1; R_1; M_2; R_2; ...; M_l; R_l⟩ of mappers and reducers
- The input is U_0, a multiset of ⟨key; value⟩ pairs

Slide 9: Formal Definitions (cont.)
Execution of the program: for r = 1, 2, ..., l:
1. Feed each pair ⟨k; v⟩ in U_{r-1} to mapper M_r; let the output be U'_r
2. For each key k, construct the multiset V_{k,r} = {v : ⟨k; v⟩ ∈ U'_r}
3. For each key k, feed k and some permutation of V_{k,r} to a separate instance of R_r; let U_r be the multiset of ⟨key; value⟩ pairs generated by R_r
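Steps 1-3 can be turned into a small sequential driver (a sketch; the two-round word-count/max program at the bottom is a hypothetical example, not from the slides):

```python
from collections import defaultdict

def run_program(program, u0):
    """Execute a MapReduce program: a list of (mapper, reducer) pairs,
    following steps 1-3 for rounds r = 1, ..., l."""
    u = list(u0)                        # U_0: a multiset of (key, value) pairs
    for mapper, reducer in program:     # round r
        # 1. Feed each pair of U_{r-1} to M_r; collect the output U'_r.
        u_prime = [kv for pair in u for kv in mapper(pair)]
        # 2. For each key k, build the multiset V_{k,r} of its values.
        v = defaultdict(list)
        for key, val in u_prime:
            v[key].append(val)
        # 3. Feed each (k, V_{k,r}) to a separate instance of R_r; the
        #    union of the outputs is U_r.
        u = [kv for key, vals in v.items() for kv in reducer(key, vals)]
    return u

# A hypothetical two-round program: count words, then take the max count.
program = [
    (lambda p: [(w, 1) for w in p[1].split()], lambda k, vs: [(k, sum(vs))]),
    (lambda p: [("max", p[1])],                lambda k, vs: [(k, max(vs))]),
]
out = run_program(program, [(0, "a b a"), (1, "b a")])
# out == [("max", 3)]  because the word "a" occurs 3 times
```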

Slide 10: The MapReduce Class (MRC)
On an input {⟨key; value⟩ pairs} of total size n:
- Memory: each mapper/reducer uses O(n^{1-ε}) space
- Machines: there are Θ(n^{1-ε}) machines available
- Time: each machine runs in time polynomial in n, not polynomial in the length of the input it receives
- Randomized algorithms are allowed for map and reduce
- Rounds: the shuffle is expensive, so the number of rounds is bounded
- MRC^i: number of rounds = O(log^i n)
- DMRC: the deterministic variant

Slide 11: Comparing MRC with PRAM
- The most relevant classical model of parallel computation is the PRAM (Parallel Random Access Machine)
- The corresponding complexity class is NC
- Easy relation: MRC ⊆ P
- Lemma: if NC ≠ P, then MRC ⊄ NC
- Open question: show that DMRC ≠ P

Slide 12: Comparing with PRAM (cont.)
Simulation lemma: any CREW (concurrent-read, exclusive-write) PRAM algorithm that uses O(n^{2-2ε}) total memory and O(n^{2-2ε}) processors and runs in time t(n) can be simulated by a DMRC algorithm that runs in O(t(n)) rounds.

Slide 13: Example: Finding an MST
- Problem: find a minimum spanning tree of a dense graph
- The algorithm:
  1. Randomly partition the vertices into k parts
  2. For each pair of vertex sets, find the minimum spanning forest of the subgraph induced by the union of the two sets
  3. Take the union of all the edges in these spanning forests; call the resulting graph H
  4. Compute an MST of H
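The four steps above can be sketched sequentially (a sketch, not the paper's MapReduce implementation; note that each pairwise subgraph keeps intra-part edges too, i.e., it is the subgraph induced by the union of the two parts — dropping them could lose MST edges; assumes k ≥ 2 and distinct weights):

```python
import random
from itertools import combinations

def find(parent, x):
    # Path-halving find for a dict-based union-find.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def mst(nodes, edges):
    """Kruskal's algorithm on (weight, u, v) edges; returns a minimum
    spanning forest of the given node set."""
    parent = {v: v for v in nodes}
    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

def partitioned_mst(n, edges, k, seed=0):
    rng = random.Random(seed)
    # 1. Randomly partition the vertices into k parts.
    parts = [set() for _ in range(k)]
    for v in range(n):
        parts[rng.randrange(k)].add(v)
    # 2. For each pair of parts, compute the minimum spanning forest of
    #    the subgraph induced by their union.
    h = set()
    for a, b in combinations(range(k), 2):
        su = parts[a] | parts[b]
        sub = [e for e in edges if e[1] in su and e[2] in su]
        # 3. H is the union of the edges of all these forests.
        h.update(mst(su, sub))
    # 4. An MST of H is an MST of the original graph.
    return mst(range(n), sorted(h))

edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (10, 0, 3), (11, 0, 2)]
tree = partitioned_mst(4, edges, k=2)
# With distinct weights the MST is unique: the edges of weight 1, 2 and 3.
```

In the MapReduce version, step 2 is where the parallelism lives: each pair of parts goes to its own reducer.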

Slide 14: Finding an MST (cont.)
- The algorithm is easy to parallelize: the spanning forest of each subgraph can be computed in parallel
- Why does it work?
  - Theorem: an MST of H is an MST of G
  - Proof idea: no relevant edge is discarded when sparsifying the input graph G; an edge cut from a pairwise subgraph is the heaviest edge on some cycle, so it belongs to no MST

Slide 15: Finding an MST (cont.)
Why is the algorithm in MRC?
- Let N = |V| and m = |E| = N^{1+c}
- So the input size n satisfies N = n^{1/(1+c)}
- Pick k = N^{c/2}
- Lemma: with high probability, each pairwise subgraph has size O(N^{1+c/2})
- So the input to any reducer has size O(n^{1-ε})
- The size of H is also O(n^{1-ε})
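The exponent arithmetic behind the last two bullets can be made explicit (a sketch, with ε the memory exponent from the MRC definition):

```latex
% Input to a reducer: one pairwise subgraph of size N^{1+c/2},
% rewritten in terms of n = N^{1+c}:
N^{1+c/2} \;=\; n^{\frac{1+c/2}{1+c}}
          \;=\; n^{1-\frac{c}{2(1+c)}}
          \;=\; n^{1-\varepsilon},
\qquad \varepsilon \;=\; \frac{c}{2(1+c)} \,.
% Size of H: at most \binom{k}{2} pairs, each contributing a spanning
% forest with fewer than 2N/k edges, so
|E(H)| \;\le\; k^{2}\cdot\frac{2N}{k} \;=\; 2kN \;=\; 2N^{1+c/2}
       \;=\; O\!\left(n^{1-\varepsilon}\right).
```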

Slide 16: Functions Lemma
- A very useful building block for designing MapReduce algorithms
- Definition [MRC-parallelizable function]: let S be a finite set. A function f on S is MRC-parallelizable if there are functions g and h such that, for any partition S = T_1 ∪ T_2 ∪ ... ∪ T_k:
  - f can be written as f(S) = h(g(T_1), g(T_2), ..., g(T_k))
  - g and h can each be represented in O(log n) bits
  - g and h can be computed in time polynomial in |S|, and every possible output of g can be expressed in O(log n) bits
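A concrete instance of the definition (the choice f = sum is an illustrative assumption, not from the slides): for f(S) = Σ_{s∈S} s, take g to be the sum over one part and h the sum of the partial results.

```python
def f(s):
    # The target function: the sum of a finite multiset of numbers.
    return sum(s)

def g(t):
    # g computes the partial result on one part T_i of the partition.
    return sum(t)

def h(partials):
    # h combines the k partial results into f(S).
    return sum(partials)

S = [3, 1, 4, 1, 5, 9, 2, 6]
partition = [S[0:3], S[3:5], S[5:8]]         # T1, T2, T3 with union S
assert h([g(t) for t in partition]) == f(S)  # both sides equal 31
```

Each partial sum is bounded by the total, so every output of g fits in O(log n) bits, matching the last condition of the definition.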

Slide 17: Functions Lemma (cont.)
Lemma (Functions Lemma): Let U be a universe of size n and let S = {S_1, ..., S_k} be a collection of subsets of U, where k ≤ n^{2-3ε} and Σ_{i=1}^{k} |S_i| ≤ n^{2-2ε}. Let F = {f_1, ..., f_k} be a collection of MRC-parallelizable functions. Then the output f_1(S_1), ..., f_k(S_k) can be computed using O(n^{1-ε}) reducers, each with O(n^{1-ε}) space.

Slide 18: Functions Lemma (cont.)
- The power of the lemma:
  - The algorithm designer may focus only on the structure of the subproblem and the input
  - Distributing the input across reducers is handled by the lemma (an existence theorem)
- The proof of the lemma is not easy:
  - It uses universal hashing
  - It uses Chernoff bounds, etc.

Slide 19: Application of the Functions Lemma: s-t connectivity
- Problem: given a graph G and two nodes s and t, are they connected in G?
- Dense graphs: easy; repeatedly square the adjacency matrix
- Sparse graphs?
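The matrix-powering idea for the dense case can be sketched as follows (a sketch: repeated boolean squaring of A ∨ I doubles the reachable path length each time, so O(log n) squarings suffice):

```python
def reachable(adj, s, t):
    """s-t connectivity by repeated boolean squaring of the adjacency
    matrix (given as a 0/1 list of lists)."""
    n = len(adj)
    # r = A OR I: r[i][j] == 1 iff there is a path of length <= 1.
    r = [[1 if i == j else adj[i][j] for j in range(n)] for i in range(n)]
    steps = max(1, (n - 1).bit_length())   # enough squarings to cover n-1 hops
    for _ in range(steps):
        # Boolean matrix square: doubles the covered path length.
        r = [[1 if any(r[i][k] and r[k][j] for k in range(n)) else 0
              for j in range(n)] for i in range(n)]
    return bool(r[s][t])
```

Each squaring is one dense matrix product, which parallelizes naturally; the catch for sparse graphs is that the matrix (and hence the work) is quadratic in the number of nodes.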

Slide 20: A log n-round MapReduce algorithm for s-t connectivity
- Initially every node is active
- For i = 1, 2, ..., O(log n) do:
  - Each active node becomes a leader with probability 1/2
  - For each non-leader active node u, look for a leader node v in the neighborhood of u's current connected component
  - If such a v exists, u becomes passive, and each node in u's connected component is relabeled with v's label
- Output TRUE if s and t have the same label, FALSE otherwise
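The leader-election loop can be sketched sequentially (a sketch of one reading of the slide, not the paper's MapReduce implementation; the generous round count and the in-place relabeling are simplifications):

```python
import random

def st_connected(n, edges, s, t, rounds=None, seed=0):
    """Randomized label-contraction test of s-t connectivity."""
    rng = random.Random(seed)
    label = list(range(n))          # every node starts in its own component
    active = set(range(n))          # labels of still-active components
    if rounds is None:
        # Generous stand-in for the O(log n) round bound.
        rounds = 20 * max(2, n).bit_length() + 20
    for _ in range(rounds):
        if len(active) <= 1:
            break
        # Each active component becomes a leader with probability 1/2.
        leader = {c: rng.random() < 0.5 for c in active}
        for c in list(active):
            if c not in active or leader.get(c, False):
                continue
            # Non-leader component c looks for a leader neighbor.
            target = None
            for u, v in edges:
                cu, cv = label[u], label[v]
                if cu == c and cv != c and leader.get(cv, False):
                    target = cv
                    break
                if cv == c and cu != c and leader.get(cu, False):
                    target = cu
                    break
            if target is not None:
                # c becomes passive: relabel its nodes with the leader's label.
                for w in range(n):
                    if label[w] == c:
                        label[w] = target
                active.discard(c)
    return label[s] == label[t]
```

A FALSE answer is always correct (labels never cross components); a TRUE answer for a connected pair arrives with high probability within the round budget, since each round merges a constant fraction of the remaining active components in expectation.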

Slide 21: Conclusions
- A rigorous model for MapReduce
- Very loose hardware requirements
- A call for more research in this direction

Slide 22: Thank you!
