
Jeffrey D. Ullman Stanford University

 A real story from the CS341 data-mining project class.  The students involved did a wonderful job and got an “A.”  But their first attempt at a MapReduce algorithm caused them problems and led to the development of an interesting theory.

 Data consisted of records for 3000 drugs.  For each drug: a list of the patients taking it, dates, and diagnoses.  About 1MB of data per drug.  The problem was to find drug interactions.  Example: two drugs that, when taken together, increase the risk of heart attack.  We must examine each pair of drugs and compare their data.

 The first attempt used the following plan:  Key = set of two drugs {i, j}.  Value = the record for one of these drugs.  Given drug i and its record R_i, the mapper generates all key-value pairs ({i, j}, R_i), where j is any other drug besides i.  Each reducer receives its key and a list of the two records for that pair: ({i, j}, [R_i, R_j]).
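A minimal sketch of this plan, as plain Python with an in-memory “shuffle” rather than a real MapReduce framework; the function names and looks_like_interaction (a stand-in for the statistical test) are illustrative assumptions, not the students’ actual code.

    from collections import defaultdict

    def map_drug(i, record, num_drugs):
        # Emit ({i, j}, R_i) for every other drug j.  Drug ids are 1..num_drugs.
        for j in range(1, num_drugs + 1):
            if j != i:
                yield (frozenset({i, j}), record)

    def looks_like_interaction(r1, r2):
        return None  # placeholder for the statistical test on two records

    def reduce_pair(pair, records):
        # records is the two-element list [R_i, R_j] for this pair.
        r1, r2 = records
        return (pair, looks_like_interaction(r1, r2))

    def run(drug_records):  # drug_records: {drug id: record}
        shuffled = defaultdict(list)  # simulated shuffle: group values by key
        for i, rec in drug_records.items():
            for key, value in map_drug(i, rec, len(drug_records)):
                shuffled[key].append(value)
        return [reduce_pair(k, v) for k, v in shuffled.items()]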

[Diagram: each mapper sends its drug’s data to every reducer whose pair contains that drug; e.g., the mapper for drug 1 sends Drug 1’s data to the reducers for {1,2} and {1,3}.]

[Diagram: the same dataflow after the shuffle, e.g., the reducer for {1,2} receives exactly two values: Drug 1’s data and Drug 2’s data.]

 3000 drugs  times 2999 key-value pairs per drug  times 1,000,000 bytes per key-value pair  = 9 terabytes communicated over 1Gb/s Ethernet  = 90,000 seconds of network use.
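Spelled out below; the 90,000-second figure works out if one assumes roughly 10 bits on the wire per payload byte, a plausible allowance for protocol overhead (that assumption is ours, not the slide’s).

    drugs = 3000
    bytes_per_value = 1_000_000                # ~1 MB record per key-value pair
    total = drugs * (drugs - 1) * bytes_per_value
    print(total / 1e12)                        # ~9.0 -> about 9 TB
    # 1 Gb/s Ethernet, assuming ~10 bits on the wire per payload byte:
    print(total * 10 / 1e9)                    # ~90,000 seconds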

 The team grouped the drugs into 30 groups of 100 drugs each.  Say G_1 = drugs 1-100, G_2 = drugs 101-200, …, G_30 = drugs 2901-3000.  Let g(i) = the number of the group into which drug i goes.

 A key is a set of two group numbers.  The mapper for drug i produces 29 key-value pairs.  Each key is the set containing g(i) and one of the other group numbers.  The value is a pair consisting of the drug number i and the megabyte-long record for drug i.

 The reducer for the pair of groups {m, n} gets that key and a list of 200 drug records – the drugs belonging to groups m and n.  Its job is to compare each record from group m with each record from group n.  Special case: it also compares the records in group n with each other, if m = n+1 or if n = 30 and m = 1.  Notice each pair of records is compared at exactly one reducer, so the total computation is not increased.
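A sketch of the grouped plan, again as plain Python; compare stands in for the statistical test, and the same-group handling follows the slide’s convention (the reducer for {n, n+1} handles group n internally, with {1, 30} handling group 30).

    NUM_GROUPS = 30

    def g(i):
        # Drugs 1-100 -> group 1, 101-200 -> group 2, ..., 2901-3000 -> group 30.
        return (i - 1) // 100 + 1

    def map_drug(i, record):
        # 29 key-value pairs per drug: one for each other group.
        for n in range(1, NUM_GROUPS + 1):
            if n != g(i):
                yield (frozenset({g(i), n}), (i, record))

    def compare(r1, r2):
        pass  # placeholder for the per-pair statistical test

    def reduce_group_pair(key, values):
        a, b = sorted(key)
        left  = [r for i, r in values if g(i) == a]
        right = [r for i, r in values if g(i) == b]
        for r1 in left:           # each group-a record against
            for r2 in right:      # each group-b record
                compare(r1, r2)
        # Same-group pairs: reducer {n, n+1} handles group n internally;
        # reducer {1, 30} handles group 30.
        internal = left if b == a + 1 else (right if (a, b) == (1, NUM_GROUPS) else [])
        for x in range(len(internal)):
            for y in range(x + 1, len(internal)):
                compare(internal[x], internal[y])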

 The big difference is in the communication requirement.  Now, each of the 3000 drugs’ 1MB records is replicated 29 times.  Communication cost = 87GB, vs. 9TB.
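Checking the claim:

    grouped = 3000 * 29 * 1_000_000   # each 1 MB record replicated 29 times
    print(grouped / 1e9)              # 87.0 -> 87 GB
    print(9e12 / grouped)             # ~103: roughly 100x less than 9 TB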

A problem in this model consists of:
1. A set of inputs.  Example: the drug records.
2. A set of outputs.  Example: one output for each pair of drugs, telling whether a statistically significant interaction was detected.
3. A many-many relationship between each output and the inputs needed to compute it.  Example: the output for the pair of drugs {i, j} is related to inputs i and j.

[Diagram: bipartite input-output graph for four drugs – inputs Drug 1 through Drug 4 on one side, outputs Output 1-2, 1-3, 1-4, 2-3, 2-4, 3-4 on the other, each output connected to its two inputs.]

 Reducer size, denoted q, is the maximum number of inputs that a given reducer can have.  I.e., the length of the value list.  The limit might be based on how many inputs can be handled in main memory.  Or: make q low to force lots of parallelism.

 The average number of key-value pairs created by each mapper is the replication rate.  Denoted r.  It represents the communication cost per input.

 Suppose we use g groups and d drugs.  A reducer needs two groups, so q = 2d/g.  Each of the d inputs is sent to g-1 reducers, so approximately r = g.  Replace g by r in q = 2d/g to get r = 2d/q.  Tradeoff! The bigger the reducers, the less communication.
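With the numbers from the drug example:

    d, g = 3000, 30
    q = 2 * d // g           # 200 inputs per reducer (two groups of 100)
    r = g - 1                # 29 key-value pairs per input, approximately g
    print(q, r, 2 * d // q)  # 200 29 30 -> r is approximately 2d/q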

 What we did gives an upper bound on r as a function of q.  A solid investigation of MapReduce algorithms for a problem includes lower bounds: proofs that you cannot have a lower r for a given q.

 A mapping schema for a problem and a reducer size q is an assignment of inputs to sets of reducers, with two conditions:
1. No reducer is assigned more than q inputs.
2. For every output, there is some reducer that receives all of the inputs associated with that output.  Say that reducer covers the output.
 If some output is not covered, we can’t compute that output.
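The definition translates directly into a checker; this small sketch is illustrative (not from the slides), representing each reducer and each output as a set of inputs.

    def is_mapping_schema(reducers, outputs, q):
        # Condition 1: no reducer is assigned more than q inputs.
        if any(len(red) > q for red in reducers):
            return False
        # Condition 2: every output is covered by some reducer.
        return all(any(out <= red for red in reducers) for out in outputs)

    # Example: 4 drugs, all-pairs outputs.
    outputs = [{i, j} for i in range(1, 5) for j in range(i + 1, 5)]
    print(is_mapping_schema([{1, 2, 3, 4}], outputs, 4))     # True
    print(is_mapping_schema([{1, 2, 3, 4}], outputs, 3))     # False: exceeds q
    print(is_mapping_schema([{1, 2}, {3, 4}], outputs, 2))   # False: {1,3} uncovered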

 Every MapReduce algorithm has a mapping schema.  The requirement that there be a mapping schema is what distinguishes MapReduce algorithms from general parallel algorithms.

 d drugs, reducer size q.  Each drug has to meet each of the d-1 other drugs at some reducer.  If a drug is sent to a reducer, then at most q-1 other drugs are there.  Thus, each drug is sent to at least ⌈(d-1)/(q-1)⌉ reducers, and r ≥ ⌈(d-1)/(q-1)⌉.  Or approximately r ≥ d/q.  Half the r from the algorithm we described.  A better algorithm gives r = d/q + 1, so the lower bound is actually tight.
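Plugging in the drug example’s numbers (d = 3000, q = 200, the grouping algorithm’s reducer size):

    from math import ceil
    d, q = 3000, 200
    print(ceil((d - 1) / (q - 1)))  # 16: lower bound on r
    print(d // q + 1)               # 16: the better algorithm matches it
    print(2 * d // q)               # 30: r of the grouping algorithm above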

 Given a set of bit strings of length b, find all those that differ in exactly one bit.  Example: For b=2, the inputs are 00, 01, 10, 11, and the outputs are (00,01), (00,10), (01,11), (10,11).  Theorem: r ≥ b/log₂ q.  (Part of) the proof later.
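A brute-force check of the b = 2 example (illustrative only):

    from itertools import combinations, product

    def hamming1_outputs(b):
        strings = [''.join(bits) for bits in product('01', repeat=b)]
        return [(u, v) for u, v in combinations(strings, 2)
                if sum(x != y for x, y in zip(u, v)) == 1]

    print(hamming1_outputs(2))
    # [('00', '01'), ('00', '10'), ('01', '11'), ('10', '11')]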

 If all bit strings of length b are in the input, then we already know the answer, and running MapReduce is a waste of time.  A more realistic scenario is that we are doing a similarity search, where some of the possible bit strings are present and others are not.  Example: find viewers who like the same set of movies except for one.  We can adjust q to be the expected number of inputs at a reducer, rather than the maximum number.

 We can use one reducer for every output.  Each input is sent to b reducers (so r = b).  Each reducer outputs its pair if both of its inputs are present; otherwise, nothing.  Subtle point: if neither input for a reducer is present, then the reducer doesn’t really exist.
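A sketch of this algorithm, with the same illustrative conventions as earlier: each input is keyed by each of the b output pairs it could participate in.

    def flip(w, pos):
        # Flip the bit of w at position pos.
        return w[:pos] + ('1' if w[pos] == '0' else '0') + w[pos + 1:]

    def map_string(w):
        # One key per output pair {w, flip(w, pos)}; hence r = b.
        for pos in range(len(w)):
            yield (frozenset({w, flip(w, pos)}), w)

    def reduce_output(pair, values):
        # Emit the pair only if both of its strings were in the input;
        # a reducer that receives nothing never materializes at all.
        if len(set(values)) == 2:
            return tuple(sorted(pair))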

 Alternatively, we can send all inputs to one reducer.  No replication (i.e., r = 1).  The lone reducer looks at all pairs of inputs that it receives and outputs pairs at distance 1.

 Assume b is even.  There are two reducers for each string of length b/2.  Call them the left and right reducers for that string.  String w = xy, where |x| = |y| = b/2, goes to the left reducer for x and the right reducer for y.  If w and z differ in exactly one bit, then they will both be sent to the same left reducer (if they disagree in the right half) or to the same right reducer (if they disagree in the left half).  Thus, r = 2; q = 2^(b/2).
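A sketch of Splitting under the same conventions; map_string, reduce_half, and run are illustrative names.  Each distance-1 pair agrees on exactly one half, so it meets at exactly one reducer and no deduplication is needed.

    from collections import defaultdict
    from itertools import combinations

    def map_string(w):
        half = len(w) // 2
        yield (('L', w[:half]), w)   # left reducer for the first half
        yield (('R', w[half:]), w)   # right reducer for the second half

    def reduce_half(key, strings):
        # All strings here agree on one half, so any distance-1 pair
        # among them differs in exactly one bit of the other half.
        for u, v in combinations(sorted(set(strings)), 2):
            if sum(x != y for x, y in zip(u, v)) == 1:
                yield (u, v)

    def run(inputs):
        shuffled = defaultdict(list)
        for w in inputs:
            for k, v in map_string(w):
                shuffled[k].append(v)
        return sorted(p for k, vs in shuffled.items() for p in reduce_half(k, vs))

    print(run(['00', '01', '10', '11']))
    # [('00', '01'), ('00', '10'), ('01', '11'), ('10', '11')]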

 Lemma: a reducer of size q cannot cover more than (q/2)log₂ q outputs.  (Induction on b; proof omitted.)  (b/2)2^b outputs must be covered, so there are at least p = (b/2)2^b / ((q/2)log₂ q) = (b/q)2^b / log₂ q reducers.  The sum of the inputs over all reducers is then at least pq = b·2^b / log₂ q.  Replication rate r = pq/2^b = b/log₂ q.  (This omits the possibility that smaller reducers help.)
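A numeric sanity check that the bound r ≥ b/log₂ q agrees with the three algorithms at their reducer sizes, taking b = 20 for concreteness:

    from math import log2
    b = 20
    for q in (2, 2 ** (b // 2), 2 ** b):
        print(q, b / log2(q))
    # 2 20.0        -> one reducer per output (r = b)
    # 1024 2.0      -> Splitting (r = 2)
    # 1048576 1.0   -> all inputs to one reducer (r = 1)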

[Graph: replication rate r versus reducer size q, following the lower-bound curve r = b/log₂ q.  The algorithms match the bound: one reducer for each output at q = 2 (r = b), Splitting at q = 2^(b/2) (r = 2), Generalized Splitting between them, and all inputs to one reducer at q = 2^b (r = 1).]

 Represent problems by mapping schemas.  Get upper bounds on the number of outputs covered by one reducer, as a function of reducer size q.  Turn these into lower bounds on replication rate as a function of reducer size.  For the All-Pairs (“drug interactions”) problem and the HD1 problem: exact match between upper and lower bounds.  Other problems for which a match is known: matrix multiplication, computing marginals.

 Get matching upper and lower bounds for the Hamming-distance problem for distances greater than 1.  Ugly fact: for HD=1, you cannot have a large reducer with all pairs at distance 1; for HD=2, it is possible.  Consider all inputs of weight 1 and length b: any two of them differ in exactly two bits, so one reducer can hold arbitrarily many inputs that are pairwise at distance 2.

1. Give an algorithm that takes an input-output mapping and a reducer size q, and produces a mapping schema with the smallest replication rate.
2. Is that problem even tractable?
3. A recent extension by Afrati, Dolev, Korach, Sharma, and Ullman lets inputs have weights, and the reducer size limits the sum of the weights of the inputs received.
 What can be extended to this model?