
MapReduce and Data Management
Based on Jimmy Lin's lecture slides (http://www.umiacs.umd.edu/~jimmylin/cloud-2010-Spring/index.html), licensed under the Creative Commons Attribution 3.0 License. I also used some ideas from Chapter 2 of the book by Anand Rajaraman and Jeff Ullman, "Mining of Massive Datasets".

MapReduce Algorithm Design

MapReduce: Recap
Programmers must specify:
–map (k, v) → ⟨k’, v’⟩*
–reduce (k’, ⟨v’⟩*) → ⟨k’, v’’⟩*: all values with the same key are reduced together
Optionally, also:
–partition (k’, number of partitions) → partition for k’: often a simple hash of the key, e.g., hash(k’) mod n; divides up the key space for parallel reduce operations
–combine (k’, v’) → ⟨k’, v’⟩*: mini-reducers that run in memory after the map phase; used as an optimization to reduce network traffic
The execution framework handles everything else…
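
As a concrete sketch of the partition specification above (my own illustration, not from the slides), here is the hash-of-the-key partitioner written against the old org.apache.hadoop.mapred API that these slides assume:

```java
// A minimal sketch of hash(k') mod n partitioning, using the old
// org.apache.hadoop.mapred API; the class name is my own.
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class HashKeyPartitioner implements Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    // hash(k') mod n; mask off the sign bit so the result is non-negative
    return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }

  @Override
  public void configure(JobConf job) { /* no configuration needed */ }
}
```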

[Figure: end-to-end MapReduce data flow: map over (k1, v1) … (k6, v6), combine, partition, then "Shuffle and Sort: aggregate values by keys", then reduce to (r1, s1), (r2, s2), (r3, s3).]

“Everything Else”
The execution framework handles everything else…
–Scheduling: assigns workers to map and reduce tasks
–“Data distribution”: moves processes to data
–Synchronization: gathers, sorts, and shuffles intermediate data
–Errors and faults: detects worker failures and restarts
Limited control over data and execution flow:
–All algorithms must be expressed in m, r, c, p
You don’t know:
–Where mappers and reducers run
–When a mapper or reducer begins or finishes
–Which input a particular mapper is processing
–Which intermediate key a particular reducer is processing

Tools for Synchronization
–Cleverly-constructed data structures: bring partial results together
–Sort order of intermediate keys: controls the order in which reducers process keys
–Partitioner: controls which reducer processes which keys
–Preserving state in mappers and reducers: captures dependencies across multiple keys and values

Basic Hadoop API
Mapper
–void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter)
–void configure(JobConf job)
–void close() throws IOException
Reducer/Combiner
–void reduce(K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter)
–void configure(JobConf job)
–void close() throws IOException
Partitioner
–int getPartition(K2 key, V2 value, int numPartitions)
*Note: forthcoming API changes…

Data Types in Hadoop
–Writable: defines a de/serialization protocol. Every data type in Hadoop is a Writable.
–WritableComparable: defines a sort order. All keys must be of this type (but not values).
–IntWritable, LongWritable, Text, …: concrete classes for different data types.
–SequenceFiles: binary encodings of sequences of key/value pairs.
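
For illustration, here is a sketch of a custom key type obeying the Writable/WritableComparable contracts just listed; the TextPair class and its field layout are my own assumptions, not part of the slides:

```java
// A minimal sketch of a custom WritableComparable key: a pair of strings,
// serialized in order and sorted by first component, then second.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;

public class TextPair implements WritableComparable<TextPair> {
  private final Text first = new Text();
  private final Text second = new Text();

  public void set(String f, String s) { first.set(f); second.set(s); }

  @Override
  public void write(DataOutput out) throws IOException {
    // Serialization protocol: write both components in order
    first.write(out);
    second.write(out);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    // Deserialization must read fields in the order they were written
    first.readFields(in);
    second.readFields(in);
  }

  @Override
  public int compareTo(TextPair other) {
    // Sort order: by first component, then by second
    int cmp = first.compareTo(other.first);
    return (cmp != 0) ? cmp : second.compareTo(other.second);
  }

  @Override
  public int hashCode() { return first.hashCode() * 163 + second.hashCode(); }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof TextPair)) return false;
    TextPair p = (TextPair) o;
    return first.equals(p.first) && second.equals(p.second);
  }
}
```

The hashCode implementation matters in practice: the default partitioner assigns a key to a reducer by hashing it, as in the earlier partitioner sketch.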

Scalable Hadoop Algorithms: Themes
Avoid object creation
–Inherently costly operation
–Garbage collection
Avoid buffering
–Limited heap size
–Works for small datasets, but won’t scale!

Hadoop MapReduce Example
See the word count example from the Hadoop Tutorial: t/mapred_tutorial.html#Overview
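
Below is a sketch of that word count example against the old org.apache.hadoop.mapred API from the earlier slide; it is my own minimal rendering rather than the tutorial's exact code. Note that it also follows the "avoid object creation" theme: the ONE and word instances are allocated once and reused across calls.

```java
// A minimal word-count sketch using the old org.apache.hadoop.mapred API.
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCount {
  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();   // reused across map() calls

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      // Tokenize the line and emit (word, 1) for every token
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        output.collect(word, ONE);
      }
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      // Sum all counts for this word; the same class also works as a combiner
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }
}
```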

Basic Cluster Components
One of each:
–Namenode (NN)
–Jobtracker (JT)
One set per slave machine:
–Tasktracker (TT)
–Datanode (DN)

Putting everything together…
[Figure: cluster architecture: each slave node runs a datanode daemon and a tasktracker over the Linux file system; the namenode daemon runs on the namenode, and the jobtracker runs on the job submission node.]

Anatomy of a Job
MapReduce program in Hadoop = Hadoop job
–Jobs are divided into map and reduce tasks
–An instance of running a task is called a task attempt
–Multiple jobs can be composed into a workflow
Job submission process
–Client (i.e., driver program) creates a job, configures it, and submits it to the jobtracker
–JobClient computes input splits (on the client end)
–Job data (jar, configuration XML) are sent to the JobTracker
–JobTracker puts job data in a shared location and enqueues tasks
–TaskTrackers poll for tasks
–Off to the races…

InputSplit
[Figure: input handling: an InputFormat divides the input file into InputSplits; a RecordReader reads each split and feeds records to a Mapper, which produces intermediates. Source: redrawn from a slide by Cloudera, cc-licensed.]

[Figure: map side: each Mapper's intermediates flow through a Partitioner and on to the Reducers, which produce reduced intermediates (combiners omitted here). Source: redrawn from a slide by Cloudera, cc-licensed.]

[Figure: reduce side: each Reducer writes its output to an output file through a RecordWriter provided by the OutputFormat. Source: redrawn from a slide by Cloudera, cc-licensed.]

Input and Output
InputFormat:
–TextInputFormat
–KeyValueTextInputFormat
–SequenceFileInputFormat
–…
OutputFormat:
–TextOutputFormat
–SequenceFileOutputFormat
–…

Shuffle and Sort in Hadoop
Probably the most complex aspect of MapReduce!
Map side
–Map outputs are buffered in memory in a circular buffer
–When the buffer reaches a threshold, contents are “spilled” to disk
–Spills are merged into a single, partitioned file (sorted within each partition): the combiner runs here
Reduce side
–First, map outputs are copied over to the reducer machine
–“Sort” is a multi-pass merge of map outputs (happens in memory and on disk): the combiner runs here
–The final merge pass goes directly into the reducer

Shuffle and Sort
[Figure: a mapper fills a circular buffer in memory, spills to disk, and merges the spills (the combiner may run during the merge); each reducer copies its partition of the merged spills, from this mapper and from other mappers, as intermediate files on disk.]

Hadoop Workflow
1. Load data into HDFS
2. Develop code locally
3. Submit the MapReduce job to the Hadoop cluster
3a. Go back to Step 2
4. Retrieve data from HDFS

On Amazon: With EC2
0. Allocate a Hadoop cluster on EC2
1. Load data into HDFS
2. Develop code locally
3. Submit the MapReduce job
3a. Go back to Step 2
4. Retrieve data from HDFS
5. Clean up!
Uh oh. Where did the data go?

On Amazon: EC2 and S3
[Figure: your Hadoop cluster runs on EC2 (the cloud), with S3 as the persistent store: copy from S3 to HDFS before the job, and from HDFS back to S3 afterwards.]

Graph Algorithms in MapReduce
G = (V, E), where
–V represents the set of vertices (nodes)
–E represents the set of edges (links)
–Both vertices and edges may contain additional information

Graphs and MapReduce
Graph algorithms typically involve:
–Performing computations at each node: based on node features, edge features, and local link structure
–Propagating computations: “traversing” the graph
Key questions:
–How do you represent graph data in MapReduce?
–How do you traverse a graph in MapReduce?

Representing Graphs
G = (V, E)
Two common representations:
–Adjacency matrix
–Adjacency list

Adjacency Matrices
Represent a graph as an n × n square matrix M
–n = |V|
–M_ij = 1 means there is a link from node i to node j

Adjacency Matrices: Critique
Advantages:
–Amenable to mathematical manipulation
–Iteration over rows and columns corresponds to computations on outlinks and inlinks
Disadvantages:
–Lots of zeros for sparse matrices
–Lots of wasted space

Adjacency Lists
Take adjacency matrices… and throw away all the zeros
1: 2, 4
2: 1, 3, 4
3: 1
4: 1

Adjacency Lists: Critique
Advantages:
–Much more compact representation
–Easy to compute over outlinks
Disadvantages:
–Much more difficult to compute over inlinks

Finding the Shortest Path
Consider the simple case of equal edge weights
The solution to the problem can be defined inductively
Here’s the intuition:
–Define: b is reachable from a if b is on the adjacency list of a
–DistanceTo(s) = 0
–For all nodes p reachable from s, DistanceTo(p) = 1
–For all nodes n reachable from some other set of nodes M, DistanceTo(n) = 1 + min(DistanceTo(m)), m ∈ M

Visualizing Parallel BFS
[Figure: an example graph with nodes n0 through n9, illustrating the BFS frontier expanding outward from the source node.]

From Intuition to Algorithm
Data representation:
–Key: node n
–Value: d (distance from start), adjacency list (list of nodes reachable from n)
–Initialization: for all nodes except the start node, d = ∞
Mapper:
–∀ m ∈ adjacency list: emit (m, d + 1)
Sort/Shuffle:
–Groups distances by reachable nodes
Reducer:
–Selects the minimum distance path for each reachable node
–Additional bookkeeping is needed to keep track of the actual path

Multiple Iterations Needed
Each MapReduce iteration advances the “known frontier” by one hop
–Subsequent iterations include more and more reachable nodes as the frontier expands
–Multiple iterations are needed to explore the entire graph
Preserving graph structure:
–Problem: Where did the adjacency list go?
–Solution: the mapper emits (n, adjacency list) as well

BFS Pseudo-Code
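
The pseudo-code on the original slide was an image and did not survive extraction. The following is a minimal sketch of one BFS iteration consistent with the algorithm described above; the ParallelBFS class, the "d|m1,m2,…" value encoding, the "G|" structure tag, and the use of Text keys (e.g., via KeyValueTextInputFormat) are my own assumptions:

```java
// One iteration of parallel BFS. The value for node n is "d|m1,m2,..."
// where d is the current distance (or "INF") and m1,m2,... is n's
// adjacency list. Neighbor distances are emitted as plain numbers; the
// graph structure is re-emitted with a "G|" prefix so it survives.
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class ParallelBFS {
  private static final int INF = Integer.MAX_VALUE;

  public static class Map extends MapReduceBase
      implements Mapper<Text, Text, Text, Text> {
    public void map(Text node, Text value,
                    OutputCollector<Text, Text> out, Reporter rep) throws IOException {
      String[] parts = value.toString().split("\\|", -1);
      int d = parts[0].equals("INF") ? INF : Integer.parseInt(parts[0]);
      // Pass the graph structure along so the reducer can re-emit it
      out.collect(node, new Text("G|" + parts[1]));
      if (d == INF) return;                // frontier hasn't reached n yet
      for (String m : parts[1].split(",")) {
        if (!m.isEmpty()) out.collect(new Text(m), new Text(Integer.toString(d + 1)));
      }
      out.collect(node, new Text(Integer.toString(d)));  // keep n's own distance
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text node, Iterator<Text> values,
                       OutputCollector<Text, Text> out, Reporter rep) throws IOException {
      int best = INF;
      String adj = "";
      while (values.hasNext()) {
        String v = values.next().toString();
        if (v.startsWith("G|")) adj = v.substring(2);       // recover structure
        else best = Math.min(best, Integer.parseInt(v));    // select min distance
      }
      String dOut = (best == INF) ? "INF" : Integer.toString(best);
      out.collect(node, new Text(dOut + "|" + adj));
    }
  }
}
```

An external driver would run this job repeatedly, feeding each iteration's output back in as input, until the frontier stops expanding.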

Stopping Criterion
How many iterations are needed in parallel BFS (equal edge weight case)?
Convince yourself: when a node is first “discovered”, we’ve found the shortest path
Now answer the question...
–Six degrees of separation?

Graphs and MapReduce
Graph algorithms typically involve:
–Performing computations at each node: based on node features, edge features, and local link structure
–Propagating computations: “traversing” the graph
Generic recipe:
–Represent graphs as adjacency lists
–Perform local computations in the mapper
–Pass along partial results via outlinks, keyed by destination node
–Perform aggregation in the reducer on the inlinks to a node
–Iterate until convergence: controlled by an external “driver”
–Don’t forget to pass the graph structure between iterations

Random Walks Over the Web
Random surfer model:
–The user starts at a random Web page
–The user randomly clicks on links, surfing from page to page
PageRank
–Characterizes the amount of time spent on any given page
–Mathematically, a probability distribution over pages
PageRank captures notions of page importance
–One of thousands of features used in web search
–Note: query-independent

PageRank: Defined
Given page x with inlinks t_1 … t_n, where
–C(t) is the out-degree of t
–α is the probability of a random jump
–N is the total number of nodes in the graph
PR(x) = α (1/N) + (1 − α) Σ_{i=1..n} PR(t_i) / C(t_i)

Computing PageRank
Properties of PageRank
–Can be computed iteratively
–Effects at each iteration are local
Sketch of algorithm:
–Start with seed PR_i values
–Each page distributes PR_i “credit” to all pages it links to
–Each target page adds up “credit” from multiple in-bound links to compute PR_{i+1}
–Iterate until values converge

Simplified PageRank
First, tackle the simple case:
–No random jump factor
–No dangling links
Then, factor in these complexities…
–Why do we need the random jump?
–Where do dangling links come from?

Sample PageRank Iteration (1)
[Figure: a five-node graph; initial values n1–n5 all 0.2; after Iteration 1: n1 = 0.066, n2 = 0.166, n3 = 0.166, n4 = 0.3, n5 = 0.3.]

Sample PageRank Iteration (2)
[Figure: starting from n1 = 0.066, n2 = 0.166, n3 = 0.166, n4 = 0.3, n5 = 0.3; after Iteration 2: n1 = 0.1, n2 = 0.133, n3 = 0.183, n4 = 0.2, n5 = 0.383.]

PageRank in MapReduce
[Figure: the map phase emits PageRank mass from each node n_i [adjacency list] to its out-neighbors; shuffle and sort group contributions by destination node; the reduce phase sums the contributions for each node and re-emits the adjacency lists.]

PageRank Pseudo-Code
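
As with the BFS slide, the pseudo-code image did not survive extraction. Here is a minimal sketch of one iteration of the simplified PageRank just described (no random jump, no dangling nodes); the SimplePageRank class and the "p|m1,m2,…" value encoding are my own assumptions:

```java
// One iteration of simplified PageRank. The value for a node is
// "p|m1,m2,..." where p is its current PageRank and m1,m2,... its
// adjacency list; structure is passed along with a "G|" prefix.
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class SimplePageRank {
  public static class Map extends MapReduceBase
      implements Mapper<Text, Text, Text, Text> {
    public void map(Text node, Text value,
                    OutputCollector<Text, Text> out, Reporter rep) throws IOException {
      String[] parts = value.toString().split("\\|", -1);
      double p = Double.parseDouble(parts[0]);
      String[] links = parts[1].isEmpty() ? new String[0] : parts[1].split(",");
      out.collect(node, new Text("G|" + parts[1]));   // pass graph structure along
      for (String m : links) {                        // distribute credit evenly
        out.collect(new Text(m), new Text(Double.toString(p / links.length)));
      }
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text node, Iterator<Text> values,
                       OutputCollector<Text, Text> out, Reporter rep) throws IOException {
      double sum = 0.0;
      String adj = "";
      while (values.hasNext()) {
        String v = values.next().toString();
        if (v.startsWith("G|")) adj = v.substring(2);  // recover structure
        else sum += Double.parseDouble(v);             // add up in-bound credit
      }
      out.collect(node, new Text(sum + "|" + adj));
    }
  }
}
```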

Complete PageRank
Two additional complexities:
–What is the proper treatment of dangling nodes?
–How do we factor in the random jump factor?
Solution: a second pass to redistribute the “missing PageRank mass” and account for random jumps:
p' = α (1/|G|) + (1 − α) (m/|G| + p)
–p is the PageRank value from before, p' is the updated PageRank value
–|G| is the number of nodes in the graph
–m is the missing PageRank mass

PageRank Convergence
Alternative convergence criteria:
–Iterate until PageRank values don’t change
–Iterate until PageRank rankings don’t change
–Fixed number of iterations
Convergence for web graphs?

Beyond PageRank
Link structure is important for web search
–PageRank is one of many link-based features: HITS, SALSA, etc.
–One of many thousands of features used in ranking…
Adversarial nature of web search
–Link spamming
–Spider traps
–Keyword stuffing
–…

Efficient Graph Algorithms
–Sparse vs. dense graphs
–Graph topologies

Power Laws are everywhere!
[Figure from: Newman, M. E. J. (2005) “Power laws, Pareto distributions and Zipf's law.” Contemporary Physics 46:323–351.]

Local Aggregation
Use combiners!
–The in-mapper combining design pattern is also applicable
Maximize opportunities for local aggregation
–Simple tricks: sorting the dataset in specific ways

MapReduce and Databases

Relational Databases vs. MapReduce
Relational databases:
–Multipurpose: analysis and transactions; batch and interactive
–Data integrity via ACID transactions
–Lots of tools in the software ecosystem (for ingesting, reporting, etc.)
–Supports SQL (and SQL integration, e.g., JDBC)
–Automatic SQL query optimization
MapReduce (Hadoop):
–Designed for large clusters, fault tolerant
–Data is accessed in “native format”
–Supports many query languages
–Programmers retain control over performance
–Open source
Source: O’Reilly blog post by Joseph Hellerstein (11/19/2008)

Database Workloads
OLTP (online transaction processing)
–Typical applications: e-commerce, banking, airline reservations
–User-facing: real-time, low latency, highly concurrent
–Tasks: relatively small set of “standard” transactional queries
–Data access pattern: random reads, updates, writes (involving relatively small amounts of data)
OLAP (online analytical processing)
–Typical applications: business intelligence, data mining
–Back-end processing: batch workloads, less concurrency
–Tasks: complex analytical queries, often ad hoc
–Data access pattern: table scans, large amounts of data involved per query

OLTP/OLAP Architecture OLTPOLAP ETL (Extract, Transform, and Load)

OLTP/OLAP Integration
OLTP database for user-facing transactions
–Retain records of all activity
–Periodic ETL (e.g., nightly)
Extract-Transform-Load (ETL)
–Extract records from the source
–Transform: clean data, check integrity, aggregate, etc.
–Load into the OLAP database
OLAP database for data warehousing
–Business intelligence: reporting, ad hoc queries, data mining, etc.
–Feedback to improve OLTP services

OLTP/OLAP/Hadoop Architecture OLTPOLAP ETL (Extract, Transform, and Load) Hadoop Why does this make sense?

ETL Bottleneck
Reporting is often a nightly task:
–ETL is often slow: why?
–What happens if processing 24 hours of data takes longer than 24 hours?
Hadoop is perfect:
–Most likely, you already have some data warehousing solution
–Ingest is limited by the speed of HDFS
–Scales out with more nodes
–Massively parallel
–Ability to use any processing tool
–Much cheaper than parallel databases
–ETL is a batch process anyway!

MapReduce algorithms for processing relational data

Relational Algebra
Primitives:
–Projection (π)
–Selection (σ)
–Cartesian product (×)
–Set union (∪)
–Set difference (−)
–Rename (ρ)
Other operations:
–Join (⋈)
–Group by… aggregation
–…

Projection
[Figure: projection maps each of the tuples R1–R5 to a narrower tuple with only the selected attributes, one output tuple per input tuple.]

Projection in MapReduce
Easy!
–Map over tuples, emit new tuples with the appropriate attributes
–Reduce: take tuples that appear many times and emit only one version (duplicate elimination)
–For tuple t in R: map(t, t) → (t’, t’); reduce(t’, [t’, …, t’]) → (t’, t’)
Basically limited by HDFS streaming speeds
–Speed of encoding/decoding tuples becomes important
–Relational databases take advantage of compression
–Semistructured data? No problem!
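
A sketch of this projection-with-duplicate-elimination pattern (my own illustration; the tab-separated tuple layout and the choice to project onto column 0 are assumptions):

```java
// Projection with duplicate elimination: the mapper emits the projected
// tuple as the key; the reducer emits each distinct projected tuple once.
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class Projection {
  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, NullWritable> {
    private final Text projected = new Text();
    public void map(LongWritable key, Text tuple,
                    OutputCollector<Text, NullWritable> out, Reporter rep)
        throws IOException {
      // Keep only the attributes of interest (column 0 here)
      projected.set(tuple.toString().split("\t")[0]);
      out.collect(projected, NullWritable.get());
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, NullWritable, Text, NullWritable> {
    public void reduce(Text projected, Iterator<NullWritable> dups,
                       OutputCollector<Text, NullWritable> out, Reporter rep)
        throws IOException {
      // All duplicates of a projected tuple arrive together; emit one copy
      out.collect(projected, NullWritable.get());
    }
  }
}
```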

Selection
[Figure: selection keeps only the tuples that satisfy the predicate: here R1 and R3 out of R1–R5.]

Selection in MapReduce
Easy!
–Map over tuples, emit only tuples that meet the criteria
–No reducers, unless for regrouping or resorting tuples (reducers are the identity function)
–Alternatively: perform the selection in the reducer, after some other processing
But very expensive!!! It has to scan the entire database
–Better approaches?

Union, Set Intersection, and Set Difference
Similar ideas: each map outputs the tuple pair (t, t)
–For union, we output t once; for intersection, only when the reduce sees (t, [t, t])
–For set difference?

Set Difference
–Map Function: For a tuple t in R, produce the key-value pair (t, R); for a tuple t in S, produce the key-value pair (t, S).
–Reduce Function: For each key t, do the following:
1. If the associated value list is [R], produce (t, t).
2. If the associated value list is anything else, which could only be [R, S], [S, R], or [S], produce (t, NULL).
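
A sketch of the reduce function just described (my own illustration; mappers are assumed to emit each tuple keyed by itself with the value "R" or "S", and emitting nothing stands in for producing (t, NULL)):

```java
// R − S: keep a tuple only if every source tag it carries is "R".
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class SetDifferenceReducer extends MapReduceBase
    implements Reducer<Text, Text, Text, NullWritable> {
  public void reduce(Text tuple, Iterator<Text> sources,
                     OutputCollector<Text, NullWritable> out, Reporter rep)
      throws IOException {
    boolean onlyInR = true;
    while (sources.hasNext()) {
      if (sources.next().toString().equals("S")) {
        onlyInR = false;                   // t appears in S: exclude it
      }
    }
    if (onlyInR) out.collect(tuple, NullWritable.get());  // value list was [R]
  }
}
```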

Group by… Aggregation
Example: What is the average time spent per URL?
In SQL:
–SELECT url, AVG(time) FROM visits GROUP BY url
In MapReduce:
–Map over tuples, emit time, keyed by url
–The framework automatically groups values by key
–Compute the average in the reducer
–Optimize with combiners
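
A sketch of this query in MapReduce (my own illustration; the tab-separated "url, time" record layout is an assumption). Note that this averaging reducer cannot be reused directly as a combiner; the combiner optimization mentioned above requires emitting partial (sum, count) pairs instead:

```java
// Average time per URL: the mapper emits (url, time); the reducer
// averages the times grouped under each url by the framework.
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class AverageTimePerUrl {
  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, DoubleWritable> {
    private final Text url = new Text();
    private final DoubleWritable time = new DoubleWritable();
    public void map(LongWritable key, Text record,
                    OutputCollector<Text, DoubleWritable> out, Reporter rep)
        throws IOException {
      String[] fields = record.toString().split("\t");
      url.set(fields[0]);
      time.set(Double.parseDouble(fields[1]));
      out.collect(url, time);             // emit time, keyed by url
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, DoubleWritable, Text, DoubleWritable> {
    public void reduce(Text url, Iterator<DoubleWritable> times,
                       OutputCollector<Text, DoubleWritable> out, Reporter rep)
        throws IOException {
      double sum = 0.0;
      long count = 0;
      while (times.hasNext()) {           // framework has grouped times by url
        sum += times.next().get();
        count++;
      }
      out.collect(url, new DoubleWritable(sum / count));
    }
  }
}
```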

Relational Joins
[Figure: tuples R1–R4 joined with tuples S1–S4, pairing R1 with S2, R2 with S4, R3 with S1, and R4 with S3.]

Join Algorithms in MapReduce
–Reduce-side join
–Map-side join
–In-memory join
–Striped variant
–Memcached variant

Reduce-side Join
Basic idea: group by join key
–Map over both sets of tuples
–Emit the tuple as the value, with the join key as the intermediate key
–The execution framework brings together tuples sharing the same key
–Perform the actual join in the reducer
–Similar to a “sort-merge join” in database terminology
Two variants:
–1-to-1 joins
–1-to-many and many-to-many joins
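
A sketch of the reducer side of this pattern (my own illustration; mappers are assumed to emit tuples under their join key, tagged with "R|" or "S|" to identify the source relation). Buffering one side in memory is the simple variant; value-to-key conversion (secondary sort) avoids the buffering for skewed 1-to-many joins:

```java
// Reduce-side equijoin: buffer the R tuples for this join key, then pair
// each with every S tuple (the per-key cross product).
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class ReduceSideJoinReducer extends MapReduceBase
    implements Reducer<Text, Text, Text, Text> {
  public void reduce(Text joinKey, Iterator<Text> tagged,
                     OutputCollector<Text, Text> out, Reporter rep)
      throws IOException {
    List<String> rTuples = new ArrayList<String>();
    List<String> sTuples = new ArrayList<String>();
    while (tagged.hasNext()) {
      String v = tagged.next().toString();
      if (v.startsWith("R|")) rTuples.add(v.substring(2));
      else sTuples.add(v.substring(2));
    }
    // Emit the cross product of matching R and S tuples for this key
    for (String r : rTuples) {
      for (String s : sTuples) {
        out.collect(joinKey, new Text(r + "\t" + s));
      }
    }
  }
}
```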

Map-side Join: Parallel Scans
If the datasets are sorted by join key, the join can be accomplished by a scan over both datasets
How can we accomplish this in parallel?
–Partition and sort both datasets in the same manner
In MapReduce:
–Map over one dataset, read from the corresponding partition of the other
–No reducers necessary (unless to repartition or resort)
Consistently partitioned datasets: realistic to expect?

In-Memory Join
Basic idea: load one dataset into memory, stream over the other dataset
–Works if R << S and R fits into memory
–Called a “hash join” in database terminology
MapReduce implementation:
–Distribute R to all nodes
–Map over S; each mapper loads R into memory, hashed by join key
–For every tuple in S, look up the join key in R
–No reducers, unless for regrouping or resorting tuples
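
A sketch of the mapper for this hash join (my own illustration; the local file name "r.txt" and the tab-separated layout of both relations are assumptions, and distributing R to the nodes, e.g., via the distributed cache, is left out):

```java
// In-memory hash join: load R into a HashMap once per mapper in
// configure(), then stream over S and probe the map.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class InMemoryJoinMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {
  private final HashMap<String, String> rByKey = new HashMap<String, String>();

  @Override
  public void configure(JobConf job) {
    try {
      // Load R once per mapper, hashed by join key
      BufferedReader in = new BufferedReader(new FileReader("r.txt"));
      String line;
      while ((line = in.readLine()) != null) {
        String[] fields = line.split("\t", 2);
        rByKey.put(fields[0], fields[1]);
      }
      in.close();
    } catch (IOException e) {
      throw new RuntimeException("failed to load R", e);
    }
  }

  public void map(LongWritable key, Text sTuple,
                  OutputCollector<Text, Text> out, Reporter rep) throws IOException {
    String[] fields = sTuple.toString().split("\t", 2);
    String rRest = rByKey.get(fields[0]);   // probe the hash table
    if (rRest != null) {
      out.collect(new Text(fields[0]), new Text(rRest + "\t" + fields[1]));
    }
  }
}
```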

In-Memory Join: Variants
Striped variant:
–R too big to fit into memory?
–Divide R into R_1, R_2, R_3, … such that each R_n fits into memory
–Perform the in-memory join: ∀ n, R_n ⋈ S
–Take the union of all join results
Memcached join:
–Load R into memcached
–Replace the in-memory hash lookup with a memcached lookup

Memcached Join
Memcached join:
–Load R into memcached
–Replace the in-memory hash lookup with a memcached lookup
Capacity and scalability?
–Memcached capacity >> RAM of an individual node
–Memcached scales out with the cluster
Latency?
–Memcached is fast (basically, the speed of the network)
–Batch requests to amortize latency costs
Source: See the tech report by Lin et al. (2009)

Which join to use?
In-memory join > map-side join > reduce-side join
–Why?
Limitations of each?
–In-memory join: memory
–Map-side join: sort order and partitioning
–Reduce-side join: general purpose

Processing Relational Data: Summary
MapReduce algorithms for processing relational data:
–Group by, sorting, and partitioning are handled automatically by shuffle/sort in MapReduce
–Selection, projection, and other computations (e.g., aggregation) are performed either in the mapper or the reducer
–Multiple strategies for relational joins
Complex operations require multiple MapReduce jobs
–Example: top ten URLs in terms of average time spent
–Opportunities for automatic optimization