Large-Scale Data Processing with MapReduce


Large-Scale Data Processing with MapReduce SIKS/BigGrid Big Data Tutorial Jimmy Lin University of Maryland Nov. 30 & Dec. 1, 2011 These slides are available on my homepage at http://www.umiacs.umd.edu/~jimmylin/ This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details

First things first… About me Course history Audience survey

Agenda. Today: Setting the stage (Why large data? Why is this different?); Introduction to MapReduce; MapReduce algorithm design; Hadoop ecosystem tour. Tomorrow: Text retrieval; Managing relational data; Graph algorithms

Expectations Focus on “thinking at scale” Deconstruction into “design patterns” Basic intuitions, not fancy math Mapping well-known algorithms to MapReduce Not a tutorial on programming Hadoop Entry point to book

Setting the Stage: Why large data? (Course outline: Setting the stage · Introduction to MapReduce · MapReduce algorithm design · Hadoop ecosystem tour · Text retrieval · Managing relational data · Graph algorithms)

Source: Wikipedia (Everest)

How much data? (Slide shows figures for several organizations; logos not reproduced.) Processes 20 PB a day (2008) · 36 PB of user data + 80-90 TB/day (6/2010) · 150 PB on 50k+ servers running 15k apps · 9 PB of user data + >50 TB/day (11/2011) · Wayback Machine: 3 PB + 100 TB/month (3/2009) · LHC: ~15 PB a year (at full capacity) · S3: 449B objects, peak 290k requests/second (7/2011) · LSST: 6-10 PB a year (~2015) · "640K ought to be enough for anybody."

No data like more data! s/knowledge/data/g; (Banko and Brill, ACL 2001) (Brants et al., EMNLP 2007) How do we get here if we’re not Google?

cheap commodity clusters + simple, distributed programming models = data-intensive computing for the masses!

Setting the Stage: Why is this different? (Course outline: Setting the stage · Introduction to MapReduce · MapReduce algorithm design · Hadoop ecosystem tour · Text retrieval · Managing relational data · Graph algorithms)

Parallel computing is hard! Fundamental issues: scheduling, data distribution, synchronization, inter-process communication, robustness, fault tolerance, … Different programming models: message passing vs. shared memory. Architectural issues: Flynn's taxonomy (SIMD, MIMD, etc.), network topology, bisection bandwidth, UMA vs. NUMA, cache coherence. Common problems: livelock, deadlock, data starvation, priority inversion, … (dining philosophers, sleeping barbers, cigarette smokers, …). Different programming constructs: mutexes, condition variables, barriers, …; masters/slaves, producers/consumers, work queues, … The reality: the programmer shoulders the burden of managing concurrency… (I want my students developing new algorithms, not debugging race conditions)

Where the rubber meets the road Concurrency is difficult to reason about At the scale of datacenters (even across datacenters) In the presence of failures In terms of multiple interacting services The reality: Lots of one-off solutions, custom code Write your own dedicated library, then program with it Burden on the programmer to explicitly manage everything

Source: Ricardo Guimarães Herrmann

The datacenter is the computer! ("I think there is a world market for about five computers.") Source: NY Times (6/14/2006)

What’s the point? It’s all about the right level of abstraction Hide system-level details from the developers No more race conditions, lock contention, etc. Separating the what from how Developer specifies the computation that needs to be performed Execution framework (“runtime”) handles actual execution The datacenter is the computer!

"Big Ideas" Scale "out", not "up": limits of SMP and large shared-memory machines. Move processing to the data: clusters have limited bandwidth. Process data sequentially, avoid random access: seeks are expensive, disk throughput is reasonable. Seamless scalability: from the mythical man-month to the tradable machine-hour.

Building Blocks Source: Barroso and Urs Hölzle (2009)

Storage Hierarchy Funny story about sense of scale… Source: Barroso and Urs Hölzle (2009)

Storage Hierarchy Source: Barroso and Urs Hölzle (2009)

Anatomy of a Datacenter Source: Barroso and Urs Hölzle (2009)

Source: NY Times (6/14/2006)

Source: www.robinmajumdar.com

Source: Harper’s (Feb, 2008)

Source: Wikipedia (The Dalles, Oregon)

Source: Bonneville Power Administration

Introduction to MapReduce (Course outline: Setting the stage · Introduction to MapReduce · MapReduce algorithm design · Hadoop ecosystem tour · Text retrieval · Managing relational data · Graph algorithms)

Typical Large-Data Problem: Iterate over a large number of records; extract something of interest from each (Map); shuffle and sort intermediate results; aggregate intermediate results (Reduce); generate final output. Key idea: provide a functional abstraction for these two operations. (Dean and Ghemawat, OSDI 2004)

Roots in Functional Programming: map applies a function f to every element of a list independently; fold aggregates the results with a function g.
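The slide shows only a picture of the two primitives; here is a minimal illustration in Python (not from the original deck): map applies f to each element independently, fold (reduce) combines the results with g.

from functools import reduce

xs = [1, 2, 3, 4, 5]
squared = list(map(lambda x: x * x, xs))            # map: apply f to each element independently
total = reduce(lambda acc, x: acc + x, squared, 0)  # fold: combine results with g and an initial value
print(squared, total)                               # [1, 4, 9, 16, 25] 55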

MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* All values with the same key are sent to the same reducer The execution framework handles everything else…

Diagram: four mappers emit intermediate key-value pairs over keys a, b, c; shuffle and sort aggregates values by key (a → 1, 5; b → 2, 7; c → 2, 3, 6, 8); three reducers then emit final outputs (r1, s1), (r2, s2), (r3, s3).

MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* All values with the same key are sent to the same reducer The execution framework handles everything else… What’s “everything else”?

MapReduce "Runtime" Handles scheduling: assigns workers to map and reduce tasks. Handles "data distribution": moves processes to data. Handles synchronization: gathers, sorts, and shuffles intermediate data. Handles errors and faults: detects worker failures and restarts. Everything happens on top of a distributed FS.

MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* All values with the same key are reduced together The execution framework handles everything else… Not quite…usually, programmers also specify: partition (k’, number of partitions) → partition for k’ Often a simple hash of the key, e.g., hash(k’) mod n Divides up key space for parallel reduce operations combine (k’, v’) → <k’, v’>* Mini-reducers that run in memory after the map phase Used as an optimization to reduce network traffic

Diagram: the same dataflow with combiners and partitioners. Mappers emit intermediate pairs; combiners locally aggregate (e.g., c: 3 and c: 6 become c: 9); partitioners assign keys to reducers; shuffle and sort yields a → 1, 5; b → 2, 7; c → 2, 9, 8; reducers emit (r1, s1), (r2, s2), (r3, s3).

Two more details… Barrier between map and reduce phases But we can begin copying intermediate data earlier Keys arrive at each reducer in sorted order No enforced ordering across reducers

“Hello World”: Word Count
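The word count listing on this slide is not reproduced in the transcript. Below is a hedged pseudo-code sketch in Python; the Mapper/Reducer classes and emit() are conventions of the sketch (a stand-in is defined), not the actual Hadoop API.

def emit(key, value): print(key, value)   # stand-in for the framework's emit

class Mapper:
    def map(self, docid, doc):
        # one call per input key-value pair: tokenize and emit a count of 1 per term
        for term in doc.split():
            emit(term, 1)

class Reducer:
    def reduce(self, term, counts):
        # all counts for the same term arrive at the same reducer
        emit(term, sum(counts))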

MapReduce can refer to… The programming model The execution framework (aka “runtime”) The specific implementation Usage is usually clear from context!

MapReduce Implementations Google has a proprietary implementation in C++ Bindings in Java, Python Hadoop is an open-source implementation in Java Original development led by Yahoo Now an Apache open source project Emerging as the de facto big data stack Rapidly expanding software ecosystem Lots of custom research implementations For GPUs, cell processors, etc. Includes variations of the basic programming model Most of these slides are focused on Hadoop

Diagram (adapted from Dean and Ghemawat, OSDI 2004): the user program submits the job (1); the master schedules map and reduce tasks onto workers (2); map workers read input splits (3) and write intermediate files to local disk (4); reduce workers remotely read the intermediate data (5) and write the output files (6).

How do we get data to the workers? (Diagram: compute nodes pulling data from NAS/SAN storage over the network.) What's the problem here?

Distributed File System Don’t move data to workers… move workers to the data! Store data on the local disks of nodes in the cluster Start up the workers on the node that has the data local A distributed file system is the answer GFS (Google File System) for Google’s MapReduce HDFS (Hadoop Distributed File System) for Hadoop

GFS: Assumptions Commodity hardware over “exotic” hardware Scale “out”, not “up” High component failure rates Inexpensive commodity components fail all the time “Modest” number of huge files Multi-gigabyte files are common, if not encouraged Files are write-once, mostly appended to Perhaps concurrently Large streaming reads over random access High sustained throughput over low latency GFS slides adapted from material by (Ghemawat et al., SOSP 2003)

GFS: Design Decisions Files stored as chunks Fixed size (64MB) Reliability through replication Each chunk replicated across 3+ chunkservers Single master to coordinate access, keep metadata Simple centralized management No data caching Little benefit due to large datasets, streaming reads Simplify the API Push some of the issues onto the client (e.g., data layout) HDFS = GFS clone (same basic ideas)

From GFS to HDFS. Terminology differences: GFS master = Hadoop namenode; GFS chunkservers = Hadoop datanodes. Functional differences: file appends in HDFS are relatively new; HDFS performance is (likely) slower. For the most part, we'll use the Hadoop terminology…

HDFS Architecture (diagram adapted from Ghemawat et al., SOSP 2003): the HDFS client sends (file name, block id) requests to the namenode, which maintains the file namespace (e.g., /foo/bar → block 3df2) and replies with (block id, block location); the client then requests (block id, byte range) from an HDFS datanode and receives the block data; the namenode also sends instructions to datanodes and receives datanode state, while each datanode stores blocks on its local Linux file system.

Namenode Responsibilities Managing the file system namespace: Holds file/directory structure, metadata, file-to-block mapping, access permissions, etc. Coordinating file operations: Directs clients to datanodes for reads and writes No data is moved through the namenode Maintaining overall health: Periodic communication with the datanodes Block re-replication and rebalancing Garbage collection

Putting everything together… (Diagram: the namenode runs the namenode daemon; the job submission node runs the jobtracker; each slave node runs a tasktracker and a datanode daemon on top of its local Linux file system.)

MapReduce Algorithm Design (Course outline: Setting the stage · Introduction to MapReduce · MapReduce algorithm design · Hadoop ecosystem tour · Text retrieval · Managing relational data · Graph algorithms)

MapReduce: Recap. Programmers must specify: map (k, v) → <k', v'>*; reduce (k', v') → <k', v'>*; all values with the same key are reduced together. Optionally, also: partition (k', number of partitions) → partition for k' (often a simple hash of the key, e.g., hash(k') mod n; divides up key space for parallel reduce operations); combine (k', v') → <k', v'>* (mini-reducers that run in memory after the map phase; used as an optimization to reduce network traffic). The execution framework handles everything else…

Diagram (repeated): mappers emit intermediate pairs, combiners aggregate locally, partitioners assign keys to reducers, and shuffle and sort groups values by key (a → 1, 5; b → 2, 7; c → 2, 9, 8) before the reducers run.

"Everything Else" The execution framework handles everything else… Scheduling: assigns workers to map and reduce tasks "Data distribution": moves processes to data Synchronization: gathers, sorts, and shuffles intermediate data Errors and faults: detects worker failures and restarts Limited control over data and execution flow All algorithms must be expressed in m, r, c, p You don't know: Where mappers and reducers run When a mapper or reducer begins or finishes Which input a particular mapper is processing Which intermediate key a particular reducer is processing

Tools for Synchronization Cleverly-constructed data structures Bring partial results together Sort order of intermediate keys Control order in which reducers process keys Partitioner Control which reducer processes which keys Preserving state in mappers and reducers Capture dependencies across multiple keys and values

Designing Scalable Algorithms Ideal scaling characteristics: Twice the data, twice the running time Twice the resources, half the running time Running time: Anything more than linear algorithms won’t scale Constants matter (more than you think)* * But, Knuth’s advice should still be heeded.

Scalable Hadoop Algorithms: Themes Avoid object creation Inherently costly operation Garbage collection Avoid buffering Limited heap size Works for small datasets, but won’t scale! Avoid communication Reduce cross-node dependencies via local aggregation Combiners can help… but even better approaches available

Diagram: shuffle and sort internals. On the map side, mapper output fills a circular buffer in memory and is spilled to disk (with the combiner applied); spills are merged into intermediate files on disk, partitioned for the different reducers. On the reduce side, each reducer fetches its partitions from all mappers, merges them (possibly applying the combiner again), and feeds the merged stream to the reduce function.

Word Count: Baseline What’s the impact of combiners?

Word Count: Version 1 Are combiners still needed?

Preserving State. One mapper or reducer object is created per task. The API provides an initialization hook (configure), one call to map per input key-value pair (or one call to reduce per intermediate key), and a cleanup hook (close); state kept in the object persists across these calls.

Word Count: Version 2 Key: preserve state across input key-value pairs! Are combiners still needed?
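The Version 2 listing is not reproduced; a sketch of in-mapper combining under the same conventions as the earlier sketch (emit() stand-in, configure/close hooks as on the Preserving State slide) might look like this:

def emit(key, value): print(key, value)   # stand-in for the framework's emit

class Mapper:
    def configure(self):
        self.counts = {}                   # state preserved across map() calls

    def map(self, docid, doc):
        for term in doc.split():
            self.counts[term] = self.counts.get(term, 0) + 1   # aggregate locally, emit nothing yet

    def close(self):
        for term, count in self.counts.items():
            emit(term, count)              # one pair per distinct term per task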

Design Pattern for Local Aggregation “In-mapper combining” Fold the functionality of the combiner into the mapper by preserving state across multiple map calls Advantages Speed Why is this faster than actual combiners? Disadvantages Explicit memory management required Potential for order-dependent bugs

Combiner Design Combiners and reducers share same method signature Sometimes, reducers can serve as combiners Often, not… Remember: combiners are optional optimizations Should not affect algorithm correctness May be run 0, 1, or multiple times Example: find average of all integers associated with the same key

Computing the Mean: Version 1 Why can’t we use reducer as combiner?

Computing the Mean: Version 2 Why doesn’t this work?

Computing the Mean: Version 3 Fixed?

Computing the Mean: Version 4 Are combiners still needed?
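The listings for Versions 1 through 4 are not reproduced in the transcript. The essential idea behind the correct versions, sketched under the same conventions as the earlier sketches, is to pass (sum, count) pairs so that combining is associative and the division happens only once, at the end:

def emit(key, value): print(key, value)        # stand-in for the framework's emit

class Mapper:
    def map(self, key, value):
        emit(key, (value, 1))                  # partial (sum, count)

class Combiner:
    def reduce(self, key, pairs):
        s = sum(p[0] for p in pairs)
        c = sum(p[1] for p in pairs)
        emit(key, (s, c))                      # still a (sum, count) pair, so combining is safe

class Reducer:
    def reduce(self, key, pairs):
        s = sum(p[0] for p in pairs)
        c = sum(p[1] for p in pairs)
        emit(key, s / c)                       # divide only at the very end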

“Count and Normalize” Many algorithms reduce to estimating relative frequencies: In the case of EM, pseudo-counts instead of actual counts For a large class of algorithms: intuition is the same, just varying complexity in terms of bookkeeping Let’s start with the intuition…

Algorithm Design: Running Example Term co-occurrence matrix for a text collection M = N x N matrix (N = vocabulary size) Mij: number of times i and j co-occur in some context (for concreteness, let’s say context = sentence) Why? Distributional profiles as a way of measuring semantic distance Semantic distance useful for many language processing tasks

MapReduce: Large Counting Problems Term co-occurrence matrix for a text collection = specific instance of a large counting problem A large event space (number of terms) A large number of observations (the collection itself) Goal: keep track of interesting statistics about the events Basic approach Mappers generate partial counts Reducers aggregate partial counts How do we aggregate partial counts efficiently?

First Try: “Pairs” Each mapper takes a sentence: Generate all co-occurring term pairs For all pairs, emit (a, b) → count Reducers sum up counts associated with these pairs Use combiners!

Pairs: Pseudo-Code
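The pairs listing is not reproduced; a sketch under the same conventions, with a sentence as the co-occurrence context:

def emit(key, value): print(key, value)        # stand-in for the framework's emit

class Mapper:
    def map(self, docid, sentence):
        terms = sentence.split()
        for u in terms:
            for v in terms:
                if u != v:
                    emit((u, v), 1)            # one count per co-occurring pair

class Reducer:                                 # the same logic can serve as a combiner
    def reduce(self, pair, counts):
        emit(pair, sum(counts))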

"Pairs" Analysis. Advantages: easy to implement, easy to understand. Disadvantages: lots of pairs to sort and shuffle around (upper bound?); not many opportunities for combiners to work.

Another Try: "Stripes" Idea: group together pairs into an associative array. Each mapper takes a sentence: generate all co-occurring term pairs; for each term, emit a → { b: countb, c: countc, d: countd … }. Reducers perform element-wise sum of associative arrays. For example, the pairs (a, b) → 1, (a, c) → 2, (a, d) → 5, (a, e) → 3, (a, f) → 2 become the single stripe a → { b: 1, c: 2, d: 5, e: 3, f: 2 }; and a → { b: 1, d: 5, e: 3 } + a → { b: 1, c: 2, d: 2, f: 2 } = a → { b: 2, c: 2, d: 7, e: 3, f: 2 }. Key: cleverly-constructed data structure brings together partial results.

Stripes: Pseudo-Code
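The stripes listing is likewise not reproduced; a sketch under the same conventions:

def emit(key, value): print(key, value)        # stand-in for the framework's emit

class Mapper:
    def map(self, docid, sentence):
        terms = sentence.split()
        for u in terms:
            stripe = {}
            for v in terms:
                if u != v:
                    stripe[v] = stripe.get(v, 0) + 1
            emit(u, stripe)                    # one associative array (stripe) keyed by the left term

class Reducer:                                 # also usable as a combiner
    def reduce(self, term, stripes):
        total = {}
        for stripe in stripes:
            for v, count in stripe.items():
                total[v] = total.get(v, 0) + count   # element-wise sum of associative arrays
        emit(term, total)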

"Stripes" Analysis. Advantages: far less sorting and shuffling of key-value pairs; can make better use of combiners. Disadvantages: more difficult to implement; underlying object more heavyweight; fundamental limitation in terms of size of event space.

Running-time comparison (chart not reproduced). Cluster size: 38 cores. Data source: Associated Press Worldstream (APW) of the English Gigaword Corpus (v3), which contains 2.27 million documents (1.8 GB compressed, 5.7 GB uncompressed).

Relative Frequencies How do we estimate relative frequencies from counts? Why do we want to do this? How do we do this with MapReduce?
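The formula on this slide does not appear in the transcript; the standard definition it refers to is f(B|A) = N(A, B) / N(A) = N(A, B) / Σ_B' N(A, B'), where N(A, B) is the co-occurrence count and the marginal N(A) sums over all events B' that co-occur with A.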

f(B|A): "Stripes" Easy! Given the stripe a → { b1: 3, b2: 12, b3: 7, b4: 1, … }: one pass to compute the marginal (a, *), another pass to directly compute f(B|A).

f(B|A): “Pairs” For this to work: Must emit extra (a, *) for every bn in mapper Must make sure all a’s get sent to same reducer (use partitioner) Must make sure (a, *) comes first (define sort order) Must hold state in reducer across different key-value pairs (a, *) → 32 Reducer holds this value in memory (a, b1) → 3 (a, b2) → 12 (a, b3) → 7 (a, b4) → 1 … (a, b1) → 3 / 32 (a, b2) → 12 / 32 (a, b3) → 7 / 32 (a, b4) → 1 / 32 …

“Order Inversion” Common design pattern Optimizations Computing relative frequencies requires marginal counts But marginal cannot be computed until you see all counts Buffering is a bad idea! Trick: getting the marginal counts to arrive at the reducer before the joint counts Optimizations Apply in-memory combining pattern to accumulate marginal counts Should we apply combiners?
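The corresponding reducer is not shown in the transcript; a sketch of the order-inversion reducer for the pairs version of f(B|A), under the same conventions, assuming the partitioner routes all (a, ·) keys to one reducer and the sort order places (a, *) first:

def emit(key, value): print(key, value)        # stand-in for the framework's emit

class Reducer:
    def configure(self):
        self.marginal = 0

    def reduce(self, pair, counts):
        a, b = pair
        if b == '*':
            self.marginal = sum(counts)        # the marginal N(a) arrives first
        else:
            emit(pair, sum(counts) / self.marginal)   # f(b|a) = N(a, b) / N(a)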

Synchronization: Pairs vs. Stripes Approach 1: turn synchronization into an ordering problem Sort keys into correct order of computation Partition key space so that each reducer gets the appropriate set of partial results Hold state in reducer across multiple key-value pairs to perform computation Illustrated by the “pairs” approach Approach 2: construct data structures that bring partial results together Each reducer receives all the data it needs to complete the computation Illustrated by the “stripes” approach

Secondary Sorting MapReduce sorts input to reducers by key Values may be arbitrarily ordered What if want to sort value also? E.g., k → (v1, r), (v3, r), (v4, r), (v8, r)…

Secondary Sorting: Solutions. Solution 1: buffer values in memory, then sort. Why is this a bad idea? Solution 2: the "value-to-key conversion" design pattern: form a composite intermediate key, (k, v1); let the execution framework do the sorting; preserve state across multiple key-value pairs to handle processing. Anything else we need to do?

Recap: Tools for Synchronization Cleverly-constructed data structures Bring data together Sort order of intermediate keys Control order in which reducers process keys Partitioner Control which reducer processes which keys Preserving state in mappers and reducers Capture dependencies across multiple keys and values

Issues and Tradeoffs Number of key-value pairs Object creation overhead Time for sorting and shuffling pairs across the network Size of each key-value pair De/serialization overhead Local aggregation Opportunities to perform local aggregation varies Combiners make a big difference Combiners vs. in-mapper combining RAM vs. disk vs. network

Hadoop Ecosystem Tour (Course outline: Setting the stage · Introduction to MapReduce · MapReduce algorithm design · Hadoop ecosystem tour · Text retrieval · Managing relational data · Graph algorithms)

From GFS to Bigtable Google’s GFS is a distributed file system Bigtable is a storage system for structured data Built on top of GFS Solves many GFS issues: real-time access, short files, short reads Serves as a source and a sink for MapReduce jobs

Bigtable: Data Model A table is a sparse, distributed, persistent multidimensional sorted map Map indexed by a row key, column key, and a timestamp (row:string, column:string, time:int64) → uninterpreted byte array Supports lookups, inserts, deletes Single row transactions only Image Source: Chang et al., OSDI 2006

HBase Image Source: http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html

The datacenter is the computer! It’s all about the right level of abstraction Source: NY Times (6/14/2006)

Need for High-Level Languages Hadoop is great for large-data processing! But writing Java programs for everything is verbose and slow Analysts don’t want to (or can’t) write Java Solution: develop higher-level data processing languages Hive: HQL is like SQL Pig: Pig Latin is a dataflow language

Hive and Pig Hive: data warehousing application in Hadoop Query language is HQL, variant of SQL Tables stored on HDFS as flat files Developed by Facebook, now open source Pig: large-scale data processing system Scripts are written in Pig Latin, a dataflow language Developed by Yahoo!, now open source Roughly 1/3 of all Yahoo! internal jobs Common idea: Provide higher-level language to facilitate large-data processing Higher-level language “compiles down” to Hadoop jobs

Hive: Example Hive looks similar to an SQL database Relational join on two tables: Table of word counts from Shakespeare collection Table of word counts from the bible SELECT s.word, s.freq, k.freq FROM shakespeare s JOIN bible k ON (s.word = k.word) WHERE s.freq >= 1 AND k.freq >= 1 ORDER BY s.freq DESC LIMIT 10; the 25848 62394 I 23031 8854 and 19671 38985 to 18038 13526 of 16700 34654 a 14170 8057 you 12702 2720 my 11297 4135 in 10797 12445 is 8882 6884 Source: Material drawn from Cloudera training VM

Hive: Behind the Scenes SELECT s.word, s.freq, k.freq FROM shakespeare s JOIN bible k ON (s.word = k.word) WHERE s.freq >= 1 AND k.freq >= 1 ORDER BY s.freq DESC LIMIT 10; (Abstract Syntax Tree) (TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF shakespeare s) (TOK_TABREF bible k) (= (. (TOK_TABLE_OR_COL s) word) (. (TOK_TABLE_OR_COL k) word)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) word)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) freq)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL k) freq))) (TOK_WHERE (AND (>= (. (TOK_TABLE_OR_COL s) freq) 1) (>= (. (TOK_TABLE_OR_COL k) freq) 1))) (TOK_ORDERBY (TOK_TABSORTCOLNAMEDESC (. (TOK_TABLE_OR_COL s) freq))) (TOK_LIMIT 10))) (one or more of MapReduce jobs)

Hive: Behind the Scenes STAGE DEPENDENCIES: Stage-1 is a root stage Stage-2 depends on stages: Stage-1 Stage-0 is a root stage STAGE PLANS: Stage: Stage-1 Map Reduce Alias -> Map Operator Tree: s TableScan alias: s Filter Operator predicate: expr: (freq >= 1) type: boolean Reduce Output Operator key expressions: expr: word type: string sort order: + Map-reduce partition columns: tag: 0 value expressions: expr: freq type: int k alias: k tag: 1 Stage: Stage-2 Map Reduce Alias -> Map Operator Tree: hdfs://localhost:8022/tmp/hive-training/364214370/10002 Reduce Output Operator key expressions: expr: _col1 type: int sort order: - tag: -1 value expressions: expr: _col0 type: string expr: _col2 Reduce Operator Tree: Extract Limit File Output Operator compressed: false GlobalTableId: 0 table: input format: org.apache.hadoop.mapred.TextInputFormat output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat Stage: Stage-0 Fetch Operator limit: 10 Reduce Operator Tree: Join Operator condition map: Inner Join 0 to 1 condition expressions: 0 {VALUE._col0} {VALUE._col1} 1 {VALUE._col0} outputColumnNames: _col0, _col1, _col2 Filter Operator predicate: expr: ((_col0 >= 1) and (_col2 >= 1)) type: boolean Select Operator expressions: expr: _col1 type: string expr: _col0 type: int expr: _col2 File Output Operator compressed: false GlobalTableId: 0 table: input format: org.apache.hadoop.mapred.SequenceFileInputFormat output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat

Pig: Example. Task: find the top 10 most visited pages in each category. Inputs: a Visits table (User, Url, Time) with rows such as Amy, cnn.com, 8:00; bbc.com, 10:00; flickr.com, 10:05; Fred, 12:00; and a Url Info table (Url, Category, PageRank) with rows such as cnn.com, News, 0.9; bbc.com, 0.8; flickr.com, Photos, 0.7; espn.com, Sports. Pig Slides adapted from Olston et al. (SIGMOD 2008)

Pig Query Plan (diagram): Load Visits → Group by url → Foreach url, generate count; Load Url Info; Join on url → Group by category → Foreach category, generate top10(urls). Pig Slides adapted from Olston et al. (SIGMOD 2008)

Pig Script (Pig Slides adapted from Olston et al., SIGMOD 2008):
visits = load '/data/visits' as (user, url, time);
gVisits = group visits by url;
visitCounts = foreach gVisits generate url, count(visits);
urlInfo = load '/data/urlInfo' as (url, category, pRank);
visitCounts = join visitCounts by url, urlInfo by url;
gCategories = group visitCounts by category;
topUrls = foreach gCategories generate top(visitCounts,10);
store topUrls into '/data/topUrls';

Pig Script in Hadoop (diagram): the dataflow compiles into three MapReduce jobs. Map1/Reduce1: Load Visits, Group by url, Foreach url generate count; Map2/Reduce2: Load Url Info, Join on url; Map3/Reduce3: Group by category, Foreach category generate top10(urls). Pig Slides adapted from Olston et al. (SIGMOD 2008)

Text Retrieval (Course outline: Setting the stage · Introduction to MapReduce · MapReduce algorithm design · Hadoop ecosystem tour · Text retrieval · Managing relational data · Graph algorithms)

Abstract IR Architecture (diagram). Offline: document acquisition (e.g., web crawling) feeds a representation function that turns documents into document representations stored in an index. Online: the query passes through a representation function to produce a query representation; a comparison function matches it against the index and returns hits.

"Bag of Words" Term weights computed as functions of: Term frequency Collection frequency Document frequency Average document length … Well-known weighting functions TF.IDF BM25 Dirichlet scores (LM framework) Similarity boils down to inner products of feature vectors:
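The similarity formula itself is not reproduced in the transcript; in the usual vector-space formulation it is the inner product sim(q, d) = Σ_t w_{t,q} · w_{t,d}, summed over the terms t shared by the query q and the document d.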

Inverted Index (example): four documents, Doc 1 "one fish, two fish", Doc 2 "red fish, blue fish", Doc 3 "cat in the hat", Doc 4 "green eggs and ham". Each term stores its document frequency (df) and a postings list of (docno, tf) pairs: fish → df 2: (1, 2), (2, 2); blue → df 1: (2, 1); cat → df 1: (3, 1); egg → df 1: (4, 1); green → df 1: (4, 1); ham → df 1: (4, 1); hat → df 1: (3, 1); one → df 1: (1, 1); red → df 1: (2, 1); two → df 1: (1, 1).

Inverted Index: Positional Information (same example, with term positions added to each posting): fish → (1, 2, [2,4]), (2, 2, [2,4]); blue → (2, 1, [3]); cat → (3, 1, [1]); egg → (4, 1, [2]); green → (4, 1, [1]); ham → (4, 1, [3]); hat → (3, 1, [2]); one → (1, 1, [1]); red → (2, 1, [1]); two → (1, 1, [3]).

Retrieval in a Nutshell Look up postings lists corresponding to query terms Traverse postings for each query term Store partial query-document scores in accumulators Select top k results to return

Retrieval: Document-at-a-Time. Evaluate documents one at a time (score all query terms): postings for the query terms (e.g., blue and fish) are traversed in parallel by docno, and each document's score is checked against the top-k accumulators (e.g., a priority queue): if it qualifies, insert it and extract-min if the queue grows too large; otherwise do nothing. Tradeoffs: small memory footprint (good); must read through all postings (bad), but skipping possible; more disk seeks (bad), but blocking possible.

Retrieval: Term-at-a-Time. Evaluate documents one query term at a time, usually starting from the rarest term (often with tf-sorted postings); partial document scores are kept in accumulators (e.g., a hash of docno → score). Tradeoffs: early termination heuristics (good); large memory footprint (bad), but filtering heuristics possible.

MapReduce it? The indexing problem: scalability is critical; must be relatively fast, but need not be real time; fundamentally a batch operation; incremental updates may or may not be important; for the web, crawling is a challenge in itself. (Perfect for MapReduce!) The retrieval problem: must have sub-second response time; for the web, only need relatively few results. (Uh… not so good…)

Indexing: Performance Analysis Fundamentally, a large sorting problem Terms usually fit in memory Postings usually don’t How is it done on a single machine? How can it be done with MapReduce? First, let’s characterize the problem size: Size of vocabulary Size of postings

Vocabulary Size: Heaps' Law. Heaps' Law is linear in log-log space; vocabulary size grows unbounded! M is vocabulary size, T is collection size (number of tokens), and k and b are constants. Typically, k is between 30 and 100 and b is between 0.4 and 0.6.
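The formula itself does not appear in the transcript; Heaps' Law is M = k · T^b, so log M = log k + b · log T, which is the straight line in log-log space referred to above.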

Heaps' Law for RCV1: k = 44, b = 0.49. For the first 1,000,020 tokens: predicted vocabulary = 38,323; actual = 38,365. Reuters-RCV1 collection: 806,791 newswire documents (Aug 20, 1996 - Aug 19, 1997). Manning, Raghavan, Schütze, Introduction to Information Retrieval (2008)

Postings Size: Zipf's Law. Zipf's Law is (also) linear in log-log space; it is a specific case of power-law distributions. In other words: a few elements occur very frequently, while many elements occur very infrequently. cf_i is the collection frequency of the i-th most common term and c is a constant.
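The formula itself does not appear in the transcript; Zipf's Law states cf_i = c / i: the collection frequency of the i-th most common term is (roughly) inversely proportional to its rank, again a straight line in log-log space.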

Zipf’s Law for RCV1 Fit isn’t that good… but good enough! Reuters-RCV1 collection: 806,791 newswire documents (Aug 20, 1996-August 19, 1997) Manning, Raghavan, Schütze, Introduction to Information Retrieval (2008)

MapReduce: Index Construction Map over all documents Emit term as key, (docno, tf) as value Emit other information as necessary (e.g., term position) Sort/shuffle: group postings by term Reduce Gather and sort the postings (e.g., by docno or tf) Write postings to disk MapReduce does all the heavy lifting!

Inverted Indexing with MapReduce (diagram, over three documents: Doc 1 "one fish, two fish", Doc 2 "red fish, blue fish", Doc 3 "cat in the hat"). Map emits (term, (docno, tf)) pairs: one → (1, 1), two → (1, 1), fish → (1, 2), red → (2, 1), blue → (2, 1), fish → (2, 2), cat → (3, 1), hat → (3, 1). Shuffle and sort aggregates values by term; Reduce writes the postings lists, e.g., fish → (1, 2), (2, 2).

Inverted Indexing: Pseudo-Code
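The listing is not reproduced; a sketch of the baseline indexer under the same conventions as the earlier sketches, with postings as (docno, tf) pairs:

def emit(key, value): print(key, value)        # stand-in for the framework's emit

class Mapper:
    def map(self, docno, doc):
        counts = {}
        for term in doc.split():
            counts[term] = counts.get(term, 0) + 1
        for term, tf in counts.items():
            emit(term, (docno, tf))            # one posting per term per document

class Reducer:
    def reduce(self, term, postings):
        postings = sorted(postings)            # buffer and sort postings by docno
        emit(term, postings)                   # write the complete postings list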

Positional Indexes (same dataflow, with positions carried in the values): Map emits one → (1, 1, [1]), two → (1, 1, [3]), fish → (1, 2, [2,4]), red → (2, 1, [1]), blue → (2, 1, [3]), fish → (2, 2, [2,4]), cat → (3, 1, [1]), hat → (3, 1, [2]); shuffle and sort aggregates values by term; Reduce writes postings such as fish → (1, 2, [2,4]), (2, 2, [2,4]).

Inverted Indexing: Pseudo-Code What’s the problem?

Scalability Bottleneck Initial implementation: terms as keys, postings as values Reducers must buffer all postings associated with key (to sort) What if we run out of memory to buffer postings? Uh oh!

Another Try… Instead of a single key fish with unsorted values (1, 2, [2,4]), (34, 1, [23]), (21, 3, [1,8,22]), (35, 2, [8,41]), (80, 3, [2,9,76]), (9, 1, [9]), emit composite keys: (fish, 1) → [2,4], (fish, 9) → [9], (fish, 21) → [1,8,22], (fish, 34) → [23], (fish, 35) → [8,41], (fish, 80) → [2,9,76]. How is this different? Let the framework do the sorting; term frequency implicitly stored; directly write compressed postings. Where have we seen this before?

Postings Encoding. Conceptually: fish → (1, 2), (9, 1), (21, 3), (34, 1), (35, 2), (80, 3), … In practice: don't encode docnos, encode gaps (or d-gaps): fish → (1, 2), (8, 1), (12, 3), (13, 1), (1, 2), (45, 3), … But it's not obvious that this saves space…

Overview of Index Compression. Byte-aligned vs. bit-aligned. Non-parameterized bit-aligned: unary codes, γ codes, δ codes. Parameterized bit-aligned: Golomb codes (local Bernoulli model). Block-based methods: Simple-9, PForDelta. Want more detail? Start with Managing Gigabytes by Witten, Moffat, and Bell!

Index Compression: Performance. Comparison of index size (bits per pointer), Bible / TREC: Unary 262 / 1918; Binary 15 / 20; γ 6.51 / 6.63; δ 6.23 / 6.38; Golomb 6.09 / 5.84 (one common approach). Bible: King James version of the Bible, 31,101 verses (4.3 MB); TREC: TREC disks 1+2, 741,856 docs (2070 MB). Issue: for Golomb compression, optimal b ≈ 0.69 (N/df), which means a different b for every term! Witten, Moffat, Bell, Managing Gigabytes (1999)

Chicken and Egg? But wait! How do we set the Golomb parameter b? Optimal b ≈ 0.69 (N/df), so we need the df to set b… but we don't know the df until we've seen all postings! (The reducer receives (fish, 1) → [2,4], (fish, 9) → [9], (fish, 21) → [1,8,22], (fish, 34) → [23], (fish, 35) → [8,41], (fish, 80) → [2,9,76], … and writes directly to disk.) Sound familiar?

Getting the df In the mapper: In the reducer: Emit “special” key-value pairs to keep track of df In the reducer: Make sure “special” key-value pairs come first: process them to determine df Remember: proper partitioning!

Getting the df: Modified Mapper. Input document: Doc 1, "one fish, two fish". Emit normal key-value pairs: (fish, 1) → [2,4]; (one, 1) → [1]; (two, 1) → [3]. Also emit "special" key-value pairs to keep track of df: (fish, *) → [1]; (one, *) → [1]; (two, *) → [1].

Getting the df: Modified Reducer. First, compute the df by summing the contributions from all "special" key-value pairs, e.g., (fish, *) → [63], [82], [27], …, then compute the Golomb parameter b. Important: properly define the sort order to make sure the "special" key-value pairs come first! The regular postings (fish, 1) → [2,4], (fish, 9) → [9], (fish, 21) → [1,8,22], (fish, 34) → [23], (fish, 35) → [8,41], (fish, 80) → [2,9,76], … then arrive in order and the compressed postings are written directly to disk. Where have we seen this before?

MapReduce it? (Recap.) The indexing problem: scalability is paramount; must be relatively fast, but need not be real time; fundamentally a batch operation; incremental updates may or may not be important; for the web, crawling is a challenge in itself. The retrieval problem: must have sub-second response time; for the web, only need relatively few results.

Retrieval with MapReduce? MapReduce is fundamentally batch-oriented Optimized for throughput, not latency Startup of mappers and reducers is expensive MapReduce is not suitable for real-time queries! Use separate infrastructure for retrieval…

Important Ideas: Partitioning (for scalability), Replication (for redundancy), Caching (for speed), Routing (for load balancing). The rest is just details!

Term vs. Document Partitioning (diagram): two ways to split the collection across machines. Term partitioning: each partition holds a slice of the terms (T1, T2, T3, …) over all documents. Document partitioning: each partition holds all terms for a slice of the documents (D1, D2, D3, …).

Typical Search Architecture (diagram): brokers route queries to a grid of partitions (document partitioning), each replicated for throughput and fault tolerance.

Managing Relational Data (Course outline: Setting the stage · Introduction to MapReduce · MapReduce algorithm design · Hadoop ecosystem tour · Text retrieval · Managing relational data · Graph algorithms)

Managing Relational Data In the “good old days”, organizations used relational databases to manage big data Then along came Hadoop… Where does MapReduce fit in? BTW, Hadoop is “hot” in the SIGMOD community…

Relational Databases vs. MapReduce. Relational databases: multipurpose (analysis and transactions; batch and interactive); data integrity via ACID transactions; lots of tools in the software ecosystem (for ingesting, reporting, etc.); supports SQL (and SQL integration, e.g., JDBC); automatic SQL query optimization. MapReduce (Hadoop): designed for large clusters, fault tolerant; data is accessed in "native format"; supports many query languages; programmers retain control over performance; open source. Source: O'Reilly Blog post by Joseph Hellerstein (11/19/2008)

Database Workloads OLTP (online transaction processing) Typical applications: e-commerce, banking, airline reservations User facing: real-time, low latency, highly-concurrent Tasks: relatively small set of “standard” transactional queries Data access pattern: random reads, updates, writes (involving relatively small amounts of data) OLAP (online analytical processing) Typical applications: business intelligence, data mining Back-end processing: batch workloads, less concurrency Tasks: complex analytical queries, often ad hoc Data access pattern: table scans, large amounts of data involved per query

One Database or Two? Downsides of co-existing OLTP and OLAP workloads Poor memory management Conflicting data access patterns Variable latency Solution: separate databases User-facing OLTP database for high-volume transactions Data warehouse for OLAP workloads How do we connect the two?

OLTP/OLAP Architecture ETL (Extract, Transform, and Load)

OLTP/OLAP Integration OLTP database for user-facing transactions Retain records of all activity Periodic ETL (e.g., nightly) Extract-Transform-Load (ETL) Extract records from source Transform: clean data, check integrity, aggregate, etc. Load into OLAP database OLAP database for data warehousing Business intelligence: reporting, ad hoc queries, data mining, etc. Feedback to improve OLTP services

Business Intelligence Premise: more data leads to better business decisions Periodic reporting as well as ad hoc queries Analysts, not programmers (importance of tools and dashboards) Examples: Slicing-and-dicing activity by different dimensions to better understand the marketplace Analyzing log data to improve OLTP experience Analyzing log data to better optimize ad placement Analyzing purchasing trends for better supply-chain management Mining for correlations between otherwise unrelated activities

OLTP/OLAP Architecture: Hadoop? What about here? ETL (Extract, Transform, and Load) Hadoop here?

OLTP/OLAP/Hadoop Architecture ETL (Extract, Transform, and Load) Why does this make sense?

ETL Bottleneck. Reporting is often a nightly task: ETL is often slow (why?); what happens if processing 24 hours of data takes longer than 24 hours? Hadoop is perfect: most likely, you already have some data warehousing solution; ingest is limited by the speed of HDFS; scales out with more nodes; massively parallel; ability to use any processing tool; much cheaper than parallel databases; ETL is a batch process anyway!

Working Scenario. Two tables: user demographics (gender, age, income, etc.) and user page visits (URL, time spent, etc.). Analyses we might want to perform: statistics on demographic characteristics; statistics on page visits; statistics on page visits by URL; statistics on page visits by demographic characteristic; … How to perform common relational operations in MapReduce… except, don't! (later)

Relational Algebra. Primitives: projection (π), selection (σ), Cartesian product (×), set union (∪), set difference (−), rename (ρ). Other operations: join (⋈), group by… aggregation, …

Projection (diagram): each input tuple R1…R5 maps to an output tuple containing only the selected attributes.

Projection in MapReduce Easy! Map over tuples, emit new tuples with appropriate attributes No reducers, unless for regrouping or resorting tuples Alternatively: perform in reducer, after some other processing Basically limited by HDFS streaming speeds Speed of encoding/decoding tuples becomes important Relational databases take advantage of compression Semistructured data? No problem!

Selection (diagram): only the tuples that satisfy the predicate are kept (here R1 and R3 out of R1…R5).

Selection in MapReduce Easy! Map over tuples, emit only tuples that meet criteria No reducers, unless for regrouping or resorting tuples Alternatively: perform in reducer, after some other processing Basically limited by HDFS streaming speeds Speed of encoding/decoding tuples becomes important Relational databases take advantage of compression Semistructured data? No problem!

Group by… Aggregation Example: What is the average time spent per URL? In SQL: SELECT url, AVG(time) FROM visits GROUP BY url In MapReduce: Map over tuples, emit time, keyed by url Framework automatically groups values by keys Compute average in reducer Optimize with combiners
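A sketch of the MapReduce version under the same conventions as the earlier sketches, assuming each input tuple is (user, url, time); a combiner would pass (sum, count) pairs exactly as in the earlier mean example:

def emit(key, value): print(key, value)        # stand-in for the framework's emit

class Mapper:
    def map(self, key, visit):
        user, url, time = visit
        emit(url, time)                        # time spent, keyed by url

class Reducer:
    def reduce(self, url, times):
        times = list(times)
        emit(url, sum(times) / len(times))     # AVG(time) ... GROUP BY url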

Relational Joins (diagram): a join pairs tuples of R with tuples of S that share the join key (e.g., R1-S2, R2-S4, R3-S1, R4-S3).

Types of Relationships Many-to-Many One-to-Many One-to-One

Join Algorithms in MapReduce Reduce-side join Map-side join In-memory join Striped variant Memcached variant

Reduce-side Join Basic idea: group by join key Two variants Map over both sets of tuples Emit tuple as value with join key as the intermediate key Execution framework brings together tuples sharing the same key Perform actual join in reducer Similar to a “sort-merge join” in database terminology Two variants 1-to-1 joins 1-to-many and many-to-many joins

Reduce-side Join: 1-to-1 (diagram): map emits each tuple keyed by its join key (R1, R4, S2, S3); in the reducer, tuples sharing a join key are brought together (e.g., R1 with S2; S3 with R4). Note: no guarantee whether the R tuple or the S tuple comes first.

Reduce-side Join: 1-to-many (diagram): the reducer receives one R tuple (R1) followed by many S tuples (S2, S3, S9, …) sharing the same join key. What's the problem?

Reduce-side Join: V-to-K Conversion. In the reducer: when a new join key is encountered, hold the R tuple (R1) in memory and cross it with the S records that follow (S2, S3, S9); when the next new key arrives, do the same (R4 crossed with S3, S7).

Reduce-side Join: many-to-many. In the reducer: hold all the R tuples for a key (R1, R5, R8) in memory, then cross them with the S records that follow (S2, S3, S9). What's the problem?

Map-side Join: Basic Idea. Assume the two datasets are sorted by the join key; then a sequential scan through both datasets accomplishes the join (called a "merge join" in database terminology).

Map-side Join: Parallel Scans If datasets are sorted by join key, join can be accomplished by a scan over both datasets How can we accomplish this in parallel? Partition and sort both datasets in the same manner In MapReduce: Map over one dataset, read from other corresponding partition No reducers necessary (unless to repartition or resort) Consistently partitioned datasets: realistic to expect?

In-Memory Join Basic idea: load one dataset into memory, stream over other dataset Works if R << S and R fits into memory Called a “hash join” in database terminology MapReduce implementation Distribute R to all nodes Map over S, each mapper loads R in memory, hashed by join key For every tuple in S, look up join key in R No reducers, unless for regrouping or resorting tuples
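A sketch of the in-memory ("hash") join under the stated assumption that R fits in memory and has been distributed to every node; load_smaller_relation() and the tuple layout are placeholders for illustration, not a real API:

def emit(key, value): print(key, value)        # stand-in for the framework's emit

def load_smaller_relation():
    return []                                  # placeholder: in practice, read R from the distributed cache

class Mapper:
    def configure(self):
        self.R = {}                            # hash the smaller relation R by join key
        for join_key, r_tuple in load_smaller_relation():
            self.R.setdefault(join_key, []).append(r_tuple)

    def map(self, join_key, s_tuple):          # stream over the larger relation S
        for r_tuple in self.R.get(join_key, []):
            emit(join_key, (r_tuple, s_tuple)) # joined output; no reducers needed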

In-Memory Join: Variants. Striped variant: R too big to fit into memory? Divide R into R1, R2, R3, … s.t. each Rn fits into memory; perform the in-memory join ∀n: Rn ⋈ S; take the union of all join results. Memcached join: load R into memcached; replace the in-memory hash lookup with a memcached lookup.

Which join to use? In-memory join > map-side join > reduce-side join Why? Limitations of each? In-memory join: memory Map-side join: sort order and partitioning Reduce-side join: general purpose

Key Features in Databases Common optimizations in relational databases Reducing the amount of data to read Reducing the amount of tuples to decode Data placement Query planning and cost estimation Same ideas can be applied to MapReduce For example, column stores in Google Dremel A few commercialized products Many research prototypes

One size does not fit all… Databases when: You know what the question is: query optimizers work well Well-specified schema, clean data MapReduce when: You don’t necessarily know what the question is: go brute force Exploratory data analysis Semi-structured, noisy, diverse data ETL is the insight-generation process

Graph Algorithms (Course outline: Setting the stage · Introduction to MapReduce · MapReduce algorithm design · Hadoop ecosystem tour · Text retrieval · Managing relational data · Graph algorithms)

What’s a graph? G = (V,E), where Different types of graphs: V represents the set of vertices (nodes) E represents the set of edges (links) Both vertices and edges may contain additional information Different types of graphs: Directed vs. undirected edges Presence or absence of cycles Graphs are everywhere: Hyperlink structure of the Web Physical structure of computers on the Internet Interstate highway system Social networks

Source: Wikipedia (Königsberg)

Some Graph Problems Finding shortest paths Routing Internet traffic and UPS trucks Finding minimum spanning trees Telco laying down fiber Finding Max Flow Airline scheduling Identify “special” nodes and communities Breaking up terrorist cells, spread of avian flu Bipartite matching Monster.com, Match.com And of course... PageRank

Graphs and MapReduce Graph algorithms typically involve: Performing computations at each node: based on node features, edge features, and local link structure Propagating computations: “traversing” the graph Key questions: How do you represent graph data in MapReduce? How do you traverse a graph in MapReduce?

Representing Graphs G = (V, E) Two common representations Adjacency matrix Adjacency list

Adjacency Matrices. Represent a graph as an n × n square matrix M, where n = |V| and Mij = 1 means there is a link from node i to node j. (Diagram: an example 4-node graph and its adjacency matrix.)

Adjacency Matrices: Critique Advantages: Amenable to mathematical manipulation Iteration over rows and columns corresponds to computations on outlinks and inlinks Disadvantages: Lots of zeros for sparse matrices Lots of wasted space

Adjacency Lists. Take adjacency matrices… and throw away all the zeros. For the same example: 1: 2, 4; 2: 1, 3, 4; 3: 1; 4: 1, 3.

Adjacency Lists: Critique Advantages: Much more compact representation Easy to compute over outlinks Disadvantages: Much more difficult to compute over inlinks

Single Source Shortest Path Problem: find shortest path from a source node to one or more target nodes Shortest might also mean lowest weight or cost First, a refresher: Dijkstra’s Algorithm

Dijkstra's Algorithm Example (sequence of diagrams, from CLR): distance estimates start at 0 for the source and ∞ everywhere else, and are progressively lowered as the minimum-distance node is settled at each step, until all shortest-path distances are final.

Single Source Shortest Path Problem: find shortest path from a source node to one or more target nodes Shortest might also mean lowest weight or cost Single processor machine: Dijkstra’s Algorithm MapReduce: parallel Breadth-First Search (BFS)

Finding the Shortest Path. Consider the simple case of equal edge weights; the solution can be defined inductively. Here's the intuition. Define: b is reachable from a if b is on the adjacency list of a. DistanceTo(s) = 0. For all nodes p reachable from s, DistanceTo(p) = 1. For all nodes n reachable from some other set of nodes M, DistanceTo(n) = 1 + min(DistanceTo(m), m ∈ M). (Diagram: node n reached from nodes m1, m2, m3, which lie at distances d1, d2, d3 from the source s.)

Source: Wikipedia (Wave)

Visualizing Parallel BFS

From Intuition to Algorithm. Data representation: key: node n; value: d (distance from start) and the adjacency list (nodes reachable from n). Initialization: for all nodes except the start node, d = ∞. Mapper: ∀m ∈ adjacency list, emit (m, d + 1). Sort/shuffle: groups distances by reachable nodes. Reducer: selects the minimum-distance path for each reachable node; additional bookkeeping is needed to keep track of the actual path.

Multiple Iterations Needed Each MapReduce iteration advances the “known frontier” by one hop Subsequent iterations include more and more reachable nodes as frontier expands Multiple iterations are needed to explore entire graph Preserving graph structure: Problem: Where did the adjacency list go? Solution: mapper emits (n, adjacency list) as well

BFS Pseudo-Code
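The BFS listing is not reproduced; a sketch under the same conventions as the earlier sketches, where each value is either a node structure (distance, adjacency list) or a candidate distance:

def emit(key, value): print(key, value)        # stand-in for the framework's emit
INF = float("inf")

class Mapper:
    def map(self, node_id, node):
        d, adjacency = node
        emit(node_id, node)                    # pass along the graph structure
        for m in adjacency:
            emit(m, d + 1)                     # candidate distance to m via node_id

class Reducer:
    def reduce(self, node_id, values):
        d_min, adjacency = INF, []
        for v in values:
            if isinstance(v, tuple):           # the node structure itself
                adjacency = v[1]
                d_min = min(d_min, v[0])
            else:
                d_min = min(d_min, v)          # a candidate distance
        emit(node_id, (d_min, adjacency))      # re-emit structure with updated distance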

Stopping Criterion How many iterations are needed in parallel BFS (equal edge weight case)? When a node is first “discovered”, we’re guaranteed to have found the shortest path

Comparison to Dijkstra Dijkstra’s algorithm is more efficient At any step it only pursues edges from the minimum-cost path inside the frontier MapReduce explores all paths in parallel Lots of “waste” Useful work is only done at the “frontier” Why can’t we do better using MapReduce?

Weighted Edges Now add positive weights to the edges Simple change: adjacency list now includes a weight w for each edge In mapper, emit (m, d + wp) instead of (m, d + 1) for each node m That’s it?

Stopping Criterion Not true! How many iterations are needed in parallel BFS (positive edge weight case)? When a node is first “discovered”, we’re guaranteed to have found the shortest path Not true!

Additional Complexities (diagram): with positive edge weights, the first time a node is discovered need not be via the shortest path; a cheaper path may run through nodes outside the current search frontier (here, a single weight-10 edge vs. a longer chain of weight-1 edges), so distances found early are not necessarily final.

Stopping Criterion How many iterations are needed in parallel BFS (positive edge weight case)? Practicalities of implementation in MapReduce

Graphs and MapReduce Graph algorithms typically involve: Performing computations at each node: based on node features, edge features, and local link structure Propagating computations: “traversing” the graph Generic recipe: Represent graphs as adjacency lists Perform local computations in mapper Pass along partial results via outlinks, keyed by destination node Perform aggregation in reducer on inlinks to a node Iterate until convergence: controlled by external “driver” Don’t forget to pass the graph structure between iterations

Random Walks Over the Web Random surfer model: User starts at a random Web page User randomly clicks on links, surfing from page to page PageRank Characterizes the amount of time spent on any given page Mathematically, a probability distribution over pages PageRank captures notions of page importance Correspondence to human intuition? One of thousands of features used in web search Note: query-independent

PageRank: Defined. Given page x with inlinks t1…tn, where C(t) is the out-degree of t, α is the probability of a random jump, and N is the total number of nodes in the graph.
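The formula itself is not in the transcript; the definition it refers to is PR(x) = α · (1/N) + (1 − α) · Σ_{i=1..n} PR(t_i) / C(t_i): with probability α the surfer jumps to a random page, and otherwise x receives a share of each inlinking page's PageRank, scaled by that page's out-degree.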

Computing PageRank Properties of PageRank Sketch of algorithm: Can be computed iteratively Effects at each iteration are local Sketch of algorithm: Start with seed PRi values Each page distributes PRi “credit” to all pages it links to Each target page adds up “credit” from multiple in-bound links to compute PRi+1 Iterate until values converge

Simplified PageRank First, tackle the simple case: No random jump factor No dangling links Then, factor in these complexities… Why do we need the random jump? Where do dangling links come from?

Sample PageRank Iteration (1) (diagram): every node starts with PageRank 0.2 and distributes it evenly along its outlinks (e.g., 0.1 per link from a node with two outlinks, 0.066 per link from a node with three).

Sample PageRank Iteration (2) (diagram): the updated values from iteration 1 (e.g., n1 = 0.066, n3 = 0.166, n4 = 0.3, n5 = 0.3) are redistributed along the outlinks in the same way.

PageRank in MapReduce (diagram): mappers take (node, adjacency list) records such as n1 [n2, n4], n2 [n3, n5], n3 [n4], emit a share of each node's PageRank mass to its neighbors, and reducers sum the mass arriving at each node and reattach its adjacency list.

PageRank Pseudo-Code
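The listing is not reproduced; a sketch of the simplified version (no random jump, no dangling-node handling) under the same conventions as the earlier sketches:

def emit(key, value): print(key, value)        # stand-in for the framework's emit

class Mapper:
    def map(self, node_id, node):
        p, adjacency = node
        emit(node_id, node)                    # pass along the graph structure
        if adjacency:
            share = p / len(adjacency)
            for m in adjacency:
                emit(m, share)                 # distribute PageRank mass to neighbors

class Reducer:
    def reduce(self, node_id, values):
        p, adjacency = 0.0, []
        for v in values:
            if isinstance(v, tuple):
                adjacency = v[1]               # recover the graph structure
            else:
                p += v                         # sum incoming PageRank mass
        emit(node_id, (p, adjacency))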

Complete PageRank. Two additional complexities: what is the proper treatment of dangling nodes? How do we factor in the random jump factor? Solution: a second pass to redistribute the "missing PageRank mass" and account for random jumps, where p is the PageRank value from before, p' is the updated PageRank value, |G| is the number of nodes in the graph, and m is the missing PageRank mass.
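The formula itself is not in the transcript; with the symbols defined above it is p' = α · (1/|G|) + (1 − α) · (m/|G| + p), applied to every node in the second pass.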

PageRank Convergence Alternative convergence criteria Iterate until PageRank values don’t change Iterate until PageRank rankings don’t change Fixed number of iterations Convergence for web graphs?

Beyond PageRank Link structure is important for web search PageRank is one of many link-based features: HITS, SALSA, etc. One of many thousands of features used in ranking… Adversarial nature of web search Link spamming Spider traps Keyword stuffing …

Efficient Graph Algorithms: Tricks In-mapper combining: efficient local aggregation Smarter partitioning: create more opportunities for local aggregation Schimmy: avoid shuffling the graph Jimmy Lin and Michael Schatz. Design Patterns for Efficient Graph Algorithms in MapReduce. Proceedings of the Eighth Workshop on Mining and Learning with Graphs Workshop (MLG-2010), pages 78-85, July 2010, Washington, D.C.

In-Mapper Combining. Use combiners: perform local aggregation on map output; downside: intermediate data is still materialized. Better: in-mapper combining: preserve state across multiple map calls, aggregate messages in a buffer, and emit the buffer contents at the end (configure → map → close); downside: requires memory management.

Better Partitioning Default: hash partitioning Randomly assign nodes to partitions Observation: many graphs exhibit local structure E.g., communities in social networks Better partitioning creates more opportunities for local aggregation Unfortunately, partitioning is hard! Sometimes, chicken-and-egg… But cheap heuristics sometimes available For webgraphs: range partition on domain-sorted URLs

Schimmy Design Pattern. The basic implementation contains two dataflows: messages (actual computations) and graph structure ("bookkeeping"). Schimmy: separate the two dataflows and shuffle only the messages. Basic idea: a merge join between graph structure and messages, with both relations sorted by the join key and consistently partitioned (S1-T1, S2-T2, S3-T3).

Do the Schimmy! Schimmy = a reduce-side parallel merge join between graph structure and messages: consistent partitioning between input and intermediate data; mappers emit only messages (the actual computation); reducers read the graph structure directly from HDFS. (Diagram: each reducer merges one graph partition read from HDFS with the corresponding partition of intermediate messages.)

Experiments. Cluster setup: 10 workers, each with 2 cores (3.2 GHz Xeon), 4 GB RAM, 367 GB disk; Hadoop 0.20.0 on RHELS 5.3. Dataset: first English segment of the ClueWeb09 collection; 50.2m web pages (1.53 TB uncompressed, 247 GB compressed); extracted webgraph: 1.4 billion links, 7.0 GB; dataset arranged in crawl order. Setup: measured per-iteration running time (5 iterations); 100 partitions.

Results (annotated running-time charts, revealed incrementally): "Best Practices" baseline; +18%; 1.4b vs. 674m; −15%; −60% with 86m; −69%.

Aside: How to do this better… MapReduce is a poor abstraction for graphs No separation of computation from graph structure Poor locality: unnecessary data movement Bulk synchronous parallel (BSP) as a better model: Google’s Pregel, open source Giraph clone

Bulk Synchronous Parallel (BSP) Computation is modeled as series of supersteps All data held in memory At each iteration: Each vertex invokes a function in parallel Can modify vertex state and edge state Can read messages sent in previous superstep Can send messages (to arbitrary vertices) to be read in next superstep Synchronization barrier between each superstep Interesting (open?) question: how many hammers and how many nails?

Questions?