MapReduce With a heavy debt to: Google Map Reduce OSDI 2004 slides code.google.com.


You are an engineer at: Hare-brained-scheme.com. Your boss comes to your office and says: “We’re going to be hog-nasty rich! We just need a program to search for strings in text files...” Input: a search term and a set of text files. Output: the list of files containing the search term.

One solution

public class StringFinder {
  int main(…) {
    foreach(File f in getInputFiles()) {
      if(f.contains(searchTerm))
        results.add(f.getFileName());
    }
    System.out.println(“Files: ” + results.toString());
  }
}

“But, uh, marketing says we have to search a lot of files. More than will fit on one disk…”
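The slide's pseudocode can be made runnable roughly as follows. This is a sketch, not code from the lecture: the class layout, `find` method, and the temp-file demo in `main` are all illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Single-machine string search: works fine until the input
// no longer fits on one disk, as the slides point out.
public class StringFinder {
    // Return the names of the files whose contents contain searchTerm.
    public static List<String> find(List<Path> inputFiles, String searchTerm)
            throws IOException {
        List<String> results = new ArrayList<>();
        for (Path f : inputFiles) {
            if (Files.readString(f).contains(searchTerm)) {
                results.add(f.getFileName().toString());
            }
        }
        return results;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "out damned spot");
        System.out.println("Files: " + find(List.of(tmp), "damned"));
    }
}
```

The sequential loop over files is exactly what stops scaling once the corpus outgrows one machine.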

Another solution Throw hardware at the problem! Use your StringFinder class as is… but get lots of disks! “But, uh, well, marketing says it’s too slow…and besides, we need it to work on the web…”

Third Time’s a charm

[Diagram: a Web Server routes each search query to several StringFinder machines, each holding its own indexed data.]

1. How do we distribute the searchable files on our machines?
2. What if our webserver goes down?
3. What if a StringFinder machine dies? How would you know it was dead?
4. What if marketing comes and says, “well, we also want to show pictures of the earth from space too! Ooh.. and the moon too!”

StringFinder was the easy part! You really need general infrastructure:
– Many different tasks
– Want to use hundreds or thousands of PCs
– Continue to function if something breaks
– Must be easy to program
MapReduce addresses this problem!

MapReduce
Programming model + infrastructure:
– Write programs that run on lots of machines
– Automatic parallelization and distribution
– Fault-tolerance
– Scheduling, status and monitoring
Cool. What’s the catch?

MapReduce Programming Model
Input & Output: sets of key/value pairs. Programmer writes 2 functions:
map(in_key, in_value) -> list(out_key, intermediate_value)
– Processes input key/value pairs
– Produces intermediate key/value pairs
reduce(out_key, list(intermediate_value)) -> list(out_value)
– Combines all intermediate values for a given key
– Produces a merged set of outputs

Example: Counting Words…

map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):
  // output_key: a word
  // output_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));

MapReduce handles all the other details!
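The word-count pseudocode can be simulated in a single Java process, which makes the map/group-by-key/reduce structure concrete. The class name and the `run` driver are illustrative; a real job would let Hadoop do the grouping across machines.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Single-process sketch of word-count MapReduce: map emits
// (word, "1"), the "framework" groups by key, reduce sums.
public class WordCountSim {
    // map: document contents -> list of (word, "1") pairs
    static List<Map.Entry<String, String>> map(String contents) {
        List<Map.Entry<String, String>> out = new ArrayList<>();
        for (String w : contents.split("\\s+")) {
            if (!w.isEmpty()) out.add(Map.entry(w, "1"));
        }
        return out;
    }

    // reduce: sum the intermediate counts for one word
    static int reduce(List<String> intermediateValues) {
        int result = 0;
        for (String v : intermediateValues) result += Integer.parseInt(v);
        return result;
    }

    // Driver standing in for the framework: map every document,
    // group intermediate pairs by key, then reduce each group.
    public static Map<String, Integer> run(List<String> documents) {
        Map<String, List<String>> grouped = new HashMap<>();
        for (String doc : documents) {
            for (Map.Entry<String, String> kv : map(doc)) {
                grouped.computeIfAbsent(kv.getKey(), k -> new ArrayList<>())
                       .add(kv.getValue());
            }
        }
        Map<String, Integer> counts = new HashMap<>();
        grouped.forEach((word, vals) -> counts.put(word, reduce(vals)));
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("to be or not to be")));
    }
}
```

Everything in `run` other than the two user functions is what the real system provides: partitioning, shuffling, and scheduling across machines.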

How does parallelization work?
[Diagram of the INPUT FILE(s) being split and processed in parallel; the figure is not recoverable from the transcript.]

All you have to do is… Make all your programs into MapReduce algorithms… MapReduce is Turing complete! Can we really make any program this way?

Your Project: A Search Engine

$> hadoop -j search.jar “out damned spot”
Search Complete
10 relevant files found
Terms: out, damned, spot
macbeth (MAX_RELEVANCE)
...Out, damned spot!...
...Like Valor's minion carved out his passage...
...What hands are here? Ha, they pluck out mine eyes!...
...MACBETH. Hang out our banners on the outward walls;...
...Lady Macbeth is carried out...
…

Search Engine Components
Use Hadoop (open source MapReduce). 5 pairs of Map/Reduce classes:
– Search Index: make the search fast
– Summary Index: make summarizing fast
– Search: the actual search
– Winnow: choose the most relevant results
– Present: generate some pretty output

Search Engine Overview

Example: Indexing (2)

public void map() {
  String line = value.toString();
  StringTokenizer itr = new StringTokenizer(line);
  if (itr.countTokens() >= N) {
    while (itr.hasMoreTokens()) {
      word = itr.nextToken() + “|” + key.getFileName();
      output.collect(word, 1);
    }
  }
}

Input: a line of text, e.g. “mistakes were made” from myfile.txt
Output:
mistakes|myfile.txt
were|myfile.txt
made|myfile.txt

Example: Indexing (3)

public void reduce() {
  int sum = 0;
  while (values.hasNext()) {
    sum += values.next().get();
  }
  output.collect(key, sum);
}

Input: a word|file key and its list of occurrence counts (e.g. {1, 1, ..., 1})
Output:
mistakes|myfile.txt 10
were|myfile.txt 45
made|myfile.txt 2
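The two indexing phases above can be chained in a single-process sketch, again with an illustrative driver standing in for Hadoop. The `N`-token threshold from the map slide is omitted here, and the class and method names are assumptions, not lecture code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.StringTokenizer;

// Sketch of the indexing job: map tags each word with its source
// file ("word|file"), and reduce counts occurrences per tag.
public class IndexSim {
    // map: one line of text plus its file name -> (word|file, 1) pairs
    static List<Map.Entry<String, Integer>> map(String fileName, String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
            out.add(Map.entry(itr.nextToken() + "|" + fileName, 1));
        }
        return out;
    }

    // reduce: sum the 1s collected for one word|file key
    static int reduce(List<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    // Driver: map every line of every file, group by key, reduce.
    public static Map<String, Integer> index(Map<String, List<String>> files) {
        Map<String, List<Integer>> grouped = new HashMap<>();
        files.forEach((name, lines) -> {
            for (String line : lines) {
                for (Map.Entry<String, Integer> kv : map(name, line)) {
                    grouped.computeIfAbsent(kv.getKey(), k -> new ArrayList<>())
                           .add(kv.getValue());
                }
            }
        });
        Map<String, Integer> counts = new HashMap<>();
        grouped.forEach((key, vals) -> counts.put(key, reduce(vals)));
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(index(Map.of("myfile.txt",
                List.of("mistakes were made", "mistakes happen"))));
    }
}
```

Note that this is structurally identical to word count; only the key changes, from `word` to `word|file`, which is what turns a counter into a per-file index.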

Have fun! (Read the documentation!)