Google Map Reduce OSDI 2004 slides

Google Map Reduce OSDI 2004 slides
CS372 Spring 2009, UT Austin

With a heavy debt to: Google Map Reduce OSDI 2004 slides, code.google.com, hadoop.apache.org

You are an engineer at Hare-brained-scheme.com. Your boss comes to your office and says: "We're going to be hog-nasty rich! We just need a program to search for strings in text files..."

Input: <search_term>, <files>
Output: list of files containing <search_term>

One solution

public class StringFinder {
    public static void main(String[] args) {
        List<String> results = new ArrayList<>();
        for (File f : getInputFiles()) {
            if (f.contains(searchTerm))
                results.add(f.getFileName());
        }
        System.out.println("Files: " + results);
    }
}

"But, uh, marketing says we have to search a lot of files. More than will fit on one disk…"

Another solution

Throw hardware at the problem! Use your StringFinder class on one machine… but attach lots of disks!

"But, uh, well, marketing says it's too slow… and besides, we need it to work on the web…"

Third Time's a charm

[Diagram: a Web Server dispatching a search query to many StringFinder PCs, each holding its own Indexed data]

1. How do we distribute the searchable files on our machines?
2. What if our webserver goes down?
3. What if a StringFinder machine dies? How would you know it was dead?
4. What if marketing comes and says, "well, we also want to show pictures of the earth from space too! Ooh… and the moon too!"

StringFinder was the easy part! You really need general infrastructure:
- Likely to have many different tasks
- Want to use hundreds or thousands of PCs
- Continue to function if something breaks
- Must be easy to program…

MapReduce addresses this problem!

MapReduce

Programming model + infrastructure:
- Write programs that run on lots of machines
- Automatic parallelization and distribution
- Fault-tolerance
- Scheduling, status and monitoring

Cool. What's the catch?

MapReduce Programming Model

Input & output: sets of <key, value> pairs. The programmer writes 2 functions:

map(in_key, in_value) -> list(out_key, intermediate_value)
- Processes input <k,v> pairs
- Produces intermediate pairs

reduce(out_key, list(intermediate_value)) -> list(out_value)
- Combines all intermediate values for a given key
- Produces a merged set of outputs (may also be <k,v> pairs)
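As a sketch, the two signatures above can be written as Java generic interfaces (purely illustrative; Hadoop's real Mapper/Reducer API looks different):

```java
import java.util.*;

// Illustrative generic interfaces for the two user-supplied functions.
// K1/V1: input pair types; K2/V2: intermediate types; V3: output type.
interface Mapper<K1, V1, K2, V2> {
    // One input pair -> a list of intermediate pairs
    List<Map.Entry<K2, V2>> map(K1 inKey, V1 inValue);
}

interface Reducer<K2, V2, V3> {
    // One intermediate key plus all its values -> a list of outputs
    List<V3> reduce(K2 outKey, List<V2> intermediateValues);
}
```

Any computation that can be phrased as these two functions gets parallelization, distribution, and fault-tolerance from the framework for free.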

Example: Counting Words…

map(String input_key, String input_value):
    // input_key: document name
    // input_value: document contents
    for each word w in input_value:
        EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):
    // output_key: a word
    // output_values: a list of counts
    int result = 0;
    for each v in intermediate_values:
        result += ParseInt(v);
    Emit(AsString(result));

MapReduce handles all the other details!
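The same word-count job can be sketched as a plain Java program that runs map, the framework's group-by-key step, and reduce on one machine (class and method names here are illustrative, not Hadoop's API; a real run executes many map and reduce tasks in parallel):

```java
import java.util.*;

// Single-machine sketch of the word-count MapReduce job above.
public class WordCount {
    // map: emit <word, 1> for every word in the document contents
    static List<Map.Entry<String, Integer>> map(String docContents) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String w : docContents.split("\\s+")) {
            if (!w.isEmpty()) pairs.add(Map.entry(w, 1));
        }
        return pairs;
    }

    // shuffle: group intermediate values by key
    // (the MapReduce framework does this between the map and reduce phases)
    static Map<String, List<Integer>> shuffle(List<Map.Entry<String, Integer>> pairs) {
        Map<String, List<Integer>> groups = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            groups.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }
        return groups;
    }

    // reduce: sum the counts for each word
    static Map<String, Integer> reduce(Map<String, List<Integer>> groups) {
        Map<String, Integer> result = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : groups.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            result.put(e.getKey(), sum);
        }
        return result;
    }
}
```

Running reduce(shuffle(map("to be or not to be"))) yields each distinct word mapped to its count.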

How does parallelization work?

[Diagram: INPUT FILE(s) → MAP tasks → REDUCE tasks]

All you have to do is… Make all your programs into MapReduce algorithms… Can we really make any program this way? MapReduce is Turing complete!

Your Project: A Search Engine

$> hadoop -j search.jar "out damned spot"
Search Complete
10 relevant files found
Terms: out, damned, spot
macbeth (MAX_RELEVANCE)
    ...Out, damned spot!...
    ...Like Valor's minion carved out his passage...
    ...What hands are here? Ha, they pluck out mine eyes!...
    ...MACBETH. Hang out our banners on the outward walls;...
    ...Lady Macbeth is carried out....
    …

Search Engine Components

Use Hadoop (open source MapReduce). 5 pairs of Map/Reduce classes:
- Search Index: make the search fast
- Summary Index: make summarizing fast
- Search: the actual search
- Winnow: choose the most relevant results
- Present: generate some pretty output

Indexing

Search engines work like the index in a textbook: don't read the whole book looking for "Wallabees", use the index!

N-gram indexing: an n-gram is a group of n words.
- "wallabee": 1-gram
- "wallabees hop": 2-gram (bigram)
- "wallabees are cool": 3-gram
- …

Our search engine works by building N-gram indexes, and generates output using a Summary index.
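A minimal sketch of n-gram extraction (the helper name is hypothetical; in the project this logic would sit inside a map task):

```java
import java.util.*;

// Produce all n-word sequences (n-grams) from a line of text.
public class NGrams {
    static List<String> ngrams(String text, int n) {
        String[] words = text.trim().split("\\s+");
        List<String> out = new ArrayList<>();
        // Slide a window of n words across the line.
        for (int i = 0; i + n <= words.length; i++) {
            out.add(String.join(" ", Arrays.copyOfRange(words, i, i + n)));
        }
        return out;
    }
}
```

For example, the bigrams of "wallabees are cool" are "wallabees are" and "are cool".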

Search Engine Overview

Example: Indexing (2)

public void map() {
    String line = value.toString();
    StringTokenizer itr = new StringTokenizer(line);
    if (itr.countTokens() >= N) {
        while (itr.hasMoreTokens()) {
            String word = itr.nextToken() + "|" + key.getFileName();
            output.collect(word, 1);
        }
    }
}

Input: a line of text, e.g. "mistakes were made" from myfile.txt
Output: <mistakes|myfile.txt, 1> <were|myfile.txt, 1> <made|myfile.txt, 1>

Example: Indexing (3)

public void reduce() {
    int sum = 0;
    while (values.hasNext()) {
        sum += values.next().get();
    }
    output.collect(key, sum);
}

Input: a <term|filename> key with its list of occurrences (e.g. <"mistakes|myfile.txt", {1, 1, ..., 1}>)
Output: <mistakes|myfile.txt, 10>
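Putting the two indexing examples together, here is a single-machine sketch of the whole job (Hadoop would run the map over many lines in parallel and group the intermediate pairs by key itself; the class and method names are illustrative):

```java
import java.util.*;

// Single-machine sketch of the indexing map/reduce pair above.
public class IndexJob {
    // map: tag each word in a line with the file it came from, count 1
    static List<Map.Entry<String, Integer>> map(String fileName, String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
            pairs.add(Map.entry(itr.nextToken() + "|" + fileName, 1));
        }
        return pairs;
    }

    // reduce: sum the occurrence counts for each <word|filename> key
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return counts;
    }
}
```

Feeding "mistakes were made mistakes" from myfile.txt through map and reduce produces <mistakes|myfile.txt, 2>, <were|myfile.txt, 1>, <made|myfile.txt, 1>.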

Important Minutiae

- HDFS: where does my output go?
- Debugging / Eclipse plug-ins
- We encourage you to use CS machines…

Have fun! (Read the documentation!)