Massive Data Processing / Cloud Computing
Lecture 3 – MapReduce Basics
闫宏飞
School of Electronics Engineering and Computer Science, Peking University
7/12/2011

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States license; see the license for details.
Slides from Jimmy Lin, University of Maryland. Course development: SEWMGroup.

How do we scale up?

Source: Wikipedia (IBM Roadrunner)

Divide and Conquer
[Figure: a large “Work” input is partitioned into units w1, w2, w3; “workers” process the units in parallel into partial results r1, r2, r3, which are combined into the final “Result”.]

Parallelization Challenges
– How do we assign work units to workers?
– What if we have more work units than workers?
– What if workers need to share partial results?
– How do we aggregate partial results?
– How do we know all the workers have finished?
– What if workers die?
What is the common theme of all of these problems?

Common Theme?
Parallelization problems arise from:
– Communication between workers (e.g., to exchange state)
– Access to shared resources (e.g., data)
Thus, we need a synchronization mechanism

Source: Ricardo Guimarães Herrmann

Managing Multiple Workers
Difficult because:
– We don’t know the order in which workers run
– We don’t know when workers interrupt each other
– We don’t know the order in which workers access shared data
Thus, we need:
– Semaphores (lock, unlock)
– Condition variables (wait, notify, broadcast)
– Barriers
Still, lots of problems:
– Deadlock, livelock, race conditions...
– Dining philosophers, sleeping barbers, cigarette smokers...
Moral of the story: be careful!
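
To make the hazards concrete, here is a minimal Python sketch (my illustration, not from the slides) of two of the primitives above: a lock serializing access to a shared counter, and a barrier that holds every worker until all have arrived. The worker count is arbitrary.

    import threading

    N_WORKERS = 4                  # arbitrary worker count
    counter = 0                    # shared resource
    lock = threading.Lock()
    barrier = threading.Barrier(N_WORKERS)

    def worker(wid):
        global counter
        with lock:                 # serialize access to shared state;
            counter += 1           # without the lock this is a race condition
        barrier.wait()             # nobody proceeds until all workers arrive
        print("worker %d past barrier, counter=%d" % (wid, counter))

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()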

Current Tools
Programming models:
– Shared memory (pthreads)
– Message passing (MPI)
Design patterns:
– Master-slaves
– Producer-consumer flows
– Shared work queues
[Figure: processes P1–P5 communicating by message passing vs. through a shared memory; schematic master-slaves, producer-consumer, and shared-work-queue arrangements.]

Where the rubber meets the road
Concurrency is difficult to reason about
Concurrency is even more difficult to reason about:
– At the scale of datacenters (and even across datacenters)
– In the presence of failures
– In terms of multiple interacting services
Not to mention debugging…
The reality:
– Lots of one-off solutions, custom code
– Write your own dedicated library, then program with it
– Burden on the programmer to explicitly manage everything

Source: Wikipedia (Flat Tire)

Source: MIT Open Courseware

Source: Harper’s (Feb, 2008)

What’s the point?
It’s all about the right level of abstraction:
– The von Neumann architecture has served us well, but is no longer appropriate for the multi-core/cluster environment
Hide system-level details from the developers:
– No more race conditions, lock contention, etc.
Separating the what from the how:
– Developer specifies the computation that needs to be performed
– Execution framework (“runtime”) handles actual execution
The datacenter is the computer!

“Big Ideas”
Scale “out”, not “up”
– Limits of SMP and large shared-memory machines
Move processing to the data
– Clusters have limited bandwidth
Process data sequentially, avoid random access
– Seeks are expensive, disk throughput is reasonable
Seamless scalability
– From the mythical man-month to the tradable machine-hour
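
A quick back-of-the-envelope calculation (my numbers, chosen only for illustration) shows why sequential access matters: at a sustained 100 MB/s, scanning 1 TB takes about 10,000 seconds, roughly 3 hours. Fetching the same terabyte as random 4 KB reads at 10 ms per seek would need roughly 244 million seeks, or about 28 days. Seeks are expensive; throughput is reasonable.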

MapReduce

Roots in Functional Programming
[Figure: Map applies a function f to every element of a list independently; Fold aggregates the resulting list with a function g and an accumulator.]
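
As a small stand-in for the diagram (Python chosen as illustration; f and g are arbitrary), map applies f to every element independently, while fold (reduce) aggregates the mapped values with g:

    from functools import reduce

    xs = [1, 2, 3, 4, 5]
    f = lambda x: x * x             # per-element transformation (Map)
    g = lambda acc, y: acc + y      # pairwise aggregation (Fold)

    mapped = list(map(f, xs))       # [1, 4, 9, 16, 25]
    folded = reduce(g, mapped, 0)   # 55

Because each application of f is independent, the map step parallelizes trivially; this is exactly the observation MapReduce builds on.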

Typical Large-Data Problem
– Iterate over a large number of records
– Extract something of interest from each (Map)
– Shuffle and sort intermediate results
– Aggregate intermediate results (Reduce)
– Generate final output
Key idea: provide a functional abstraction for these two operations. (Dean and Ghemawat, OSDI 2004)

MapReduce
Programmers specify two functions:
– map (k, v) → <k’, v’>*
– reduce (k’, v’) → <k’’, v’’>*
All values with the same key are sent to the same reducer
The execution framework handles everything else…

[Figure: the canonical MapReduce dataflow. Mappers consume input pairs (k1,v1) … (k6,v6) and emit intermediate pairs, e.g. (a,1)(b,2), (c,3)(c,6), (a,5)(c,2)(b,7)(c,8). “Shuffle and Sort: aggregate values by keys” groups these into a→[1,5], b→[2,7], c→[2,3,6,8]. Reducers then emit the final pairs (r1,s1), (r2,s2), (r3,s3).]

MapReduce
Programmers specify two functions:
– map (k, v) → <k’, v’>*
– reduce (k’, v’) → <k’’, v’’>*
All values with the same key are sent to the same reducer
The execution framework handles everything else…
What’s “everything else”?

MapReduce “Runtime”
Handles scheduling:
– Assigns workers to map and reduce tasks
Handles “data distribution”:
– Moves processes to data
Handles synchronization:
– Gathers, sorts, and shuffles intermediate data
Handles errors and faults:
– Detects worker failures and restarts
Everything happens on top of a distributed FS (later)

MapReduce
Programmers specify two functions:
– map (k, v) → <k’, v’>*
– reduce (k’, v’) → <k’’, v’’>*
All values with the same key are reduced together
The execution framework handles everything else…
Not quite… usually, programmers also specify:
– partition (k’, number of partitions) → partition for k’
  Often a simple hash of the key, e.g., hash(k’) mod n
  Divides up key space for parallel reduce operations
– combine (k’, v’) → <k’, v’>*
  Mini-reducers that run in memory after the map phase
  Used as an optimization to reduce network traffic
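
A toy Python sketch of these two hooks (an illustration under the defaults just stated, not Hadoop’s API): the partitioner hashes the key modulo the number of reducers, and the combiner pre-aggregates one mapper’s local word-count output before anything crosses the network.

    from collections import defaultdict

    def partition(key, num_partitions):
        # default partitioner: simple hash of the key
        # (Python's built-in hash is salted per process; a real
        # framework uses a stable hash so placement is deterministic)
        return hash(key) % num_partitions

    def combine(mapper_output):
        # mini-reducer: sum counts per key within one mapper's output
        combined = defaultdict(int)
        for key, value in mapper_output:
            combined[key] += value
        return sorted(combined.items())

    local = [("a", 1), ("c", 3), ("c", 6)]   # one mapper's raw output
    print(combine(local))                    # [('a', 1), ('c', 9)]
    print(partition("c", 3))                 # reducer index for key 'c'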

[Figure: the same dataflow extended with the two optional functions. Each mapper’s output first passes through a combiner, which pre-aggregates locally (e.g., (c,3)(c,6) becomes (c,9)), and then through a partitioner, which assigns each key to a reducer before the shuffle and sort.]

Two more details…
– Barrier between map and reduce phases
  But we can begin copying intermediate data earlier
– Keys arrive at each reducer in sorted order
  No enforced ordering across reducers

“Hello World”: Word Count

    Map(String docid, String text):
        for each word w in text:
            Emit(w, 1);

    Reduce(String term, Iterator<Int> values):
        int sum = 0;
        for each v in values:
            sum += v;
        Emit(term, sum);
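
Since the slide shows pseudocode, here is a runnable single-process Python simulation of the same job (an illustration, not Hadoop itself): map over the documents, shuffle and sort by key, then reduce each key’s values. The toy corpus is made up.

    from itertools import groupby
    from operator import itemgetter

    def map_fn(docid, text):
        for word in text.split():
            yield (word, 1)

    def reduce_fn(term, values):
        yield (term, sum(values))

    docs = {"d1": "a b c a", "d2": "b c c"}   # toy corpus

    # Map phase
    pairs = [kv for docid, text in docs.items() for kv in map_fn(docid, text)]
    # Shuffle and sort: aggregate values by key
    pairs.sort(key=itemgetter(0))
    # Reduce phase
    for term, group in groupby(pairs, key=itemgetter(0)):
        for out in reduce_fn(term, (v for _, v in group)):
            print(out)                        # ('a', 2) ('b', 2) ('c', 3)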

MapReduce can refer to…
– The programming model
– The execution framework (aka “runtime”)
– The specific implementation
Usage is usually clear from context!

MapReduce Implementations
Google has a proprietary implementation in C++
– Bindings in Java, Python
Hadoop is an open-source implementation in Java
– An Apache project
– Development led largely by Yahoo!, used in production
– Rapidly expanding software ecosystem
Lots of custom research implementations
– For GPUs, cell processors, etc.

[Figure: MapReduce execution overview, adapted from (Dean and Ghemawat, OSDI 2004). (1) The user program submits the job to the master. (2) The master schedules map tasks and reduce tasks onto workers. (3) Map workers read the input files as splits (split 0 … split 4) and (4) write intermediate files to local disk. (5) Reduce workers remote-read the intermediate data and (6) write the output files (output file 0, output file 1).]

How do we get data to the workers?
[Figure: compute nodes fetching data over the network from centralized NAS/SAN storage.]
What’s the problem here?

Distributed File System
Don’t move data to workers… move workers to the data!
– Store data on the local disks of nodes in the cluster
– Start up the workers on the node that has the data local
Why?
– Not enough RAM to hold all the data in memory
– Disk access is slow, but disk throughput is reasonable
A distributed file system is the answer:
– GFS (Google File System) for Google’s MapReduce
– HDFS (Hadoop Distributed File System) for Hadoop

GFS: Assumptions
Commodity hardware over “exotic” hardware
– Scale “out”, not “up”
High component failure rates
– Inexpensive commodity components fail all the time
“Modest” number of huge files
– Multi-gigabyte files are common, if not encouraged
Files are write-once, mostly appended to
– Perhaps concurrently
Large streaming reads over random access
– High sustained throughput over low latency
GFS slides adapted from material by (Ghemawat et al., SOSP 2003)

GFS: Design Decisions
Files stored as chunks
– Fixed size (64MB)
Reliability through replication
– Each chunk replicated across 3+ chunkservers
Single master to coordinate access, keep metadata
– Simple centralized management
No data caching
– Little benefit due to large datasets, streaming reads
Simplify the API
– Push some of the issues onto the client (e.g., data layout)
HDFS = GFS clone (same basic ideas)
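
A toy sketch (my illustration, not actual GFS code) of the first two decisions: split a file into fixed-size 64 MB chunks and have the master assign each chunk to three chunkservers. Real placement also weighs load, disk space, and rack locality; round-robin here is just for demonstration.

    import itertools

    CHUNK_SIZE = 64 * 1024 * 1024          # fixed 64 MB chunks
    REPLICATION = 3                        # 3+ replicas per chunk

    def place_chunks(file_size, chunkservers):
        num_chunks = -(-file_size // CHUNK_SIZE)   # ceiling division
        ring = itertools.cycle(chunkservers)       # naive round-robin
        return {chunk_id: [next(ring) for _ in range(REPLICATION)]
                for chunk_id in range(num_chunks)}

    servers = ["cs%d" % i for i in range(5)]         # hypothetical cluster
    print(place_chunks(200 * 1024 * 1024, servers))  # 4 chunks x 3 replicas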

From GFS to HDFS
Terminology differences:
– GFS master = Hadoop namenode
– GFS chunkservers = Hadoop datanodes
Functional differences:
– No file appends in HDFS (planned feature)
– HDFS performance is (likely) slower
For the most part, we’ll use the Hadoop terminology…

HDFS Architecture
[Figure: HDFS architecture, adapted from (Ghemawat et al., SOSP 2003). An application talks to the HDFS client, which sends (file name, block id) to the HDFS namenode and receives (block id, block location) in return. The namenode holds the file namespace (e.g., /foo/bar → block 3df2) and exchanges instructions and state reports with the datanodes. The client then sends (block id, byte range) to the appropriate HDFS datanode, which returns block data from its local Linux file system.]
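
To make the message flow in the figure concrete, here is a minimal Python simulation (assumed names and toy classes, not the real HDFS client API). The client resolves (file name, block id) to a location via the namenode, then fetches a (block id, byte range) directly from the datanode; no file data ever passes through the namenode.

    class Namenode:
        def __init__(self):
            # file namespace: (file name, block id) -> block location
            self.block_map = {("/foo/bar", "3df2"): "datanode-7"}

        def lookup(self, path, block_id):
            return self.block_map[(path, block_id)]

    class Datanode:
        def __init__(self, blocks):
            self.blocks = blocks           # block id -> bytes on local disk

        def read(self, block_id, start, length):
            return self.blocks[block_id][start:start + length]

    namenode = Namenode()
    datanodes = {"datanode-7": Datanode({"3df2": b"hello, hdfs block data"})}

    loc = namenode.lookup("/foo/bar", "3df2")    # metadata from the namenode
    data = datanodes[loc].read("3df2", 0, 5)     # block data from the datanode
    print(loc, data)                             # datanode-7 b'hello'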

Namenode Responsibilities
Managing the file system namespace:
– Holds file/directory structure, metadata, file-to-block mapping, access permissions, etc.
Coordinating file operations:
– Directs clients to datanodes for reads and writes
– No data is moved through the namenode
Maintaining overall health:
– Periodic communication with the datanodes
– Block re-replication and rebalancing
– Garbage collection

Putting everything together… datanode daemon Linux file system … tasktracker slave node datanode daemon Linux file system … tasktracker slave node datanode daemon Linux file system … tasktracker slave node namenode namenode daemon job submission node jobtracker 38

References
[Lin] Jimmy Lin and Chris Dyer, Data-Intensive Text Processing with MapReduce, Ch. 2: MapReduce Basics.
[Tom] Tom White, Hadoop: The Definitive Guide, Ch. 6: How MapReduce Works.
S. Ghemawat, H. Gobioff, and S.-T. Leung, “The Google File System,” in SOSP, Bolton Landing, NY, USA: ACM Press, 2003.
J. Dean and S. Ghemawat, “MapReduce: Simplified Data Processing on Large Clusters,” in OSDI, 2004.