MapReduce Online Created by: Rajesh Gadipuuri Modified by: Ying Lu.

MapReduce Programming Model
Programmers think in a data-centric fashion
– Apply transformations to data sets
The MR framework handles the Hard Stuff:
– Fault tolerance
– Distributed execution, scheduling, concurrency
– Coordination
– Network communication
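To make the data-centric model concrete, here is a minimal word-count sketch written against the standard Hadoop mapreduce API (an illustration added to this transcript, not part of the original slides; class and field names are ours). The programmer writes only the two transformations; the framework supplies everything listed above.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

  // Map: emit (word, 1) for every word in an input line.
  public static class TokenizerMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      StringTokenizer tokens = new StringTokenizer(line.toString());
      while (tokens.hasMoreTokens()) {
        word.set(tokens.nextToken());
        context.write(word, ONE);   // a pure transformation on the data, nothing else
      }
    }
  }

  // Reduce: sum the counts for each word; grouping, shuffling, scheduling,
  // and fault tolerance are handled by the framework.
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable c : counts) {
        sum += c.get();
      }
      context.write(word, new IntWritable(sum));
    }
  }
}
```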

MapReduce System Model
Designed for batch-oriented computations over large data sets
– Each operator runs to completion before producing any output
– Operator output is written to stable storage
  Map output to local disk, reduce output to HDFS
Simple, elegant fault tolerance model: operator restart
– Critical for large clusters

Life Beyond Batch Processing
Can we apply the MR programming model outside batch processing?
Domains of interest:
– Interactive data analysis
  Enabled by high-level MR query languages, e.g. Hive, Pig, Jaql
– Batch processing is a poor fit
  Batch processing adds massive latency
  Requires saving and reloading analysis state

MapReduce Online
Pipeline data between operators as it is produced
Hadoop Online Prototype (HOP): Hadoop with pipelining support
– Preserves the Hadoop interfaces and APIs
– Challenge: retain the elegant fault tolerance model
Benefits:
– Reduces job response time
– Enables online aggregation and continuous queries

Functionalities Supported by HOP
– Online aggregation: reducers begin processing data as soon as it is produced by mappers, so they can generate and refine an approximation of their final answer during the course of execution.
– Continuous queries: MapReduce jobs can run continuously, accepting new data as it arrives and analyzing it immediately. This allows MapReduce to be used for applications such as event monitoring and stream processing.

Outline
1. Hadoop Background
2. HOP Architecture
3. Online Aggregation
4. Stream Processing
5. Conclusions

Hadoop Architecture
Hadoop MapReduce
– Single master node, many worker nodes
– Client submits a job to the master node
– Master splits each job into tasks (map/reduce) and assigns tasks to worker nodes
Hadoop Distributed File System (HDFS)
– Single name node, many data nodes
– Files stored as large, fixed-size (e.g. 64 MB) blocks
– HDFS typically holds map input and reduce output
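For concreteness, a minimal driver that configures a job and submits it to the master might look like the following sketch (our addition; it reuses the hypothetical WordCount classes from the earlier example, and the input/output paths come from the command line).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");      // job is submitted to the master for scheduling
    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(WordCount.TokenizerMapper.class);
    job.setCombinerClass(WordCount.SumReducer.class);    // optional local pre-aggregation on the map side
    job.setReducerClass(WordCount.SumReducer.class);
    job.setNumReduceTasks(5);                            // user-defined number of reduce tasks
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // map input read from HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // reduce output written to HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```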

Job Scheduling in Hadoop
One map task for each block of the input file
– Applies the user-defined map function to each record in the block
– Record = key/value pair
User-defined number of reduce tasks
– Each reduce task is assigned a set of record groups, i.e., intermediate records corresponding to a group of keys
– For each group, apply the user-defined reduce function to the record values in that group
Reduce tasks read from every map task
– Each read returns the record groups for that reduce task
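How intermediate keys are grouped onto reduce tasks is decided by a partitioner. As an illustration (the class name below is ours), Hadoop's default behavior is equivalent to this hash partitioner:

```java
import org.apache.hadoop.mapreduce.Partitioner;

// Equivalent to Hadoop's default HashPartitioner: every intermediate record
// with the same key lands in the same reduce task ("bucket").
public class DefaultStylePartitioner<K, V> extends Partitioner<K, V> {
  @Override
  public int getPartition(K key, V value, int numReduceTasks) {
    // Mask the sign bit so the result is non-negative, then take it modulo
    // the user-defined number of reduce tasks.
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}
```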

Map Task Execution
1. Map phase
– Reads the assigned input split from HDFS
  Split = file block by default
– Parses input into records (key/value pairs)
– Applies the map function to each record
  Returns zero or more new records
2. Commit phase
– Registers the final output with the worker node
  Stored in the local filesystem as a file
  Sorted first by bucket number, then by key
– Informs the master node of its completion

Reduce Task Execution
1. Shuffle phase
– Fetches input data from all map tasks
  The portion corresponding to the reduce task's bucket
2. Sort phase
– Merge-sorts *all* map outputs into a single run
3. Reduce phase
– Applies the user-defined reduce function to the merged run
  Arguments: key and corresponding list of values
– Writes output to a temp file in HDFS
  Atomic rename when finished
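To illustrate the sort phase, the following standalone sketch (our illustration, not Hadoop source code) merges several already-sorted runs of key/value pairs into one sorted run with a priority queue, which is essentially what the reduce task's merge-sort does over the fetched map outputs.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

// Merge several sorted runs of (key, value) pairs into one sorted run,
// mirroring the reduce task's merge-sort over fetched map outputs.
public class MergeSortedRuns {

  // A cursor over one sorted run: its current entry plus the rest of the run.
  private static final class Cursor {
    Map.Entry<String, Integer> current;
    final Iterator<Map.Entry<String, Integer>> rest;
    Cursor(Iterator<Map.Entry<String, Integer>> it) {
      this.rest = it;
      this.current = it.next();   // caller guarantees the run is non-empty
    }
  }

  public static List<Map.Entry<String, Integer>> merge(
      List<List<Map.Entry<String, Integer>>> runs) {
    // The heap always exposes the run whose current key is smallest.
    PriorityQueue<Cursor> heap =
        new PriorityQueue<>(Comparator.comparing((Cursor c) -> c.current.getKey()));
    for (List<Map.Entry<String, Integer>> run : runs) {
      if (!run.isEmpty()) {
        heap.add(new Cursor(run.iterator()));
      }
    }
    List<Map.Entry<String, Integer>> merged = new ArrayList<>();
    while (!heap.isEmpty()) {
      Cursor smallest = heap.poll();
      merged.add(smallest.current);
      if (smallest.rest.hasNext()) {   // advance that run and re-insert it
        smallest.current = smallest.rest.next();
        heap.add(smallest);
      }
    }
    return merged;                     // a single globally sorted run
  }

  public static void main(String[] args) {
    List<Map.Entry<String, Integer>> run1 = Arrays.asList(
        new SimpleEntry<>("apple", 2), new SimpleEntry<>("pear", 1));
    List<Map.Entry<String, Integer>> run2 = Arrays.asList(
        new SimpleEntry<>("banana", 3), new SimpleEntry<>("pear", 4));
    System.out.println(merge(Arrays.asList(run1, run2)));
  }
}
```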

Dataflow in Hadoop
Map tasks write their output to local disk
– Output is available only after the map task has completed
Reduce tasks write their output to HDFS
– Once a job is finished, the next job's map tasks can be scheduled and will read their input from HDFS
Therefore, fault tolerance is simple: simply re-run tasks on failure
– No consumers see partial operator output

Dataflow in Hadoop
[Diagram: the client submits a job and the master schedules map and reduce tasks; map tasks read input blocks from HDFS; reduce tasks pull map output from the mappers' local file systems via HTTP GET; the final answer is written to HDFS.]

Design Implications
1. Fault tolerance
– Tasks that fail are simply restarted
– No further steps are required, since nothing left the task
2. "Straggler" handling
– Job response time is affected by slow tasks
– Slow tasks get executed redundantly
  Take the result from the first to finish
  Assumes the slowdown is due to physical components (e.g., network, host machine)
Pipelining can support both!

Hadoop Online Prototype (HOP)

Hadoop Online Prototype
HOP supports pipelining within and between MapReduce jobs: push rather than pull
– Preserves the simple fault tolerance scheme
– Improved job completion time (better cluster utilization)
– Improved detection and handling of stragglers
MapReduce programming model unchanged
– Clients supply the same job parameters
Hadoop client interface backward compatible
– Extended to take a series of jobs

Pipelining Batch Size
Initial design: pipeline eagerly (for each row)
– Moves more sorting work to the reducer
– Prevents use of the combiner
– Map function can block on network I/O
Revised design: map writes into a buffer (see the sketch below)
– Spill thread: sort & combine the buffer, spill to disk
– Send thread: pipeline spill files => reducers
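A conceptual sketch of the revised design follows (hypothetical class and method names, not HOP's actual implementation): the map function only appends to a buffer, while separate spill and send threads sort batches and pipeline them to reducers, so the map function never blocks on network I/O.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PipelinedMapOutput {
  private final BlockingQueue<String> buffer = new LinkedBlockingQueue<>();         // map-side buffer
  private final BlockingQueue<List<String>> spillFiles = new LinkedBlockingQueue<>();
  private final int spillSize;

  public PipelinedMapOutput(int spillSize) {
    this.spillSize = spillSize;
  }

  // Called by the map function for each output record; cheap, no sorting, no network I/O.
  public void collect(String record) throws InterruptedException {
    buffer.put(record);
  }

  // Spill thread: gather a batch, sort it (a combiner could also run here), hand it on.
  public void runSpillThread() throws InterruptedException {
    while (true) {
      List<String> batch = new ArrayList<>(spillSize);
      for (int i = 0; i < spillSize; i++) {
        batch.add(buffer.take());
      }
      Collections.sort(batch);        // in the real design this batch is also written to local disk
      spillFiles.put(batch);
    }
  }

  // Send thread: pipeline each finished spill to the reduce tasks.
  public void runSendThread() throws InterruptedException {
    while (true) {
      List<String> spill = spillFiles.take();
      System.out.println("pipelining " + spill.size() + " records to reducers");  // stand-in for the transfer
    }
  }

  public static void main(String[] args) {
    PipelinedMapOutput out = new PipelinedMapOutput(1000);
    new Thread(() -> { try { out.runSpillThread(); } catch (InterruptedException e) { } }).start();
    new Thread(() -> { try { out.runSendThread(); } catch (InterruptedException e) { } }).start();
    // ... the map function would call out.collect(record) for each output record ...
  }
}
```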

Fault Tolerance
Fault tolerance in MR is simple and elegant
– Simply recompute on failure; no state recovery
Initial design for pipelining fault tolerance:
– Reduce treats in-progress map output as tentative: it can merge together spill files generated by the same uncommitted mapper, but not combine those spill files with the output of other map tasks
Revised design:
– Pipelining maps periodically checkpoint output
– Reducers can consume output <= checkpoint
– Bonus: improved speculative execution

Fault Tolerance in HOP
Traditional fault tolerance algorithms for pipelined dataflow systems are complex
HOP approach: write to disk and pipeline
– Producers write data into an in-memory buffer
– The in-memory buffer is periodically spilled to disk
– Spills are also sent to consumers
– Consumers treat pipelined data as "tentative" until the producer is known to have completed
– Fault tolerance via task restart; tentative output is discarded

Refinement: Checkpoints
Problem: treating output as tentative inhibits parallelism
Solution: producers periodically "checkpoint" with the Hadoop master node
– "Output split x corresponds to input offset y"
– Pipelined data <= split x is now non-tentative
– Also improves speculation for straggler tasks and reduces redundant work on task failure

Online Aggregation
Traditional MR: poor UI for data analysis
Pipelining means that data is available at consumers "early"
– Can be used to compute and refine an approximate answer
– Often sufficient for interactive data analysis, developing new MapReduce jobs, ...
Within a single job: periodically invoke the reduce function at each reduce task on the available data (see the sketch below)
Between jobs: periodically send a "snapshot" to consumer jobs
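As a rough illustration of intra-job online aggregation for a simple counting query (not HOP's API; the names and the scaling rule are ours), a reduce task can apply its aggregate to whatever map output has arrived so far and extrapolate by its progress:

```java
// Hypothetical illustration of online aggregation for a counting query:
// apply the reduce function to the map output received so far and scale
// by the fraction of the input that has been consumed upstream.
public class OnlineCountEstimate {
  private long partialSum = 0;     // sum over the records received so far
  private long bytesSeen = 0;      // input consumed by the upstream map tasks
  private final long totalBytes;   // total input size, known from the job's splits

  public OnlineCountEstimate(long totalBytes) {
    this.totalBytes = totalBytes;
  }

  // Called as pipelined map output arrives at the reduce task.
  public void accumulate(long count, long inputBytesCovered) {
    partialSum += count;
    bytesSeen += inputBytesCovered;
  }

  // Periodically invoked to produce/refine the approximate answer.
  public double estimate() {
    double progress = (double) bytesSeen / totalBytes;   // progress score in (0, 1]
    return progress == 0 ? 0 : partialSum / progress;    // extrapolate to the full input
  }

  public static void main(String[] args) {
    OnlineCountEstimate est = new OnlineCountEstimate(10_000_000L);
    est.accumulate(1200, 2_500_000L);                       // 25% of the input seen so far
    System.out.println("approximate total: " + est.estimate());  // ~4800
  }
}
```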

Online Aggregation in HOP
[Diagram: map tasks read input blocks from HDFS and pipeline their output to reduce tasks, which periodically write snapshot answers to HDFS.]

Inter-Job Online Aggregation
Like intra-job OA, but approximate answers are pipelined to the map tasks of the next job
– Requires co-scheduling a sequence of jobs
The consumer job computes an approximation
– Can be used to feed an arbitrary chain of consumer jobs with approximate answers

Inter-Job Online Aggregation
[Diagram: Job 1 reducers pipeline snapshots to Job 2 mappers; Job 2 writes its answer to HDFS.]

Example Scenario
Top-K most frequent words in a 5.5 GB Wikipedia corpus (implemented as 2 MR jobs)
60-node EC2 cluster

Fault Tolerance
For instance: j1-reducer & j2-map
– As new snapshots are produced by j1, j2 re-computes from scratch using the new snapshot
– Tasks that fail in j1 recover as discussed earlier
– If a task in j2 fails, the system simply restarts the failed task; the next snapshot received by the restarted reduce task in j2 will always have a higher progress score than the one received by the failed task
– To handle failures in j1, tasks in j2 cache the most recent snapshot received from j1 and replace it when a new one arrives
– If tasks from both jobs fail, a new task in j2 recovers the most recent snapshot from j1

Stream Processing
MapReduce is often applied to streams of data that arrive continuously
– Click streams, network traffic, web crawl data, ...
Traditional approach: buffer, then batch process
1. Poor latency
2. Analysis state must be reloaded for each batch
Instead, run MR jobs continuously and analyze data as it arrives (see the sketch below)
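The sketch below illustrates the idea in plain Java (a conceptual example, not HOP code): a continuously running aggregator keeps its analysis state in memory and emits a result per time window, instead of buffering input and reloading state for each batch job.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of "analyze data as it arrives": a long-running word
// counter that keeps state in memory and emits one result per time window.
public class ContinuousWordCount {
  private final long windowMillis;
  private long windowEnd;
  private final Map<String, Integer> counts = new HashMap<>();

  public ContinuousWordCount(long windowMillis, long startMillis) {
    this.windowMillis = windowMillis;
    this.windowEnd = startMillis + windowMillis;
  }

  // Called for every arriving record (e.g., a word extracted from a click-stream event).
  public void onRecord(String word, long timestampMillis) {
    while (timestampMillis >= windowEnd) {
      emitWindow();                  // publish the closed window's results
      counts.clear();                // state stays in memory; nothing is reloaded
      windowEnd += windowMillis;
    }
    counts.merge(word, 1, Integer::sum);
  }

  private void emitWindow() {
    System.out.println("window ending " + windowEnd + ": " + counts);
  }

  public static void main(String[] args) {
    ContinuousWordCount cwc = new ContinuousWordCount(60_000, 0);
    cwc.onRecord("error", 1_000);
    cwc.onRecord("error", 2_000);
    cwc.onRecord("ok", 61_000);      // first record of the next window; the previous window is emitted
  }
}
```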

Monitoring
The thrashing host was detected very rapidly, notably faster than the 5-second TaskTracker-JobTracker heartbeat cycle that is used to detect straggler tasks in stock Hadoop. We envision using these alerts for early detection of stragglers within a MapReduce job.

Performance: Blocking
[Chart omitted: 10 GB input file, 20 map tasks, 5 reduce tasks]

Performance: Pipelining
[Chart omitted: 462 seconds vs. 561 seconds]

Other HOP Benefits
Shorter job completion time via improved cluster utilization: reduce work starts early
– Important for high-priority jobs and interactive jobs
Adaptive load management
– Better detection and handling of "straggler" tasks

Conclusions
HOP extends the applicability of the MapReduce model to pipelining behaviors, while preserving the simple programming model and fault tolerance of a full-featured MapReduce framework.
Future topics:
– Scheduling
– Exploring MapReduce-style programming for even more interactive applications