Discretized Streams: An Efficient and Fault-Tolerant Model for Stream Processing on Large Clusters
Matei Zaharia, Tathagata Das, Haoyuan Li, Scott Shenker, Ion Stoica
UC Berkeley

Motivation
Many important applications need to process large data streams arriving in real time:
– User activity statistics (e.g. Facebook’s Puma)
– Spam detection
– Traffic estimation
– Network intrusion detection
Our target: large-scale apps that must run on tens to hundreds of nodes with O(1 sec) latency

Challenge
To run at large scale, the system has to be both:
– Fault-tolerant: recover quickly from failures and stragglers
– Cost-efficient: do not require significant hardware beyond that needed for basic processing
Existing streaming systems don’t have both properties.

Traditional Streaming Systems
“Record-at-a-time” processing model:
– Each node has mutable state
– For each record, update state & send new records
[Figure: input records are pushed through a graph of nodes, each holding mutable state]

Traditional Streaming Systems
Fault tolerance via replication or upstream backup:
[Figure: left, replication with synchronized hot replicas (node 1’, 2’, 3’); right, upstream backup with a single standby node]
– Replication: fast recovery, but 2x hardware cost
– Upstream backup: only need 1 standby, but slow to recover
Neither approach tolerates stragglers.

Observation
Batch processing models for clusters (e.g. MapReduce) provide fault tolerance efficiently:
– Divide the job into deterministic tasks
– Rerun failed/slow tasks in parallel on other nodes
Idea: run a streaming computation as a series of very small, deterministic batches
– The same recovery schemes apply, at a much smaller timescale
– Work to make the batch size as small as possible
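As a rough illustration of this idea (a minimal single-machine sketch, not the paper’s implementation), the following self-contained Scala program treats the stream as a series of small, deterministic per-interval batches and carries state forward as an immutable map; collectInterval is a hypothetical stand-in for a real input source.

    object MicroBatchSketch {
      // Hypothetical stand-in for the records that arrived during one interval.
      def collectInterval(t: Int): Seq[String] = Seq("a", "b", "a")

      def main(args: Array[String]): Unit = {
        var counts = Map.empty[String, Int]          // state carried between intervals
        for (t <- 1 to 3) {                          // three 1-second intervals
          val batch = collectInterval(t)             // a small, deterministic batch
          counts = batch.foldLeft(counts) { (state, key) =>
            state.updated(key, state.getOrElse(key, 0) + 1)
          }
          println(s"t=$t: $counts")
        }
      }
    }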

Discretized Stream Processing
[Figure: at t = 1, 2, ..., each input stream is pulled in and stored reliably as an immutable dataset; a batch operation then produces new immutable datasets (output or state), stored in memory without replication]
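To make the figure concrete, here is a minimal sketch (hypothetical Dataset/DStream type aliases, not Spark’s classes): each stream is just a sequence of immutable per-interval datasets, and a deterministic batch operation maps the interval-t datasets of its inputs to a new immutable dataset.

    object DStreamModelSketch {
      type Dataset = Vector[String]        // stand-in for an immutable, partitioned dataset
      type DStream = Vector[Dataset]       // one dataset per interval: t = 1, 2, ...

      // A deterministic batch operation over the datasets of one interval.
      def batchOp(s1: Dataset, s2: Dataset): Dataset = (s1 ++ s2).sorted

      def main(args: Array[String]): Unit = {
        val stream1: DStream = Vector(Vector("c", "a"), Vector("e"))
        val stream2: DStream = Vector(Vector("b"), Vector("f", "d"))
        val output:  DStream = stream1.zip(stream2).map { case (s1, s2) => batchOp(s1, s2) }
        println(output)  // Vector(Vector(a, b, c), Vector(d, e, f))
      }
    }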

Parallel Recovery
Checkpoint state datasets periodically
If a node fails/straggles, recompute its dataset partitions in parallel on other nodes
[Figure: a map step from an input dataset to an output dataset; lost partitions are recomputed in parallel]
Faster recovery than upstream backup, without the cost of replication.
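A minimal sketch of why deterministic, partitioned tasks make recovery parallel (hypothetical names, single JVM, plain Scala Futures standing in for a cluster scheduler): only the lost partitions are recomputed, and independent partitions can be recomputed concurrently.

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    object ParallelRecoverySketch {
      def computeTask(partition: Seq[Int]): Int = partition.sum   // deterministic task

      def main(args: Array[String]): Unit = {
        val input  = Vector(Seq(1, 2), Seq(3, 4), Seq(5, 6))
        val cached = Array(Some(computeTask(input(0))), None, None) // two partitions lost

        // Recompute only the missing partitions, concurrently.
        val recovered = Future.sequence(cached.toVector.zipWithIndex.map {
          case (Some(v), _) => Future.successful(v)
          case (None, i)    => Future(computeTask(input(i)))
        })
        println(Await.result(recovered, 10.seconds))               // Vector(3, 7, 11)
      }
    }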

How Fast Can It Go?
Prototype built on the Spark in-memory computing engine can process 2 GB/s (20M records/s) of data on 50 nodes at sub-second latency
[Graph: max throughput within a given latency bound (1 or 2 s)]

How Fast Can It Go?
Recovers from failures within 1 second
[Graph: sliding WordCount on 10 nodes with a 30 s checkpoint interval]

Programming Model
A discretized stream (D-stream) is a sequence of immutable, partitioned datasets
– Specifically, resilient distributed datasets (RDDs), the storage abstraction in Spark
Deterministic transformation operators produce new streams

API
LINQ-like language-integrated API in Scala
New “stateful” operators for windowing

    pageViews = readStream("...", "1s")
    ones = pageViews.map(ev => (ev.url, 1))
    counts = ones.runningReduce(_ + _)        // _ + _ is a Scala function literal

[Figure: at t = 1, 2, ..., pageViews, ones, and counts are each a series of RDDs (one per interval, split into partitions), connected by map and reduce steps]

    sliding = ones.reduceByWindow("5s", _ + _, _ - _)   // incremental window version with “add” and “subtract” functions
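For readers who want to run something close to this today, here is a hedged approximation using the later open-source Spark Streaming API rather than the operator names on the slide (StreamingContext, socketTextStream, updateStateByKey and reduceByKeyAndWindow play the roles of readStream, runningReduce and reduceByWindow; the socket source and checkpoint path are placeholder assumptions):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object PageViewCounts {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("PageViewCounts").setMaster("local[2]")
        val ssc  = new StreamingContext(conf, Seconds(1))   // 1 s batch interval
        ssc.checkpoint("/tmp/dstream-checkpoints")          // required by stateful/window ops

        // Assumed source: one URL per line arriving on a local socket.
        val pageViews = ssc.socketTextStream("localhost", 9999)
        val ones      = pageViews.map(url => (url, 1))

        // Running count per URL (stateful operator, analogous to runningReduce).
        val counts = ones.updateStateByKey[Int]((values, state) =>
          Some(values.sum + state.getOrElse(0)))

        // 5 s sliding window updated every 1 s, with “add” and “subtract” functions.
        val sliding = ones.reduceByKeyAndWindow(_ + _, _ - _, Seconds(5), Seconds(1))

        counts.print()
        sliding.print()
        ssc.start()
        ssc.awaitTermination()
      }
    }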

Other Benefits of Discretized Streams
Consistency: each record is processed atomically
Unification with batch processing:
– Combining streams with historical data:
    pageViews.join(historicCounts).map(...)
– Interactive ad-hoc queries on stream state:
    pageViews.slice("21:00", "21:05").topK(10)
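A short hedged sketch of the unification point, again using the later open-source Spark Streaming API (the historicCounts contents and the socket source are assumptions): because each interval of the stream is an ordinary RDD, it can be joined directly with a static, batch-loaded RDD.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object UnifySketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("UnifySketch").setMaster("local[2]")
        val ssc  = new StreamingContext(conf, Seconds(1))

        // Historical data is just an ordinary (batch) RDD...
        val historicCounts = ssc.sparkContext.parallelize(Seq(("/home", 100L), ("/about", 7L)))

        // ...and each interval of the stream is an RDD too, so the two can be joined.
        val pageViews = ssc.socketTextStream("localhost", 9999).map(url => (url, 1L))
        val joined    = pageViews.transform(rdd => rdd.join(historicCounts))

        joined.print()
        ssc.start()
        ssc.awaitTermination()
      }
    }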

Conclusion
D-Streams forgo traditional streaming wisdom by batching data in small timesteps
– Enable an efficient, new parallel recovery scheme
– Let users seamlessly intermix streaming, batch and interactive queries

Related Work
Bulk incremental processing (CBP, Comet)
– Periodic (~5 min) batch jobs on Hadoop/Dryad
– On-disk, replicated FS for storage instead of RDDs
Hadoop Online
– Does not recover stateful ops or allow multi-stage jobs
Streaming databases
– Record-at-a-time processing, generally replication for FT
Parallel recovery (MapReduce, GFS, RAMCloud, etc.)
– Hwang et al. [ICDE’07] have a parallel recovery protocol for streams, but only allow 1 failure & do not handle stragglers

Timing Considerations
D-streams group input into intervals based on when records arrive at the system
For apps that need to group by an “external” time and tolerate network delays, we support:
– Slack time: delay starting a batch for a short fixed time to give records a chance to arrive
– Application-level correction: e.g. give a result for time t at time t+1, then use later records to update it incrementally at time t+5
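A minimal sketch (hypothetical data) of application-level correction: report a preliminary count for external-time interval t shortly after it closes, then fold in late-arriving records a few intervals later.

    object LateDataSketch {
      // Hypothetical records tagged with both an external event time and the
      // system interval in which they actually arrived.
      case class Record(eventTime: Int, arrivalTime: Int)

      def main(args: Array[String]): Unit = {
        val records = Seq(Record(1, 1), Record(1, 1), Record(1, 4)) // one late arrival
        def countFor(t: Int, asOf: Int): Int =
          records.count(r => r.eventTime == t && r.arrivalTime <= asOf)

        println(s"result for t=1 reported at t+1: ${countFor(1, 2)}")   // preliminary: 2
        println(s"result for t=1 corrected at t+5: ${countFor(1, 6)}")  // corrected: 3
      }
    }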

D-Streams vs. Traditional Streaming

Concern                | Discretized Streams                              | Record-at-a-time Systems
Latency                | 0.5–2 s                                          | 1–100 ms
Consistency            | Yes, batch-level                                 | Not in msg. passing systems; some DBs use waiting
Failures               | Parallel recovery                                | Replication or upstream backup
Stragglers             | Speculation                                      | Typically not handled
Unification with batch | Ad-hoc queries from Spark shell, join with RDDs  | Not in msg. passing systems; in some DBs