Spark Streaming: Large-scale near-real-time stream processing

Presentation transcript:

Spark Streaming: Large-scale near-real-time stream processing
Tathagata Das (TD), along with Matei Zaharia, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion Stoica, and many others
UC Berkeley

What is Spark Streaming?
- Extends Spark for large-scale stream processing
- Scales to 100s of nodes and achieves second-scale latencies
- Efficient and fault-tolerant stateful stream processing
- Simple batch-like API for implementing complex algorithms

Motivation
- Many important applications must process large streams of live data and provide results in near-real-time: social network trends, website statistics, ad impressions, ...
- A distributed stream processing framework is required to:
  - Scale to large clusters (100s of machines)
  - Achieve low latency (a few seconds)

Integration with Batch Processing
- Many environments need to process the same data both as a live stream and in batch post-processing
- Existing frameworks cannot do both:
  - Either stream processing of 100s of MB/s with low latency
  - Or batch processing of TBs/PBs of data with high latency
- Extremely painful to maintain two different stacks:
  - Different programming models
  - Double the implementation effort
  - Double the number of bugs

Stateful Stream Processing
- Traditional streaming systems have a record-at-a-time processing model:
  - Each node has mutable state
  - For each record, update the state and send new records downstream
- State is lost if a node dies!
- Making stateful stream processing fault-tolerant is challenging
[Diagram: input records flow through nodes 1, 2, and 3, each holding mutable state]
Speaker notes: Traditional streaming systems have what we call a "record-at-a-time" processing model. Each node in the cluster that processes a stream holds mutable state. As records arrive one at a time, the mutable state is updated, and newly generated records are pushed to downstream nodes. Making this mutable state fault-tolerant is hard.

Existing Streaming Systems
- Storm
  - Replays a record if it is not processed by a node
  - Processes each record at least once
  - May update mutable state twice!
  - Mutable state can be lost due to failure!
- Trident: uses transactions to update state
  - Processes each record exactly once
  - Per-state transactions to an external database are slow

Spark Streaming

Discretized Stream Processing
- Run a streaming computation as a series of very small, deterministic batch jobs
- Chop up the live stream into batches of X seconds
- Spark treats each batch of data as an RDD and processes it using RDD operations
- Finally, the processed results of the RDD operations are returned in batches
[Diagram: live data stream → Spark Streaming → batches of X seconds → Spark → processed results]

Discretized Stream Processing
- Run a streaming computation as a series of very small, deterministic batch jobs
- Batch sizes as low as 1/2 second, latency of about 1 second
- Potential for combining batch processing and streaming processing in the same system (see the context-setup sketch below)
[Diagram: live data stream → Spark Streaming → batches of X seconds → Spark → processed results]
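To make the batch interval concrete, here is a minimal sketch of creating a streaming context with a 1-second batch size; the master string "local[2]", the app name, and the Spark 0.8-era package names are illustrative assumptions, not part of the original slides.

import org.apache.spark.streaming.{Seconds, StreamingContext}

// Group incoming data into 1-second batches; "local[2]" runs two local
// threads (one receiving, one processing) and is a placeholder value.
val ssc = new StreamingContext("local[2]", "StreamingSketch", Seconds(1))
// ...declare DStream transformations here, then start the job:
// ssc.start()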

Example – Get hashtags from Twitter
val tweets = ssc.twitterStream()
- DStream: a sequence of RDDs representing a stream of data
[Diagram: the Twitter Streaming API feeds the tweets DStream; each batch (@ t, t+1, t+2) is stored in memory as an RDD (immutable, distributed)]

Example – Get hashtags from Twitter
val tweets = ssc.twitterStream()
val hashTags = tweets.flatMap(status => getTags(status))
- Transformation: modify data in one DStream to create a new DStream
- New RDDs are created for every batch
[Diagram: flatMap turns each batch of the tweets DStream into a batch of the hashTags DStream ([#cat, #dog, ...])]

Example – Get hashtags from Twitter
val tweets = ssc.twitterStream()
val hashTags = tweets.flatMap(status => getTags(status))
hashTags.saveAsHadoopFiles("hdfs://...")
- Output operation: push data to external storage
- Every batch is saved to HDFS
[Diagram: a save operation runs on each batch of the hashTags DStream]

Example – Get hashtags from Twitter
val tweets = ssc.twitterStream()
val hashTags = tweets.flatMap(status => getTags(status))
hashTags.foreach(hashTagRDD => { ... })
- foreach: do whatever you want with the processed data
- Write to a database, update an analytics UI, do whatever you want
[Diagram: a foreach operation runs on each batch of the hashTags DStream]
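Putting the snippets from this example together, a complete program looks roughly like the sketch below. getTags is written out as a simple split-and-filter stand-in, and TwitterUtils.createStream is the packaged equivalent of the slides' ssc.twitterStream() in later Spark releases; both are assumptions rather than the exact tutorial code.

import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.twitter.TwitterUtils

object HashTagStream {
  def main(args: Array[String]) {
    // 2-second batches; master string and app name are placeholders
    val ssc = new StreamingContext("local[2]", "HashTagStream", Seconds(2))

    // Twitter source; requires OAuth keys in the configuration (see the tutorial slide)
    val tweets = TwitterUtils.createStream(ssc, None)

    // Stand-in for the slides' getTags(status): split on spaces, keep hashtags
    val hashTags = tweets.flatMap(status => status.getText.split(" ").filter(_.startsWith("#")))

    // Output operation: save every batch (saveAsTextFiles is the simpler
    // cousin of the saveAsHadoopFiles call used in the slides)
    hashTags.saveAsTextFiles("hdfs://...")

    ssc.start()             // start receiving and processing
    ssc.awaitTermination()  // block until the job is stopped
  }
}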

Java Example
Scala:
val tweets = ssc.twitterStream()
val hashTags = tweets.flatMap(status => getTags(status))
hashTags.saveAsHadoopFiles("hdfs://...")
Java (the Scala closure becomes a Function object):
JavaDStream<Status> tweets = ssc.twitterStream()
JavaDStream<String> hashTags = tweets.flatMap(new Function<...> { })

Window-based Transformations
val tweets = ssc.twitterStream()
val hashTags = tweets.flatMap(status => getTags(status))
val tagCounts = hashTags.window(Minutes(1), Seconds(5)).countByValue()
- Sliding window operation: the window length (here 1 minute) is how much of the DStream each window covers; the sliding interval (here 5 seconds) is how often a new window result is computed
[Diagram: a window of the given length slides over the DStream at the sliding interval]
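For counting, the window-then-count pair above can also be expressed as one fused operation; a small sketch, assuming hashTags as defined in the slide:

// Count hashtag occurrences over the last minute, recomputed every 5 seconds;
// countByValueAndWindow combines window() and countByValue() in a single call.
val tagCounts = hashTags.countByValueAndWindow(Minutes(1), Seconds(5))
tagCounts.print()  // print a few (tag, count) pairs from each batch to stdout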

Arbitrary Stateful Computations
- Specify a function to generate new state based on the previous state and new data
- Example: maintain per-user mood as state, and update it with their tweets
updateMood(newTweets, lastMood) => newMood
moods = tweets.updateStateByKey(updateMood _)
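The mood example is schematic; as a concrete hedged sketch, here is the same mechanism keeping a running count per hashtag. The (Seq, Option) signature is the classic updateStateByKey contract, and the operation needs a checkpoint directory because the state itself must survive failures:

// Merge this batch's counts for a tag into its running total
def updateTotal(newCounts: Seq[Int], total: Option[Int]): Option[Int] =
  Some(total.getOrElse(0) + newCounts.sum)

ssc.checkpoint("hdfs://...")  // stateful operators need checkpointing enabled

val tagTotals = hashTags.map(tag => (tag, 1)).updateStateByKey(updateTotal _)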

Arbitrary Combinations of Batch and Streaming Computations
- Inter-mix RDD and DStream operations!
- Example: join incoming tweets with a spam HDFS file to filter out bad tweets
tweets.transform(tweetsRDD => {
  tweetsRDD.join(spamHDFSFile).filter(...)
})
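The join body is elided in the slide; one way it could look is sketched below, assuming the spam file holds one known-spammer screen name per line. The keying, the twitter4j accessors, and the path are illustrative assumptions, not the tutorial's code.

// Known-spammer list loaded once as (screenName, flag) pairs
val spammers = ssc.sparkContext.textFile("hdfs://...").map(name => (name, true))

val cleanTweets = tweets.transform { tweetsRDD =>
  tweetsRDD
    .map(status => (status.getUser.getScreenName, status)) // key tweets by author
    .leftOuterJoin(spammers)                               // attach spam flag, if any
    .filter { case (_, (_, flag)) => flag.isEmpty }        // keep tweets with no flag
    .map { case (_, (status, _)) => status }               // back to plain tweets
}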

DStream Input Sources
- Out of the box we provide: Kafka, HDFS, Flume, Akka Actors, raw TCP sockets (one-line sketches below)
- Very easy to write a receiver for your own data source
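As a small sketch, the built-in sources are typically one-liners on the streaming context; the host, port, and path here are placeholders:

val socketLines = ssc.socketTextStream("localhost", 9999) // raw TCP socket, line-delimited text
val fileLines = ssc.textFileStream("hdfs://...")          // new files appearing in an HDFS directory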

Fault-tolerance: Worker
- RDDs remember the operations that created them
- Batches of input data are replicated in memory for fault-tolerance
- Data lost due to worker failure can be recomputed from the replicated input data
- All transformed data is fault-tolerant, and transformations are exactly-once
[Diagram: input data replicated in memory; lost partitions of the hashTags RDD (flatMapped from the tweets RDD) are recomputed on other workers]

Fault-tolerance: Master
- The master saves the state of the DStreams to a checkpoint file, written to HDFS periodically
- If the master fails, it can be restarted using the checkpoint file
- More information in the Spark Streaming guide (link later in the presentation)
- Automated master fault recovery coming soon
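Enabling checkpointing is a single call; the path below is a placeholder, and later Spark releases also added StreamingContext.getOrCreate to rebuild a context from its checkpoint on restart:

// Periodically save DStream metadata and stateful-operator state to HDFS
ssc.checkpoint("hdfs://...")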

Performance
- Can process 6 GB/sec (60M records/sec) of data on 100 nodes at sub-second latency
- Tested with 100 text streams on 100 EC2 instances with 4 cores each

Comparison with Storm and S4
- Higher throughput than Storm:
  - Spark Streaming: 670k records/second/node
  - Storm: 115k records/second/node
  - Apache S4: 7.5k records/second/node
- Streaming Spark offers similar speed while providing fault-tolerance and consistency guarantees that these systems lack

Fast Fault Recovery
- Recovers from faults/stragglers within 1 second

Real Applications: Mobile Millennium Project
- Traffic transit-time estimation using online machine learning on GPS observations
- Markov chain Monte Carlo simulations on GPS observations
- Very CPU-intensive: requires dozens of machines for useful computation
- Scales linearly with cluster size

Real Applications: Conviva
- Real-time monitoring and optimization of video metadata
- Aggregation of performance data from millions of active video sessions across thousands of metrics
- Multiple stages of aggregation
- Successfully ported to run on Spark Streaming
- Scales linearly with cluster size

Unifying Batch and Stream Processing Models
Spark program on a Twitter log file using RDDs:
val tweets = sc.hadoopFile("hdfs://...")
val hashTags = tweets.flatMap(status => getTags(status))
hashTags.saveAsHadoopFile("hdfs://...")
Spark Streaming program on a Twitter stream using DStreams:
val tweets = ssc.twitterStream()
val hashTags = tweets.flatMap(status => getTags(status))
hashTags.saveAsHadoopFiles("hdfs://...")
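The point of the parallel snippets is that the middle line is identical; a sketch of making that sharing explicit by factoring the extraction into one function used by both paths (getTags here operates on raw text lines, and textTweets is an assumed DStream[String]; both are illustrative simplifications):

import org.apache.spark.rdd.RDD

// One extraction function shared by the batch and streaming programs
def getTags(lines: RDD[String]): RDD[String] =
  lines.flatMap(_.split(" ").filter(_.startsWith("#")))

val batchTags = getTags(sc.textFile("hdfs://..."))   // batch: apply to an RDD from HDFS
val streamTags = textTweets.transform(getTags _)     // streaming: apply per batch via transform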

Vision - one stack to rule them all
- Explore data interactively using the Spark shell to identify problems
- Use the same code in standalone Spark programs to identify problems in production logs
- Use similar code in Spark Streaming to identify problems in live log streams

$ ./spark-shell
scala> val file = sc.hadoopFile("smallLogs")
...
scala> val filtered = file.filter(_.contains("ERROR"))
scala> val mapped = filtered.map(...)

object ProcessProductionData {
  def main(args: Array[String]) {
    val sc = new SparkContext(...)
    val file = sc.hadoopFile("productionLogs")
    val filtered = file.filter(_.contains("ERROR"))
    val mapped = filtered.map(...)
    ...
  }
}

object ProcessLiveStream {
  def main(args: Array[String]) {
    val sc = new StreamingContext(...)
    val stream = sc.kafkaStream(...)
    val filtered = stream.filter(_.contains("ERROR"))
    val mapped = filtered.map(...)
    ...
  }
}

Vision - one stack to rule them all
[Diagram: a single stack (Spark + Shark + Spark Streaming) serving ad-hoc queries, batch processing, and stream processing]

Today's Tutorial
- Process the Twitter data stream to find the most popular hashtags
- Requires a Twitter account
- Need to set up Twitter OAuth keys; all the instructions are in the tutorial
- Your account is safe!
  - No need to enter your password anywhere; only enter the keys in the configuration file
  - Destroy the keys after the tutorial is done

Conclusion
- Integrated with Spark as an extension
- Takes 5 minutes to spin up a Spark cluster to try it out
- Streaming programming guide - http://spark.incubator.apache.org/docs/latest/streaming-programming-guide.html
- Paper - tinyurl.com/dstreams
Thank you!