
1 Optimizing Cloud MapReduce for Processing Stream Data using Pipelining. 2011 UKSim 5th European Symposium on Computer Modeling and Simulation. Speaker: Hong-Ji Wei

2 Outline 1. Introduction 2. Literature Survey 3. Our Proposed Architecture 4. Advantages, Features and Applications 5. Conclusions and Future Work

3 1. Introduction Cloud MapReduce (CMR) is gaining popularity among small companies for processing large data sets in cloud environments. The current implementation of CMR is designed for batch processing of data. Processing streaming data therefore requires significant changes to the existing CMR architecture.

4 1. Introduction We use pipelining between the Map and Reduce phases as an approach to support stream data processing. In contrast to the current implementation, where the Reducers do not start working until all Mappers have finished, in our architecture the Reduce phase also receives a continuous stream of data and can produce continuous output.

5 2. Literature Survey MapReduce is a programming model developed by Google for processing large data sets in a distributed fashion. The model consists of two phases: the Map phase and the Reduce phase. [Figure: word-count dataflow. A Map task reads a file containing "Cake Pastry" and emits (Cake,1), (Pastry,1); the shuffled pairs are fed to a Reduce task, which aggregates them into the result (Cake,2), (Pastry,3).]
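To make the two phases concrete, here is a minimal word-count sketch in plain Python (not from the paper; function names such as map_fn and reduce_fn are ours). It reproduces the (Cake,2)/(Pastry,3) result from the figure:

```python
# Minimal word-count sketch of the two MapReduce phases (illustrative only).
from collections import defaultdict

def map_fn(line):
    """Map phase: emit (word, 1) for every word in an input record."""
    for word in line.split():
        yield (word, 1)

def reduce_fn(word, counts):
    """Reduce phase: aggregate all counts emitted for one key."""
    return (word, sum(counts))

def word_count(lines):
    grouped = defaultdict(list)            # shuffle/group by key
    for line in lines:
        for word, one in map_fn(line):
            grouped[word].append(one)
    return [reduce_fn(w, c) for w, c in grouped.items()]

print(word_count(["Cake Pastry", "Cake Pastry Pastry"]))
# [('Cake', 2), ('Pastry', 3)]
```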

6 2. Literature Survey Hadoop is an implementation of the MapReduce programming model developed by Apache. It incorporates a distributed file system called HDFS. Hadoop is popular for processing huge data sets, especially in social networking, targeted advertisements, internet log processing, etc.

7 2. Literature Survey Cloud MapReduce is a light-weight implementation of the MapReduce programming model on top of the Amazon cloud OS. The architecture of CMR consists of one input queue, multiple reduce queues, a master reduce queue that holds pointers to the reduce queues, and an output queue that holds the final results. The S3 file system is used to store the data to be processed, and SimpleDB is used to communicate the status of the worker nodes.
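A rough sketch of this queue layout, assuming the modern boto3 SDK and illustrative queue names (CMR itself is a Java project and its actual queue names differ):

```python
# Sketch of the CMR queue layout using Amazon SQS via boto3.
# Queue names and the number of reduce queues are assumptions for illustration.
import boto3

sqs = boto3.client("sqs")

# One input queue, an output queue for final results, and a master reduce
# queue that holds pointers (queue URLs) to the reduce queues.
input_q  = sqs.create_queue(QueueName="cmr-input")["QueueUrl"]
output_q = sqs.create_queue(QueueName="cmr-output")["QueueUrl"]
master_q = sqs.create_queue(QueueName="cmr-master-reduce")["QueueUrl"]

reduce_qs = []
for i in range(4):                        # number of reduce queues is arbitrary here
    url = sqs.create_queue(QueueName=f"cmr-reduce-{i}")["QueueUrl"]
    reduce_qs.append(url)
    # The master reduce queue stores a pointer to each reduce queue.
    sqs.send_message(QueueUrl=master_q, MessageBody=url)
```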

8 2. Literature Survey Hadoop Online Prototype (HOP) is a modification of the traditional Hadoop framework that incorporates pipelining between the Map and Reduce phases. A downstream dataflow element can begin processing before an upstream producer finishes, which gives HOP support for processing streaming data. It also supports snapshots of output data.
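The pipelining idea can be illustrated with a toy Python generator pipeline (ours, not HOP code): the consumer starts aggregating, and can emit snapshots, before the producer has finished.

```python
# Toy illustration of pipelined dataflow: the "reduce" consumer processes map
# output as it arrives instead of waiting for the whole map phase to end.
import time
from collections import defaultdict

def map_stream(records):
    for rec in records:
        time.sleep(0.01)                  # pretend each record takes time to map
        for word in rec.split():
            yield (word, 1)               # emitted immediately, not buffered per job

def running_counts(pairs):
    counts = defaultdict(int)
    for word, one in pairs:               # consumes pairs as they arrive
        counts[word] += one
        yield dict(counts)                # a "snapshot" of the partial result

for snapshot in running_counts(map_stream(["Cake Pastry", "Cake"])):
    print(snapshot)                       # partial results appear before input ends
```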

9 3. Our Proposed Architecture The existing implementations suffer from the following drawbacks: 1. HOP is unsuitable for the cloud; it lacks the cloud's inherent scalability and flexibility. 2. In HOP, the code for handling HDFS, reliability, scheduling, etc. is part of the Hadoop framework itself, which makes it large and heavy-weight. 3. Cloud MapReduce does not support stream data processing.

10 3. Our Proposed Architecture Our proposal aims at bridging this gap between the heavy-weight HOP and the light-weight, scalable Cloud MapReduce implementation by providing support for processing stream data in CMR. The challenges involved in the implementation include: 1. Providing support for streaming data at input. 2. A novel design for output aggregation. 3. Handling Reducer failures. 4. Handling windows based on timestamps.

11 3. Our Proposed Architecture Currently, no open-source implementation exists for processing streaming data using MapReduce on top of the cloud. To the best of our knowledge, this is the first attempt to integrate stream data processing capability with MapReduce on Amazon Web Services using EC2. We now describe the architecture of the Pipelined CMR approach, which has three parts: Input, Mapper Operation, and Reduce Phase.

12 3.1. Input A drop-box concept can be used, where a folder on S3 holds the data that is to be processed by Cloud MapReduce. The user is responsible for placing data in the drop-box, from which it is sent to the input SQS queue. [Input message format: Metadata, Key_i, Value_i]
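A sketch of the drop-box step, assuming boto3 and hypothetical bucket and queue names; the message body layout below is illustrative only, not the paper's exact format:

```python
# Sketch of the drop-box idea: upload a file to an S3 folder and enqueue a
# pointer message on the input SQS queue. Names are assumptions.
import boto3, json

s3  = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET, DROPBOX = "cmr-dropbox-bucket", "dropbox/"
INPUT_QUEUE_URL = sqs.create_queue(QueueName="cmr-input")["QueueUrl"]

def submit(local_path, name):
    key = DROPBOX + name
    s3.upload_file(local_path, BUCKET, key)   # place the data in the drop-box
    # Input message: some metadata plus the (key, value) the Mapper will read.
    body = json.dumps({"metadata": {"source": name},
                       "key": key,
                       "value": f"s3://{BUCKET}/{key}"})
    sqs.send_message(QueueUrl=INPUT_QUEUE_URL, MessageBody=body)
```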

13 3.2. Mapper Operation The Mapper, whenever it is free, pops one message from the input SQS queue, which hides the message from other workers for the duration of the visibility timeout, and processes it according to the user-defined Map function.
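A sketch of such a Mapper loop with boto3 (function and parameter names are ours). The message is deleted only after the Map function succeeds, so a crashed Mapper's message reappears after the visibility timeout and another Mapper can pick it up:

```python
# Sketch of the Mapper loop against the input SQS queue.
import boto3, json

sqs = boto3.client("sqs")

def mapper_loop(input_queue_url, user_map_fn, visibility_timeout=120):
    while True:
        resp = sqs.receive_message(QueueUrl=input_queue_url,
                                   MaxNumberOfMessages=1,
                                   VisibilityTimeout=visibility_timeout,
                                   WaitTimeSeconds=10)
        for msg in resp.get("Messages", []):
            record = json.loads(msg["Body"])
            user_map_fn(record["key"], record["value"])   # user-defined Map
            # Only now is the message actually removed from the queue; a crash
            # before this point makes it visible again after the timeout.
            sqs.delete_message(QueueUrl=input_queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
```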

14 3.3. Reduce Phase The Mapper writes the intermediate records it produces to ReduceQueues. ReduceQueues are intermediate staging queues, implemented using SQS, that hold the Mapper output.
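A sketch of how a Mapper might route an intermediate record to one of the ReduceQueues; hash-partitioning by key is an assumption on our part, used so that all records for the same key reach the same Reducer:

```python
# Sketch: push an intermediate (key, value) record to one of the ReduceQueues.
import boto3, json, zlib

sqs = boto3.client("sqs")

def emit_intermediate(key, value, reduce_queue_urls):
    # Stable hash so every record for the same key lands in the same queue.
    idx = zlib.crc32(key.encode()) % len(reduce_queue_urls)
    sqs.send_message(QueueUrl=reduce_queue_urls[idx],
                     MessageBody=json.dumps({"key": key, "value": value}))
```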

15 3.4. Handling Reducer Failures The Status field is used for handling Reducer failures; Status can be one of Live, Dead, or Idle. If a Reducer has not updated its status, its Status is set to Dead. A Reducer whose status is Idle is then assigned the Reduce Queue Pointers and Output Pointers previously held by the failed Reducer.
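A sketch of this failure-handling logic. The paper keeps worker status in SimpleDB; here a plain dict stands in for that store, and the timeout value and helper names are assumptions:

```python
# Sketch of Reducer failure detection and hand-over based on the Status field.
import time

HEARTBEAT_TIMEOUT = 60      # seconds without an update before a Reducer is Dead

# reducer_id -> {"status", "last_update", "reduce_queues", "outputs"}
status_table = {}

def heartbeat(reducer_id):
    status_table[reducer_id]["status"] = "Live"
    status_table[reducer_id]["last_update"] = time.time()

def check_failures():
    now = time.time()
    for entry in status_table.values():
        if entry["status"] == "Live" and now - entry["last_update"] > HEARTBEAT_TIMEOUT:
            entry["status"] = "Dead"      # missed heartbeat -> mark Dead
            idle = next((e for e in status_table.values()
                         if e["status"] == "Idle"), None)
            if idle is not None:
                # Hand the dead Reducer's queue and output pointers to an idle one.
                idle["reduce_queues"] += entry["reduce_queues"]
                idle["outputs"] += entry["outputs"]
                idle["status"] = "Live"
                idle["last_update"] = now
```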

16 4. Advantages, Features and Applications The design has the following advantages: 1. Pipelining allows parallelism between the Map and Reduce phases. 2. A downstream processing element can start processing as soon as some data is available from an upstream element. 3. The network is better utilized, as data is continuously pushed from one phase to the next. 4. The final output is computed incrementally.

17 4. Advantages, Features and Applications 5. Introducing a pipeline between the Reduce phase of one job and the Map phase of the next job will support cascaded MapReduce jobs.

18 4. Advantages, Features and Applications Other features of the design include: time windows, snapshots, and cascaded MapReduce jobs.

19 4. Advantages, Features and Applications Applications: With these features, the design is particularly suited to stream processing of data. Analysis and processing of web feeds, click-streams, micro-blogging, and stock market quotes are typical stream processing applications. The design can also be used to process real-time data.

20 5. Conclusions and Future Work The design fulfills a real need of processing streaming data using MapReduce. Further work can be done in maintaining intermediate output information for supporting rolling windows. Future scope also includes designing a generic system that is portable across several cloud operating systems.

