
1 Map-Reduce
Big Data, Map-Reduce, Apache Hadoop
SoftUni Team, Technical Trainers
Software University – http://softuni.bg

2 Table of Contents
1. Big Data
2. Map-Reduce
   - What is Map-Reduce?
   - How It Works
   - Mappers and Reducers
   - Examples
3. Apache Hadoop

3 Big Data: What is Big Data Processing?

4 Big Data
- Big data == processing very large data sets (terabytes / petabytes)
- Data so large or complex that traditional data processing is inadequate
- Usually stored and processed by distributed databases
- Often related to analysis of very large data sets
- Typical components of big data systems:
  - Distributed databases (like Cassandra, HBase and Hive)
  - Distributed processing frameworks (like Apache Hadoop)
  - Distributed processing models (like Map-Reduce)
  - Distributed file systems (like HDFS)

5 Map-Reduce

6 What is Map-Reduce?
- Map-Reduce is a distributed processing framework
  - A computational model for processing huge data-sets (terabytes)
  - Uses parallel processing on large clusters (thousands of nodes)
  - Relies on a distributed infrastructure, like an Apache Hadoop or MongoDB cluster
- The input and output data are stored in a distributed file system (or a distributed database)
- The framework takes care of scheduling, executing and monitoring tasks, and re-executes failed tasks

7 Map-Reduce: How It Works
How does map-reduce work?
1. Splits the input data-set into independent chunks
2. Processes each chunk with "map" tasks in a parallel manner
   - The "map" function groups the input data into key-value pairs
   - Equal keys are processed by the same "reduce" node
3. The outputs of the "map" tasks are sorted, grouped by key, then sent as input to the "reduce" tasks
4. The "reduce" tasks aggregate the results for each key and produce the final output
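
To make these four steps concrete, here is a minimal in-memory sketch of the model in plain Java (illustrative only, not Hadoop; the chunk contents and all names are made up):

import java.util.*;

public class MiniMapReduce {
    public static void main(String[] args) {
        // 1. Split the input data-set into independent chunks
        List<String> chunks = Arrays.asList("the cat sat", "the dog sat");

        // 2+3. "Map" each chunk into (word, 1) pairs and group them by key
        //      (in Hadoop the grouping is done by the sort/shuffle step)
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String chunk : chunks)
            for (String word : chunk.split("\\s+"))
                grouped.computeIfAbsent(word, k -> new ArrayList<>()).add(1);

        // 4. "Reduce": aggregate all values for each key
        for (Map.Entry<String, List<Integer>> entry : grouped.entrySet()) {
            int sum = 0;
            for (int count : entry.getValue()) sum += count;
            System.out.println(entry.getKey() + " -> " + sum);
        }
    }
}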

8 The Map-Reduce Process
- The map-reduce process is a sequence of transformations, executed on several nodes in parallel:
  (input) -> map -> combine -> reduce -> (output)
- Map: groups input chunks of data into key-value pairs
  - E.g. splits documents (id, chunk-content) into words (word, count)
- Combine: sorts and combines all values with the same key
  - E.g. produces a list of counts for each word
- Reduce: combines (aggregates) all values for a certain key

9 Example: Counting Words
- We have a very large set of documents (e.g. 200 terabytes)
- We want to count how many times each word occurs
- Input: a set of documents {key + content}
- Mapper:
  - Extracts the words from each document (words are used as keys)
  - Transforms documents {key + content} -> word-count pairs {word, count}
- Reducer:
  - Sums the counts for each word
  - Transforms {word, list<count>} -> word-count pairs {word, count}

10 Counting Words: Mapper and Reducer

public void map(Object offset, Text docText, Context context)
        throws IOException, InterruptedException {
    String[] words = docText.toString().toLowerCase().split("\\W+");
    for (String word : words)
        context.write(new Text(word), new IntWritable(1));
}

public void reduce(Text word, Iterable<IntWritable> counts, Context context)
        throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable count : counts)
        sum += count.get();
    context.write(word, new IntWritable(sum));
}
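
These two methods live inside Mapper and Reducer subclasses and need a driver class to wire them into a Hadoop job. A hedged sketch of such a driver (the class names WordCountMapper and WordCountReducer are assumptions, not from the slides); note how setCombinerClass reuses the reducer as the local "combine" step from slide 8:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCountMapper.class);     // the map() above
        job.setCombinerClass(WordCountReducer.class);  // local pre-aggregation
        job.setReducerClass(WordCountReducer.class);   // the reduce() above
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}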

11 Word Count in Apache Hadoop Live Demo

12 Example: Extract Data from a CSV Report
- We are given a CSV file holding real estate sales data:
  - Estate address, city, ZIP code, state, # beds, # baths, square feet, sale date, price and GPS coordinates (latitude + longitude)
- Find all cities that have sales in the price range [100 000 … 200 000]
- As a side effect, find the sum of these sales for each city

13 Process CSV Report: How It Works
The job computes the equivalent of this SQL query:

SELECT city, SUM(price)
FROM Sales
GROUP BY city

The input file is chunked, each chunk is mapped to (city, price) pairs, and the reduce step produces the per-city sums, e.g.:

city         sum(price)
SACRAMENTO   625140
LINCOLN      843620
RIO LINDA    348500

14 Process CSV Report: Mapper and Reducer

public void map(Object offset, Text inputCSVLine, Context context)
        throws IOException, InterruptedException {
    String[] fields = inputCSVLine.toString().split(",");
    String city = fields[1];
    int price = Integer.parseInt(fields[9]);
    if (price > 100000 && price < 200000)
        context.write(new Text(city), new LongWritable(price));
}

public void reduce(Text city, Iterable<LongWritable> prices, Context context)
        throws IOException, InterruptedException {
    long sum = 0;
    for (LongWritable val : prices)
        sum += val.get();
    context.write(city, new LongWritable(sum));
}
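
One practical caveat the slide code glosses over: real CSV exports usually start with a header row, and Integer.parseInt would throw on it. A hedged variant of the map method that skips unparsable lines (same assumed field layout as above):

public void map(Object offset, Text inputCSVLine, Context context)
        throws IOException, InterruptedException {
    String[] fields = inputCSVLine.toString().split(",");
    if (fields.length < 10)
        return;                        // skip blank or malformed lines
    try {
        int price = Integer.parseInt(fields[9].trim());
        if (price > 100000 && price < 200000)
            context.write(new Text(fields[1]), new LongWritable(price));
    } catch (NumberFormatException e) {
        // header row or non-numeric price column -- ignore the line
    }
}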

15 Processing CSV Report in Apache Hadoop (Live Demo)

16 Apache Hadoop: Distributed Processing Framework

17 Apache Hadoop
- The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing:
  - Hadoop Distributed File System (HDFS) – a distributed file system that transparently moves data across Hadoop cluster nodes
  - Hadoop MapReduce – the map-reduce framework
  - HBase – a scalable, distributed database for large tables
  - Hive – SQL-like queries over large datasets
  - Pig – a high-level data-flow language for parallel computation
- Hadoop is driven by big players like IBM, Microsoft, Facebook, VMware, LinkedIn, Yahoo, Cloudera, Intel, Twitter, Hortonworks, …

18 Hadoop Ecosystem
- HDFS Storage
  - Redundant (3 copies)
  - For large files – large blocks: 64 MB or 128 MB per block
  - Can scale to 1000s of nodes
- MapReduce API
  - Batch (job) processing
  - Distributed and localized to clusters
  - Auto-parallelizable for huge amounts of data
  - Fault-tolerant (auto retries)
  - Adds high availability and more
- Hadoop Libraries
  - Pig, Hive, HBase, others

19 Hadoop Cluster
- HDFS (physical) storage
  - One Name Node: contains a web site to view cluster information; Hadoop V2 uses multiple Name Nodes for high availability
  - Many Data Nodes: 3 copies of each block by default; block size is 64 or 128 MB
  - A Secondary Name Node
- Work with data in HDFS using common Linux shell commands
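
The replication factor and block size above are ordinary HDFS settings (dfs.replication and dfs.blocksize in hdfs-site.xml); assuming a running cluster, you can check the effective values from the shell:

hdfs getconf -confKey dfs.replication
hdfs getconf -confKey dfs.blocksize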

20 Common Hadoop Shell Commands

hadoop fs -ls /user/hadoop/dir1
hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
hadoop fs -cat file:///file2
hadoop fs -cat hdfs://nn1.example.com/file1
hadoop fs -copyFromLocal <localsrc> <dst>
hadoop fs -put <localfile> hdfs://nn.example.com/hadoopfile
hadoop fs -get /user/hadoop/file <localdst>
sudo hadoop jar <jar-file> <main-class> <args>

Tips:
- sudo means "run as administrator" (super user)
- Some distributions use hadoop dfs rather than hadoop fs
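
As a usage example, an end-to-end run of the word-count job from slide 10 might look like this (the jar and class names are hypothetical; part-r-00000 is Hadoop's default name for the first reducer's output file):

hadoop fs -mkdir -p /user/hadoop/input
hadoop fs -put docs/*.txt /user/hadoop/input
hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output
hadoop fs -cat /user/hadoop/output/part-r-00000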

21 Hadoop Shell Commands (Live Demo)

22 Hadoop MapReduce
- Apache Hadoop MapReduce is the world's leading implementation of the map-reduce computational model
  - Provides parallelized (scalable) computing
  - For processing very large data sets
  - Fault-tolerant
  - Runs on commodity hardware
- Implemented in many cloud platforms: Amazon EMR, Azure HDInsight, Google Cloud, Cloudera, Rackspace, HP Cloud, …

23 Hadoop Map-Reduce Pipeline

24 Hadoop: Getting Started
- Download and install Java and Hadoop
  - http://hadoop.apache.org/releases.html
  - You will need to install Java first
- Or download a pre-installed Hadoop virtual machine (VM)
  - Hortonworks Sandbox
  - Cloudera QuickStart VM
- Or use Hadoop in the cloud / in a local emulator
  - E.g. Azure HDInsight Emulator

25 Playing with Apache Hadoop (Live Demo)

26 Summary
- Big data == processing huge datasets that are too big for a single machine
  - Use a cluster of computing nodes
- Map-reduce == a computational paradigm for parallel processing of huge data-sets
  - Data is chunked, then mapped into groups, then the groups are processed and the results are aggregated
  - Highly scalable, can process petabytes of data
- Apache Hadoop – the industry's leading map-reduce framework

27 Questions?
Map-Reduce
https://softuni.bg/trainings/1194/Algorithms-September-2015

28 License
- This course (slides, examples, labs, videos, homework, etc.) is licensed under the "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International" license
- Attribution: this work may contain portions from:
  - "Fundamentals of Computer Programming with C#" book by Svetlin Nakov & Co. under the CC-BY-SA license
  - "Data Structures and Algorithms" course by Telerik Academy under the CC-BY-NC-SA license

29 Free Trainings @ Software University
- Software University Foundation – softuni.org
- Software University – High-Quality Education, Profession and Job for Software Developers – softuni.bg
- Software University @ Facebook – facebook.com/SoftwareUniversity
- Software University @ YouTube – youtube.com/SoftwareUniversity
- Software University Forums – forum.softuni.bg

