1 Brief Overview on Bigdata, Hadoop, MapReduce Jianer Chen CSCE-629, Fall 2015

2 A Lot of Data
Google processes 20 PB a day (2008)
Wayback Machine has 3 PB + 100 TB/month (03/2009) – 9.6 PB recently
Facebook processes 500 TB/day (08/2012)
eBay has > 10 PB of user data + 50 TB/day (01/2012)
CERN Data Centre has over 100 PB of physics data
KB (kilobyte) = 10^3 bytes; MB (megabyte) = 10^6 bytes; GB (gigabyte) = 10^9 bytes; TB (terabyte) = 10^12 bytes; PB (petabyte) = 10^15 bytes

3 A Lot of Data: Google Example
20+ billion web pages x 20 KB = 400+ TB
- one computer reads 30-35 MB/sec from disk, so it would take more than 4 months just to read the web pages
- 1,000 hard drives are needed just to store the web pages
Not scalable: it takes even more to do something useful with the data!
A standard architecture for such problems has emerged:
- cluster of commodity Linux nodes
- commodity network (Ethernet) to connect them
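As a sanity check on the reading-time claim, here is a small back-of-the-envelope calculation in Python (the 20 KB page size and 30-35 MB/s disk rate are the figures from the slide):

```python
# Back-of-the-envelope check of the claim above; the figures come from the slide.
pages = 20e9                      # 20+ billion web pages
page_size = 20e3                  # ~20 KB per page
total_bytes = pages * page_size   # 4e14 bytes = 400 TB

read_rate = 35e6                  # ~35 MB/s sustained read from one disk
months = total_bytes / read_rate / (30 * 24 * 3600)
print(f"~{months:.1f} months to read everything from a single disk")  # ~4.4 months
```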

4 Cluster Architecture: Many Machines
Each rack has 16-64 nodes
1 Gbps switch between nodes in a rack
2-10 Gbps backbone between racks
Google had 1 million machines in 2011.

5 Cluster Architecture: Hadoop Cluster
Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware. It provides massive storage for any kind of data, enormous processing power, and the ability to handle virtually limitless concurrent tasks or jobs.
(Figure: a Hadoop cluster; DN = data node, TT = task tracker, NN = name node. From: http://bradhedlund.com/2011/09/10/understanding-hadoop-clusters-and-the-network/)

6 Cluster Computing: A Classical Algorithmic Idea – Divide-and-Conquer
(Diagram: the work is partitioned into work 1, work 2, work 3, and work 4; each piece is solved by a "worker", producing result 1, result 2, result 3, and result 4; the partial results are combined into the final result.)

7 Challenges in Cluster Computing

8 Challenges in Cluster Computing
How do we assign work units to workers?
What if we have more work units than workers?
What if workers need to share partial results?
How do we aggregate partial results?
How do we know all the workers have finished?
What if workers die?

9 Challenges in Cluster Computing
How do we assign work units to workers?
What if we have more work units than workers?
What if workers need to share partial results?
How do we aggregate partial results?
How do we know all the workers have finished?
What if workers die?
What is the common theme of all of these problems?

10 Challenges in Cluster Computing
How do we assign work units to workers?
What if we have more work units than workers?
What if workers need to share partial results?
How do we aggregate partial results?
How do we know all the workers have finished?
What if workers die?
What is the common theme of all of these problems?
Parallelization problems arise from:
- communication between workers (e.g., to exchange state)
- access to shared resources (e.g., data)
We need a synchronization mechanism.

11 Therefore,
We need the right level of abstraction
– a new model more appropriate for the multicore/cluster environment
Hide system-level details from the developers
– no more race conditions, lock contention, etc.
Separate the what from the how
– the developer specifies the computation that needs to be performed
– the execution framework handles the actual execution

12 Therefore,
We need the right level of abstraction
– a new model more appropriate for the multicore/cluster environment
Hide system-level details from the developers
– no more race conditions, lock contention, etc.
Separate the what from the how
– the developer specifies the computation that needs to be performed
– the execution framework handles the actual execution
This motivated MapReduce.

13 MapReduce: Big Ideas

14 MapReduce: Big Ideas
Failures are common in cluster systems
– MapReduce implementation copes with failures (auto task restart)

15 MapReduce: Big Ideas
Failures are common in cluster systems
– MapReduce implementation copes with failures (auto task restart)
Data movements are expensive in supercomputers
– MapReduce moves processing to data (leverage locality)

16 MapReduce: Big Ideas
Failures are common in cluster systems
– MapReduce implementation copes with failures (auto task restart)
Data movements are expensive in supercomputers
– MapReduce moves processing to data (leverage locality)
Disk I/O is time-consuming
– MapReduce organizes computation into long streaming operations

17 MapReduce: Big Ideas
Failures are common in cluster systems
– MapReduce implementation copes with failures (auto task restart)
Data movements are expensive in supercomputers
– MapReduce moves processing to data (leverage locality)
Disk I/O is time-consuming
– MapReduce organizes computation into long streaming operations
Developing distributed software is difficult
– MapReduce isolates developers from implementation details.

18 Typical Large-Data Problem
Iterate over a large number of records
Extract something of interest from each
Shuffle and sort intermediate results
Aggregate intermediate results
Generate final output

19 Typical Large-Data Problem
Iterate over a large number of records       [map]
Extract something of interest from each      [map]
Shuffle and sort intermediate results
Aggregate intermediate results
Generate final output

20 Typical Large-Data Problem
Iterate over a large number of records       [map]
Extract something of interest from each      [map]
Shuffle and sort intermediate results
Aggregate intermediate results               [reduce]
Generate final output                        [reduce]

21 Typical Large-Data Problem
Iterate over a large number of records       [map]
Extract something of interest from each      [map]
Shuffle and sort intermediate results
Aggregate intermediate results               [reduce]
Generate final output                        [reduce]
Key idea of MapReduce: provide a functional abstraction for these two operations. [Dean and Ghemawat, OSDI 2004]

22 MapReduce: General Framework
(Diagram: the input is split among several map tasks; their outputs feed one or more reduce tasks, which produce the output.)

23 MapReduce: General Framework
(Diagram: the input is divided into InputSplits, each handled by a map task; the intermediate results go through shuffle and sort to the reduce tasks; the output is written to the DFS.)

24 MapReduce: General Framework
(Same diagram, annotated: the map and reduce functions are user specified; input splitting, shuffle and sort, and writing output to the DFS are system provided.)

25 MapReduce
Programmers specify two functions:
map (k1, v1) → (k2, v2)*
reduce (k2, v2*) → (k3, v3)*
All values with the same key are sent to the same reducer.
The execution framework handles everything else.

26 MapReduce
Programmers specify two functions:
map (k1, v1) → (k2, v2)*
reduce (k2, v2*) → (k3, v3)*
All values with the same key are sent to the same reducer.
The execution framework handles everything else.
Example: Word Count
map(docID, text) → (word, 1)*
Map(String docID, String text):
    for each word w in text:
        Emit(w, 1)
reduce(word, [1, …, 1]) → (word, sum)*
Reduce(String word, Iterator values):
    int sum = 0;
    for each v in values:
        sum += v;
    Emit(word, sum);

27 MapReduce: Word Count
Example: Word Count
Map(String docID, String text):
    for each word w in text:
        Emit(w, 1)
Reduce(String word, Iterator values):
    int sum = 0;
    for each v in values:
        sum += v;
    Emit(word, sum);
(Diagram: each map task reads a (docID, text) pair and emits pairs such as (a, 1), (b, 1), (a, 1), (c, 1), …; shuffle and sort aggregates values by key, giving (a, [1,1,1,1,1]), (b, [1,1,1]), (c, [1,1,1,1]); the reducers emit (a, 5), (b, 3), (c, 4); the output is written to the DFS.)
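To make the data flow concrete, here is a minimal single-machine Python sketch that simulates the map, shuffle-and-sort, and reduce phases for this word-count job; all function and variable names are illustrative, not part of the Hadoop API:

```python
from itertools import groupby

def map_fn(doc_id, text):
    # Map(docID, text): emit (word, 1) for every word occurrence
    for word in text.split():
        yield (word, 1)

def reduce_fn(word, values):
    # Reduce(word, [1, ..., 1]): emit (word, sum)
    yield (word, sum(values))

def word_count(docs):
    # Map phase over all (docID, text) records
    intermediate = []
    for doc_id, text in docs.items():
        intermediate.extend(map_fn(doc_id, text))
    # Shuffle and sort: bring all values with the same key together
    intermediate.sort(key=lambda kv: kv[0])
    # Reduce phase, one call per distinct key
    result = []
    for key, group in groupby(intermediate, key=lambda kv: kv[0]):
        result.extend(reduce_fn(key, (v for _, v in group)))
    return result

docs = {"d1": "a b a c", "d2": "b c c a", "d3": "a a c b"}
print(word_count(docs))   # [('a', 5), ('b', 3), ('c', 4)]
```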

28 MapReduce: Framework
Handles scheduling
– assigns workers to map and reduce tasks
Handles "data distribution"
– moves processes to data
Handles synchronization
– gathers, sorts, and shuffles intermediate data
Handles errors and faults
– detects worker failures and restarts
Everything happens on top of a distributed file system

29 MapReduce: User Specification
Programmers specify two functions:
map (k1, v1) → (k2, v2)*
reduce (k2, v2*) → (k3, v3)*
– all values with the same key are sent to the same reducer

30 MapReduce: User Specification
Programmers specify two functions:
map (k1, v1) → (k2, v2)*
reduce (k2, v2*) → (k3, v3)*
– all values with the same key are sent to the same reducer
Mappers & Reducers can specify any computation
– be careful with access to external resources!

31 MapReduce: User Specification
Programmers specify two functions:
map (k1, v1) → (k2, v2)*
reduce (k2, v2*) → (k3, v3)*
– all values with the same key are sent to the same reducer
Mappers & Reducers can specify any computation
– be careful with access to external resources!
The execution framework handles everything else

32 MapReduce: User Specification
Programmers specify two functions:
map (k1, v1) → (k2, v2)*
reduce (k2, v2*) → (k3, v3)*
– all values with the same key are sent to the same reducer
Mappers & Reducers can specify any computation
– be careful with access to external resources!
The execution framework handles everything else
Not quite… often, programmers also specify:
partition (k2, number of partitions) → partition for k2
– often a simple hash of the key, e.g., hash(k2) mod n
– divides up the key space for parallel reduce operations

33 MapReduce: User Specification
Programmers specify two functions:
map (k1, v1) → (k2, v2)*
reduce (k2, v2*) → (k3, v3)*
– all values with the same key are sent to the same reducer
Mappers & Reducers can specify any computation
– be careful with access to external resources!
The execution framework handles everything else
Not quite… often, programmers also specify:
partition (k2, number of partitions) → partition for k2
– often a simple hash of the key, e.g., hash(k2) mod n
– divides up the key space for parallel reduce operations
combine (k2, v2) → (k2', v2')
– mini-reducers that run in memory after the map phase
– used as an optimization to reduce network traffic
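A minimal sketch of what these two optional functions might look like for the word-count job, written as plain Python; the names and signatures are illustrative, not the actual Hadoop API:

```python
def partition(key, num_partitions):
    # Divide up the key space: a simple hash of the key, mod the number of reducers,
    # so all values for the same key land on the same reducer.
    return hash(key) % num_partitions

def combine(key, values):
    # Mini-reducer run on the map side: for word count, sum the local counts
    # before they are sent over the network.
    yield (key, sum(values))

# All occurrences of a key are routed to the same partition (here, 4 reducers):
assert partition("hadoop", 4) == partition("hadoop", 4)
```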

34 MapReduce: Word Count
Example: Word Count
Map(String docID, String text):
    for each word w in text:
        H[w] = H[w] + 1;
    for each word w in H:
        Emit(w, H[w])
Reduce(String word, Iterator values):
    int sum = 0;
    for each v in values:
        sum += v;
    Emit(word, sum);
(Diagram: each map task now aggregates counts locally before emitting, e.g., (a, 2), (b, 1), (c, 1) from one InputSplit; combine and partition steps process the map outputs; shuffle and sort groups values by key, giving (a, [2,1,2]), (b, [1,1,1]), (c, [1,1,2]); the reducers emit (a, 5), (b, 3), (c, 4).)
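The local table H in the mapper above is the in-mapper aggregation idea; a small Python sketch of that map function (illustrative names, not the Hadoop API):

```python
from collections import defaultdict

def map_fn_local_aggregation(doc_id, text):
    # In-mapper combining: count words in a local table H first,
    # then emit one (word, count) pair per distinct word.
    H = defaultdict(int)
    for word in text.split():
        H[word] += 1
    for word, count in H.items():
        yield (word, count)

print(sorted(map_fn_local_aggregation("d1", "a b a c")))
# [('a', 2), ('b', 1), ('c', 1)]  -- fewer pairs cross the network than one per occurrence
```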

35 Example: Shortest-Path

36 Example: Shortest-Path
Data structure: the adjacency list (with edge weights) for the graph
Each vertex v has a node ID
Let A_v be the set of neighbors of v
Let d_v be the current distance from the source to v

37 Example: Shortest-Path
Data structure: the adjacency list (with edge weights) for the graph
Each vertex v has a node ID
Let A_v be the set of neighbors of v
Let d_v be the current distance from the source to v
Basic ideas:
The original input is (s, [0, A_s]);

38 Example: Shortest-Path
Data structure: the adjacency list (with edge weights) for the graph
Each vertex v has a node ID
Let A_v be the set of neighbors of v
Let d_v be the current distance from the source to v
Basic ideas:
The original input is (s, [0, A_s]);
On an input (v, [d_v, A_v]), the Mapper emits pairs whose key (i.e., vertex) is in A_v, with a distance associated with d_v

39 Example: Shortest-Path
Data structure: the adjacency list (with edge weights) for the graph
Each vertex v has a node ID
Let A_v be the set of neighbors of v
Let d_v be the current distance from the source to v
Basic ideas:
The original input is (s, [0, A_s]);
On an input (v, [d_v, A_v]), the Mapper emits pairs whose key (i.e., vertex) is in A_v, with a distance associated with d_v
On an input (v, [d_v, A_v]*), the Reducer emits a pair (v, [d_v, A_v]) with the minimum distance d_v.

40 Example: Shortest-Path
Data structure: the adjacency list (with edge weights) for the graph
Each vertex v has a node ID
Let A_v be the set of neighbors of v
Let d_v be the current distance from the source to v

Map(v, [d_v, A_v]):
    Emit(v, [d_v, A_v]);
    for each w in A_v do
        Emit(w, [d_v + wt(v, w), A_w]);

Reduce(v, [d_v, A_v]*):
    d_min = +∞;
    for each [d_v, A_v] in [d_v, A_v]* do
        if d_min > d_v then d_min = d_v;
    Emit(v, [d_min, A_v])
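A minimal single-process Python sketch of one iteration of this computation, under the simplification that the adjacency lists live in a shared in-memory dict rather than being carried inside the (key, value) records; the graph and all names are illustrative:

```python
import math
from itertools import groupby

# adj[v] maps each neighbor w to the edge weight wt(v, w); this plays the
# role of the weighted adjacency list A_v.
adj = {"s": {"a": 2, "b": 5}, "a": {"b": 1, "c": 4}, "b": {"c": 1}, "c": {}}

def map_fn(v, d_v):
    yield (v, d_v)                      # keep the vertex's current distance
    for w, wt in adj[v].items():        # relax every outgoing edge
        yield (w, d_v + wt)

def reduce_fn(v, distances):
    return (v, min(distances))          # keep the minimum distance seen for v

def one_iteration(dist):
    intermediate = []
    for v, d_v in dist.items():
        intermediate.extend(map_fn(v, d_v))
    intermediate.sort(key=lambda kv: kv[0])   # shuffle and sort by vertex
    return dict(reduce_fn(v, [d for _, d in grp])
                for v, grp in groupby(intermediate, key=lambda kv: kv[0]))

dist = {"s": 0, "a": math.inf, "b": math.inf, "c": math.inf}
for _ in range(3):                      # enough iterations for this small graph
    dist = one_iteration(dist)
print(dist)   # {'a': 2, 'b': 3, 'c': 4, 's': 0}
```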

41 Example: Shortest-Path
MapReduce iterations
– the first time we run the algorithm, we discover all neighbors of the source s
– in the second iteration, we discover all "2nd level" neighbors of s
– each iteration expands the "search frontier" by one hop

42 Example: Shortest-Path
MapReduce iterations
– the first time we run the algorithm, we discover all neighbors of the source s
– in the second iteration, we discover all "2nd level" neighbors of s
– each iteration expands the "search frontier" by one hop
The approach is suitable for graphs with small diameter (e.g., "small-world" graphs)

43 Example: Shortest-Path
MapReduce iterations
– the first time we run the algorithm, we discover all neighbors of the source s
– in the second iteration, we discover all "2nd level" neighbors of s
– each iteration expands the "search frontier" by one hop
The approach is suitable for graphs with small diameter (e.g., "small-world" graphs)
Need a "driver" algorithm to check termination of the algorithm (in practice: Hadoop counters)

44 Example: Shortest-Path
MapReduce iterations
– the first time we run the algorithm, we discover all neighbors of the source s
– in the second iteration, we discover all "2nd level" neighbors of s
– each iteration expands the "search frontier" by one hop
The approach is suitable for graphs with small diameter (e.g., "small-world" graphs)
Need a "driver" algorithm to check termination of the algorithm (in practice: Hadoop counters)
Can be extended to include the actual path.

45 Summary: MapReduce Graph Algorithms
Store graphs as adjacency lists
Graph algorithms with MapReduce:
– each Map task receives a vertex and its outlinks
– the Map task computes some function of the link structure and then emits a value with the target as the key
– the Reduce task collects these keys (target vertices) and aggregates
Iterate over multiple MapReduce cycles until some termination condition is met
– the graph structure is passed from one iteration to the next
The idea can be used to solve other graph problems
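The iteration itself is controlled by a driver program outside MapReduce; below is a minimal Python sketch of such a driver loop, assuming a one_iteration function like the one in the shortest-path sketch above. In real Hadoop jobs this termination check is typically done with job counters; here the "counter" is just the number of vertices whose value improved.

```python
import math

def driver(dist, one_iteration):
    # Re-run MapReduce iterations until no vertex's distance improves,
    # mimicking a Hadoop driver that inspects a job counter after each job.
    while True:
        new_dist = one_iteration(dist)
        updated = sum(1 for v in new_dist
                      if new_dist[v] < dist.get(v, math.inf))
        if updated == 0:          # "counter" is zero: the search frontier stopped growing
            return new_dist
        dist = new_dist
```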

46 CSCE-629 Course Summary
Basic notations, concepts, and techniques
Data manipulation
Graph algorithms and applications
Computational optimization
NP-completeness theory

47 CSCE-629 Course Summary
Basic notations, concepts, and techniques:
– Pseudo-code for algorithms
– Big-Oh notation
– Divide-and-conquer
– Dynamic programming
– Solving recurrence relations
Data manipulation
Graph algorithms and applications
Computational optimization
NP-completeness theory

48 CSCE-629 Course Summary
Basic notations, concepts, and techniques
Data manipulation (data structures, algorithms, complexity):
– Heap
– 2-3 trees
– Hashing
– Union-Find
– Finding median
Graph algorithms and applications
Computational optimization
NP-completeness theory

49 CSCE-629 Course Summary
Basic notations, concepts, and techniques
Data manipulation
Graph algorithms and applications:
– DFS and BFS, and simple applications
– Connected components
– Topological sorting
– Strongly connected components
– Longest path in a DAG
Computational optimization
NP-completeness theory

50 CSCE-629 Course Summary
Basic notations, concepts, and techniques
Data manipulation
Graph algorithms and applications
Computational optimization:
– Maximum bandwidth paths
– Dijkstra's algorithm (shortest path)
– Kruskal's algorithm (MST)
– Bellman-Ford algorithm (shortest path)
– Matching in bipartite graphs
– Sequence alignment
NP-completeness theory

51 CSCE-629 Course Summary
Basic notations, concepts, and techniques
Data manipulation
Graph algorithms and applications
Computational optimization
NP-completeness theory:
– P and polynomial-time computation
– Definition of NP, membership in NP
– Polynomial-time reducibility
– NP-hardness and NP-completeness
– Proving NP-hardness and NP-completeness
– NP-complete problems: SAT, IS, VC, Partition

