Concurrent Algorithms

Summing the elements of an array
[Figure: a binary tree sums the leaves 7 3 15 10 13 18 6 4 in adjacent pairs, giving 10 25 31 10, then 35 41, then the total 76 at the root.]
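The pairwise summation in the figure can be sketched in Python. This is only an illustration: the thread pool stands in for the parallel processors, and the function name is ours, not from the slides.

```python
# Sketch of tree-based (pairwise) summation; each level of the tree
# could run on separate processors, here simulated by a thread pool.
from concurrent.futures import ThreadPoolExecutor

def tree_sum(a):
    """Sum a list by repeatedly adding adjacent pairs in parallel."""
    with ThreadPoolExecutor() as pool:
        while len(a) > 1:
            pairs = [a[i:i + 2] for i in range(0, len(a), 2)]
            a = list(pool.map(sum, pairs))   # one level of the tree
    return a[0]

print(tree_sum([7, 3, 15, 10, 13, 18, 6, 4]))  # 76, as in the figure
```

Each pass halves the number of active additions, which is why fewer processors are busy on each successive level.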

Parallel sum and parallel prefix sum
It's relatively easy to see how to sum the elements of an array in a parallel fashion. This is a special case of a reduce operation: combining a number of values into a single value.
It's harder to see how to do a prefix (cumulative) sum, which maps, for example, the list [3, 1, 4, 1, 6] to [3, 4, 8, 9, 15]. This is a special case of what is sometimes called a scan operation. An example is shown on the next slide.
The algorithm is done in two passes:
The first pass is "up" the tree, retaining the summands.
The second pass is "down" the tree.
Note: These two examples are from Principles of Parallel Programming by Calvin Lin and Lawrence Snyder.
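The two-pass scan described above can be sketched sequentially; this is the standard up-sweep/down-sweep formulation (often attributed to Blelchoch-style work-efficient scans), not code from the slides, and on a real machine each inner loop would run across processors in parallel.

```python
# Sketch of the two-pass prefix sum: an "up" pass builds partial sums
# in a tree, and a "down" pass pushes prefix values back to the leaves.
def prefix_sum(values):
    """Inclusive prefix sum via up-sweep and down-sweep over a tree."""
    n = 1
    while n < len(values):
        n *= 2
    a = values + [0] * (n - len(values))      # pad to a power of two
    # Up-sweep: sum adjacent subtrees, retaining the summands.
    d = 1
    while d < n:
        for i in range(0, n, 2 * d):          # parallel on real hardware
            a[i + 2 * d - 1] += a[i + d - 1]
        d *= 2
    # Down-sweep: distribute prefixes back down (yields exclusive scan).
    a[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(0, n, 2 * d):          # parallel on real hardware
            t = a[i + d - 1]
            a[i + d - 1] = a[i + 2 * d - 1]
            a[i + 2 * d - 1] += t
        d //= 2
    # Exclusive scan + original element = inclusive scan.
    return [a[i] + values[i] for i in range(len(values))]

print(prefix_sum([3, 1, 4, 1, 6]))  # [3, 4, 8, 9, 15]
```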

Summing the elements of an array
[Figure: the same tree, annotated for both passes. Up pass: leaves 7 3 15 10 13 18 6 4 combine pairwise (10 = 7+3, 25 = 15+10, 31 = 13+18, 10 = 6+4), then 35 = 10+25 and 41 = 31+10, then 76 = 35+41 at the root. Down pass: partial sums flow back down (0+35, 0+10, 35+31, 10+15, 41+13, ...), yielding the prefix sums 7 10 25 35 48 66 72 76 at the leaves.]

Using parallel prefix sum to filter
Problem: Filter an array of N values, yielding an array of M values, where M <= N.
Apply the filter predicate to each element of the sequence (in parallel), yielding 1's and 0's.
Starting with -1, perform a prefix sum on the resulting list. For each element that passed the filter, its prefix-sum value gives its index in the output array.
Example: selecting only the even numbers
Input:                3  1  4  1  5  9  2  6  5  3  6
Flags:                0  0  1  0  0  0  1  1  0  0  1
Prefix sums (from -1): -1 -1  0  0  0  0  1  2  2  2  3
Result: a[0]=4, a[1]=2, a[2]=6, a[3]=6
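The steps above can be sketched directly; this is an illustrative single-machine version (function names are ours), using itertools.accumulate as the prefix sum.

```python
# Sketch of filtering via prefix sum: flag each element, prefix-sum the
# flags starting from -1, then use the sums as output indices.
from itertools import accumulate

def parallel_filter(values, keep):
    flags = [1 if keep(x) else 0 for x in values]    # parallel map step
    index = list(accumulate(flags, initial=-1))[1:]  # prefix sum from -1
    out = [None] * (index[-1] + 1)
    for x, f, i in zip(values, flags, index):
        if f:                     # kept elements scatter to their index
            out[i] = x
    return out

print(parallel_filter([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 6],
                      lambda x: x % 2 == 0))  # [4, 2, 6, 6]
```

The scatter loop is also embarrassingly parallel, since each kept element writes to a distinct index.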

Batcher's bitonic sort
Batcher's bitonic sort is a sorting algorithm with the following characteristics:
It's a variation of mergesort.
It's designed for 2^n processors.
It fully occupies all 2^n processors, unlike array sum, which uses fewer processors on each pass.
I'm not going to go through this algorithm; I just want you to be able to say you've heard of it.

MapReduce
MapReduce is a patented technique developed by Google to deal with huge data sets on clusters of computers. From Wikipedia:
"Map" step: The master node takes the input, chops it up into smaller sub-problems, and distributes those to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes the smaller problem and passes the answer back to its master node.
"Reduce" step: The master node then takes the answers to all the sub-problems and combines them to produce the output: the answer to the problem it was originally trying to solve.
Hadoop is a free Apache implementation of MapReduce.

Basic idea of MapReduce
In MapReduce, the programmer has to write only two functions, and the framework takes care of everything else:
The Map function is applied (in parallel) to each item of data, producing a list of key-value pairs.
The framework collects all the lists and groups the key-value pairs by key.
The Reduce function is applied (in parallel) to each group, returning either a single value or nothing.
The framework collects all the results.
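The division of labor above can be sketched as a toy, single-machine simulation; the function names are illustrative, and a real framework (such as Hadoop) would distribute the loops across a cluster.

```python
# Toy sketch of the framework's job: map every item, group the emitted
# pairs by key, then reduce each group. Both loops are parallelizable.
from collections import defaultdict

def map_reduce(data, map_fn, reduce_fn):
    groups = defaultdict(list)
    for item in data:                       # Map step (parallel per item)
        for key, value in map_fn(item):
            groups[key].append(value)
    return {key: reduce_fn(key, values)     # Reduce step (parallel per key)
            for key, values in groups.items()}

# Word count expressed with just the two user-supplied functions:
counts = map_reduce(
    ["the cat", "the hat"],
    lambda line: [(w, 1) for w in line.split()],
    lambda key, values: sum(values),
)
print(counts)  # {'the': 2, 'cat': 1, 'hat': 1}
```

Note how the user code never mentions grouping, distribution, or fault tolerance; that separation is the whole point of the framework.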

MapReduce picture
Source: http://people.apache.org/~rdonkin/hadoop-talk/diagrams/map-reduce.png

Example: Counting words (Python)
The following Python program counts how many times each word occurs in a set of data, and returns the list of words and their counts.

def mapper(key, value):
    # Emit a count of 1 for every word in the line
    words = key.split()
    for word in words:
        Wmr.emit(word, '1')

def reducer(key, iter):
    # Add up all the 1's emitted for this word
    total = 0
    for s in iter:
        total = total + int(s)
    Wmr.emit(key, str(total))

Example: Counting words (Java)

/* Mapper for word count */
class Mapper {
    public void mapper(String key, String value) {
        String words[] = key.split(" ");
        for (int i = 0; i < words.length; i++) {
            Wmr.emit(words[i], "1");
        }
    }
}

/* Reducer for word count */
class Reducer {
    public void reducer(String key, WmrIterator iter) {
        int sum = 0;
        while (iter.hasNext()) {
            sum += Integer.parseInt(iter.next());
        }
        Wmr.emit(key, Integer.toString(sum));
    }
}

Example: Average movie ratings

#!/usr/bin/env python
def mapper(key, value):
    # Round each average rating to the nearest half-star bin
    avgRating = float(value)
    binRating = 0.0
    if 0 < avgRating < 1.25:
        binRating = 1.0
    elif 1.25 <= avgRating < 1.75:
        binRating = 1.5
    elif 1.75 <= avgRating < 2.25:
        binRating = 2.0
    elif 2.25 <= avgRating < 2.75:
        binRating = 2.5
    elif 2.75 <= avgRating < 3.25:
        binRating = 3.0
    elif 3.25 <= avgRating < 3.75:
        binRating = 3.5
    elif 3.75 <= avgRating < 4.25:
        binRating = 4.0
    elif 4.25 <= avgRating < 4.75:
        binRating = 4.5
    elif 4.75 <= avgRating <= 5.0:
        binRating = 5.0
    else:
        binRating = 99.0
    Wmr.emit(str(binRating), key)

#!/usr/bin/env python
def reducer(key, iter):
    # Count how many movies fall into this rating bin
    count = 0
    for s in iter:
        count = count + 1
    Wmr.emit(key, str(count))

The End