
Map Reduce Workshop Monday November 12th, 2012




1 Map Reduce Workshop - Monday, November 12th, 2012
Pre-Workshop Deck

2 Overview: Schedule
5:30-6:00  Networking, Software Install, Cloud Setup
6:00-6:10  M/R and Workshop Overview - John Verostek
6:10-7:20  Map/Reduce Tutorial - Vipin Sachdeva (IBM Research Labs)
The Map/Reduce programming framework will be introduced with a hands-on word count example in Python. Next, the basics of Hadoop Map/Reduce and the Hadoop file system (HDFS) will be covered. Time permitting, a demo will be given of running the Python M/R program on a locally installed Hadoop.
7:20-7:30  Short Break
7:30-8:45  Applications using Amazon Elastic M/R - J Singh (DataThinks.org)
A Facebook application will also be walked through. For this dataset, everyone who attends the workshop will have the option to sign into a workshop prep page with their Facebook account and give permission to share their likes. The data is automatically anonymized and sent to an Amazon S3 file. The exercise will find likes common to people in the sample. What might someone do after the analysis of such data? Design an advertising campaign, perhaps (but designing an ad campaign is not part of the workshop).

3 Pre-Workshop Checklist
Cloud: Amazon Web Services will be used.
Facebook Likes Exercise: An app is used to anonymously collect "Likes"; this dataset will be used for the M/R exercise.
Software Installation: Python will be used to run programs locally. Download 2.7.3 and set the environment variables.
Code and Datasets: Links are included to files located up on Amazon.
Running Python Locally: Various screenshots walk through running a regular (not Map/Reduce) wordcount program.

4 Cloud Account Please sign up for an account here: Amazon's Elastic Map/Reduce will be used. The following 6-minute AWS video shows a wordcount example that is somewhat similar to what will be used in the workshop. The video contains enough information to get a M/R job running within about 15 minutes.

5 We will be using Elastic MapReduce and S3

6 Python Scripts and Wikipedia Datasets
Scripts (What / URL):
Word Counter (Non-Map/Reduce)
Word Counter Mapper
Sorter for Windows machines (sorter.py, for folks with Windows machines)
Word Counter Reducer
After loading a link in the browser, once the text appears, use "right-click" > "save as" to save the file.
Datasets (Size / URL):
Very Small  2 KB
Small       10 MB
Medium      76 MB
Large       1 GB
Very Large  8 GB
These five files of different sizes were created by Vipin to test how long each one takes to run locally. Please note the 8 GB file may take a while to download.
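For orientation before the workshop, a Hadoop-Streaming-style word-count mapper and reducer in Python generally follow the pattern sketched below. This is only a sketch of the pattern, not the actual workshop files; the names mapper.py and reducer.py and the tab-separated output format are assumptions, so use the linked scripts above for the exercise itself.

#!/usr/bin/env python
# mapper.py - illustrative sketch only (not the workshop file)
import sys
import re

for line in sys.stdin:
    for word in re.split("\W+", line):   # split on non-word characters
        if word:
            print "%s\t%d" % (word, 1)   # emit one (word, 1) pair per word

#!/usr/bin/env python
# reducer.py - illustrative sketch only (not the workshop file)
# assumes its input arrives sorted by word, as Hadoop Streaming provides
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print "%s\t%d" % (current_word, current_count)
        current_word, current_count = word, int(count)
if current_word is not None:
    print "%s\t%d" % (current_word, current_count)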

7 Facebook Likes Exercise
Please sign into a workshop prep page with your Facebook account and give permission to share your likes. If you don't have a Facebook account, then no worries, but if everyone opts out we won't have much data to work with. All collected data will be anonymized and then deleted after the workshop is done. The exercise will find likes common to people in the sample. What might someone do after the analysis of such data? Design an advertising campaign, perhaps (though designing an ad campaign is not part of the workshop). The app that collects Facebook data is:

8 You should be able to just copy and paste the URL into a browser where you have Facebook set up.
Then something like this should appear after it has pulled over the Likes.

9 Python Download
Mac/Linux: Python comes pre-installed (you should be able to run it as-is).
Windows: if you do not already have Python installed, use the following website to download and install it:
DOWNLOAD VERSION 2.7.3
DO NOT DOWNLOAD VERSION 3.3.0
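Once installed (or on Mac/Linux out of the box), you can confirm the version from a command prompt; it should report 2.7.x:

> python -V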

10

11 Python Installation

12 Python Environment Variables
There are many online instructions that explain this, such as the link below:
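As a sketch of what those instructions cover, and assuming the default Windows install directory C:\Python27, you can add Python to the PATH for the current Command Prompt session like this (to make it permanent, append the same ;C:\Python27 to the PATH variable under System Properties > Advanced > Environment Variables):

> set PATH=%PATH%;C:\Python27
> python -V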

13 More Python Help

14 We will be using the Command Line
Get to the directory where your code/data is located.
cd <folder> to change into a directory
cd .. to go up a level
e.g. cd users\john\desktop\python
Ctrl-C to kill a running program
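For example, after downloading the scripts and a dataset into a folder (the path below is just the example from this slide), a quick check that everything is in place might look like:

> cd users\john\desktop\python
> dir
> python -V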

15 Wordcount - Very Simple Python Script – seq.py
#!/usr/bin/env python
import sys                          # system functions (stdin)
import re                           # regular expressions library

counts = {}
for line in sys.stdin:              # read from stdin, one line at a time
    words = re.split("\W+", line)   # split on non-word characters (\W+)
    for word in words:
        counts[word] = counts.get(word, 0) + 1
for x, y in counts.items():
    print x, y                      # Python 2 print statement

Let's try running it!

> python ./seq.py < input.txt > output.txt

seq.py      the Python code
input.txt   the Wikipedia dataset filename
output.txt  the output file (can be any name)

Make sure you have the < and > pointing in the right directions, or you could overwrite your input file.
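For comparison with seq.py, the Map/Reduce version of the same word count can be simulated locally by piping the mapper output through a sort step and into the reducer. The command below is a sketch; the file names mapper.py and reducer.py are assumptions based on the script list on the earlier slide, and on Windows sorter.py is intended to stand in for the Unix sort command (check its usage in the downloaded file).

Mac OS/Linux/Cygwin:
> python ./mapper.py < input.txt | sort | python ./reducer.py > output.txt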

16

17 > python ./seq.py < input.txt
With no output file specified, the output goes to the screen. Press Ctrl-C to break out.

18 Time
Mac OS/Linux/Cygwin:
> time python ./seq.py < input.txt

Windows: If you run the datasets locally and time how long they take (with output to a file, not the screen), then please send me the results (including laptop info, etc.), as we are making a composite slide of "time vs. file size."

Category    Size    Time
Very Small  2 KB
Small       10 MB
Medium      80 MB
Large       1 GB
Very Large  8 GB

My Windows laptop took 4 hours.*
* I noted the start time, 11:46 PM, ran it overnight, and went with the timestamp on the output file: 03:54 AM.
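Since Windows has no built-in equivalent of the Unix time command, one option (a sketch, not part of the original deck; the script and file names are assumptions) is a small Python wrapper that launches seq.py and prints the elapsed wall-clock time:

#!/usr/bin/env python
# time_run.py - hypothetical helper to time seq.py on Windows
import subprocess
import time

start = time.time()
with open("input.txt") as fin, open("output.txt", "w") as fout:
    # equivalent to: python seq.py < input.txt > output.txt
    subprocess.call(["python", "seq.py"], stdin=fin, stdout=fout)
print "elapsed: %.1f seconds" % (time.time() - start)

Run it with:
> python time_run.py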

