1 Lecture 5
Books: “Hadoop in Action” by Chuck Lam; “An Introduction to Parallel Programming” by Peter Pacheco

2 Components of Hadoop (Chapter 3)
Managing files in HDFS
Analyzing components of the MapReduce framework
Reading and writing input and output data

3 Introduction We first cover HDFS, where you’ll store data that your Hadoop applications will process. Next we explain the MapReduce framework in more detail. In chapter 1 we’ve already seen a MapReduce program, but we discussed the logic only at the conceptual level. In this chapter we get to know the Java classes and methods, as well as the underlying processing steps. We also learn how to read and write using different data formats.

4 Working with files in HDFS
HDFS is a file system designed for large-scale distributed data processing under frameworks such as MapReduce. You can store a big data set of (say) 100 TB as a single file in HDFS, something that would overwhelm most other file systems. We discussed in chapter 2 how HDFS replicates the data for availability and distributes it over multiple machines to enable parallel processing. HDFS abstracts these details away and gives you the illusion that you’re dealing with a single file.

5 Working with HDFS
Because HDFS isn’t a native Unix file system, standard Unix file tools such as ls and cp don’t work on it, and neither do standard file read/write operations such as fopen() and fread(). On the other hand, Hadoop does provide a set of command-line utilities that work similarly to the Linux file commands.

6 Basic file commands Hadoop file commands take the form of
hadoop fs -cmd <args>
(in more recent versions: hdfs dfs -cmd <args>)
where cmd is the specific file command and <args> is a variable number of arguments. The command cmd is usually named after the corresponding Unix equivalent. For example, the command for listing files is
hadoop fs -ls

7 ADDING FILES AND DIRECTORIES
Before you can run Hadoop programs on data stored in HDFS, you need to put the data into HDFS first. Let’s assume you’ve already formatted and started an HDFS file system. (For learning purposes, we recommend a pseudo-distributed configuration as a playground.) Let’s create a directory and put a file in it. HDFS has a default working directory of /user/$USER, where $USER is your login user name. This directory isn’t automatically created for you, though, so let’s create it with the mkdir command. For the purpose of illustration, we use chuck; you should substitute your own user name in the example commands.
hadoop fs -mkdir /user/chuck

8 ls Let’s check on the directories with the ls command. hadoop fs -ls /
You’ll see a response showing the /user directory at the root (/) directory:
Found 1 items
drwxr-xr-x   - chuck supergroup   ...   /user
If you want to see all the subdirectories, in a way similar to Unix’s ls with the -r option, you can use Hadoop’s lsr command:
hadoop fs -lsr /
You’ll see all the files and directories recursively:
drwxr-xr-x   - chuck supergroup   ...   /user/chuck

9 Put files
The Hadoop command put copies files from the local file system into HDFS.
hadoop fs -put example.txt .
Note the period (.) as the last argument in the command above: it means that we’re putting the file into the default working directory, making the command equivalent to
hadoop fs -put example.txt /user/chuck

10 RETRIEVING FILES
The Hadoop command get does the exact reverse of put: it copies files from HDFS to the local file system. Let’s say we no longer have the example.txt file locally and we want to retrieve it from HDFS; we can run the command
hadoop fs -get example.txt .
to copy it into our local current working directory. Another way to access the data is to display it. The Hadoop cat command allows us to do that.
hadoop fs -cat example.txt
We can use Hadoop file commands with Unix pipes to send their output for further processing by other Unix commands.
hadoop fs -cat example.txt | head

11 DELETING FILES
You shouldn’t be too surprised by now that the Hadoop command for removing files is rm.
hadoop fs -rm example.txt
The rm command can also be used to delete empty directories.
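
These shell commands also have programmatic counterparts in Hadoop’s Java FileSystem API, which is useful when a program rather than a person needs to move data in and out of HDFS. The following is only a minimal sketch (the paths and file names are illustrative, not taken from the slides), mirroring the mkdir / put / get / rm sequence above:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsFileOps {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();            // picks up core-site.xml and friends
    FileSystem fs = FileSystem.get(conf);                // handle to the configured file system (HDFS here)

    Path dir = new Path("/user/chuck");
    fs.mkdirs(dir);                                      // hadoop fs -mkdir /user/chuck

    fs.copyFromLocalFile(new Path("example.txt"),        // hadoop fs -put example.txt .
                         new Path(dir, "example.txt"));

    fs.copyToLocalFile(new Path(dir, "example.txt"),     // hadoop fs -get example.txt .
                       new Path("example-copy.txt"));

    fs.delete(new Path(dir, "example.txt"), false);      // hadoop fs -rm example.txt

    fs.close();
  }
}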

12 LOOKING UP HELP
A list of Hadoop file commands, together with the usage and description of each command, is given in the appendix. For the most part, the commands are modeled after their Unix equivalents. You can execute hadoop fs (with no parameters) to get a complete list of all commands available in your version of Hadoop. You can also use help to display the usage and a short description of each command. For example, to get a summary of ls, execute
hadoop fs -help ls

13 Anatomy of a MapReduce program
As we have mentioned before, a MapReduce program processes data by manipulating (key/value) pairs in the general form
map: (K1, V1) ➞ list(K2, V2)
reduce: (K2, list(V2)) ➞ list(K3, V3)
Not surprisingly, this is an overly generic representation of the data flow. In this section we learn more about each stage in a typical MapReduce program.
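
For instance, in a typical word count job the mapper takes (byte offset of a line, the line of text) as its (K1, V1) input and emits a list of (word, 1) pairs as its (K2, V2) output; the reducer then receives (word, list(1, 1, ...)) and emits (word, total count) as (K3, V3).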


15 Hadoop data types
Despite our many discussions regarding keys and values, we have yet to mention their types. The MapReduce framework won’t allow them to be arbitrary classes. For example, although we can and often do talk about certain keys and values as integers, strings, and so on, they aren’t exactly standard Java objects such as Integer, String, and so forth. This is because the MapReduce framework has a defined way of serializing the key/value pairs to move them across the cluster’s network, and only classes that support this kind of serialization can function as keys or values in the framework.

16 Data types
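
Hadoop ships with Writable wrappers for the common Java types, including BooleanWritable, IntWritable, LongWritable, FloatWritable, DoubleWritable, Text (for UTF-8 strings), and NullWritable; the key types among these also implement WritableComparable so they can be sorted. As a minimal sketch (not taken from the slides; the class and its fields are invented for illustration), a custom value type only needs to implement Writable:
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class PointWritable implements Writable {
  private int x;
  private int y;

  public PointWritable() {}                              // Writable types need a no-arg constructor

  public PointWritable(int x, int y) { this.x = x; this.y = y; }

  public void write(DataOutput out) throws IOException {
    out.writeInt(x);                                     // serialize the fields in a fixed order
    out.writeInt(y);
  }

  public void readFields(DataInput in) throws IOException {
    x = in.readInt();                                    // deserialize in exactly the same order
    y = in.readInt();
  }
}
A type meant to be used as a key would instead implement WritableComparable and add a compareTo() method (plus a sensible hashCode(), so the default HashPartitioner distributes it evenly).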

17 Mapper
To serve as the mapper, a class implements the Mapper interface and extends the MapReduceBase class. The MapReduceBase class includes two methods that effectively act as the constructor and destructor for the class:
void configure(JobConf job) - In this function you can extract the parameters set either by the configuration XML files or in the main class of your application. It is called before any data processing begins.
void close() - As the last action before the map task terminates, this function should wrap up any loose ends: database connections, open files, and so on.

18 map
The Mapper interface is responsible for the data processing step. It uses Java generics of the form Mapper<K1, V1, K2, V2>, where the key classes and value classes implement the WritableComparable and Writable interfaces, respectively. Its single method processes an individual (key/value) pair:
void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter) throws IOException
The function generates a (possibly empty) list of (K2, V2) pairs for a given (K1, V1) input pair. The OutputCollector receives the output of the mapping process, and the Reporter provides the option to record extra information about the mapper as the task progresses.
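
As a sketch (not code from the slides; the class name, the word-count logic, and the "casesensitive" parameter are all invented for illustration), here is a complete mapper in this older org.apache.hadoop.mapred API that reads a parameter in configure(), emits (word, 1) pairs in map(), and tidies up in close():
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();
  private boolean caseSensitive;

  public void configure(JobConf job) {
    // Read a (hypothetical) parameter set in the driver or in the configuration XML files.
    caseSensitive = job.getBoolean("casesensitive", false);
  }

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    // key is the byte offset of the line; value is the line of text itself.
    String line = caseSensitive ? value.toString() : value.toString().toLowerCase();
    StringTokenizer tokens = new StringTokenizer(line);
    while (tokens.hasMoreTokens()) {
      word.set(tokens.nextToken());
      output.collect(word, ONE);                         // emit one (word, 1) pair per token
    }
  }

  public void close() throws IOException {
    // Nothing to release here; a real mapper would close files, connections, and so on.
  }
}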

19 Reducer
As with a mapper implementation, a reducer must extend the MapReduceBase class to allow for configuration and cleanup. In addition, it must implement the Reducer interface, which has the following single method:
void reduce(K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter) throws IOException
When the reducer task receives the output from the various mappers, it sorts the incoming data on the key of the (key/value) pair and groups together all values of the same key.
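
A matching reducer sketch (again illustrative rather than slide code) that sums the counts emitted by the mapper above:
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();                        // add up every count grouped under this word
    }
    output.collect(key, new IntWritable(sum));
  }
}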

20 Partitioner—redirecting output from Mapper
A common misconception among first-time MapReduce programmers is to use only a single reducer. After all, a single reducer sorts all of your data before processing, and who doesn’t like sorted data? Our discussions of MapReduce expose the folly of such thinking. With multiple reducers, we need some way of determining which reducer should receive each (key/value) pair output by a mapper. The default behavior is to hash the key to determine the reducer, and Hadoop enforces this strategy through the HashPartitioner class. Between the map and reduce stages, a MapReduce application must take the output from the mapper tasks and distribute the results among the reducer tasks. This process is typically called shuffling, because the output of a mapper on a single node may be sent to reducers across multiple nodes in the cluster.
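
As a sketch of a custom partitioner (the class and its routing rule are invented for illustration; HashPartitioner remains the default when nothing else is configured), the example below routes keys by their first character instead of by a hash of the whole key; it would be registered in the driver with conf.setPartitionerClass(FirstLetterPartitioner.class):
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class FirstLetterPartitioner implements Partitioner<Text, IntWritable> {

  public void configure(JobConf job) {
    // No per-job configuration needed for this sketch.
  }

  public int getPartition(Text key, IntWritable value, int numPartitions) {
    // Send every key that starts with the same (lowercased) character to the same reducer.
    String k = key.toString();
    char first = k.isEmpty() ? ' ' : Character.toLowerCase(k.charAt(0));
    return (first & Integer.MAX_VALUE) % numPartitions;
  }
}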

21 Combiner—local reduce
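
A combiner performs a local reduce on each node’s map output before that output is shuffled across the network to the reducers, which can sharply cut the volume of intermediate data transferred. Because a combiner’s input and output types must match the reducer’s intermediate types, a reducer whose operation is commutative and associative (summing counts, for example) can usually double as the combiner; in the driver sketch near the end of this section it is registered with conf.setCombinerClass(...).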

22 Reading and writing
Let’s see how MapReduce reads input data and writes output data, focusing on the file formats it uses. To enable easy distributed processing, MapReduce makes certain assumptions about the data it’s processing, but it also provides flexibility in dealing with a variety of data formats. Input data usually resides in large files, typically tens or hundreds of gigabytes or even more. One of the fundamental principles behind MapReduce’s processing power is the splitting of the input data into chunks. You can process these chunks in parallel using multiple machines; in Hadoop terminology these chunks are called input splits.

23 Data format
You’ll recall that MapReduce works on key/value pairs. So far we’ve seen that by default Hadoop considers each line of the input file to be a record, with the byte offset of the line as the key and the content of the line as the value. Your data may not all be recorded that way; Hadoop supports a few other data formats and allows you to define your own.
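
For example (an illustrative input, not one from the slides), if the input file begins with the two lines
hello world
hadoop in action
then the default TextInputFormat produces the records (0, "hello world") and (12, "hadoop in action"): each value is one line of text, and each key is the byte offset at which that line starts (11 characters plus a one-byte newline place the second line at offset 12). If the first tab-separated field of each line should serve as the key instead, KeyValueTextInputFormat can be used in its place.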

24 POPULAR INPUTFORMAT CLASSES
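
Commonly used InputFormat classes include TextInputFormat (the default: byte offset as key, line contents as value), KeyValueTextInputFormat (first tab-separated field as key, remainder of the line as value), SequenceFileInputFormat (for Hadoop’s binary SequenceFile format), and NLineInputFormat (like TextInputFormat, but each split contains a fixed number of lines).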

25 OutputFormat
MapReduce outputs data into files using the OutputFormat class, which is analogous to the InputFormat class. The output has no splits, as each reducer writes its output only to its own file. The output files reside in a common directory and are typically named part-nnnnn, where nnnnn is the partition ID of the reducer. RecordWriter objects format the output, just as RecordReader objects parse the format of the input.
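
To show where these pieces plug in, here is a driver sketch in the same older API (class names reused from the illustrative mapper and reducer sketches above; none of it is taken from the slides) wiring up the mapper, combiner, reducer, input and output formats, and the input and output paths:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCountDriver.class);
    conf.setJobName("wordcount");

    conf.setMapperClass(WordCountMapper.class);          // from the mapper sketch above
    conf.setCombinerClass(WordCountReducer.class);       // optional local reduce
    conf.setReducerClass(WordCountReducer.class);        // from the reducer sketch above

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setInputFormat(TextInputFormat.class);          // (offset, line) records - the default
    conf.setOutputFormat(TextOutputFormat.class);        // tab-separated key/value lines - the default

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);                              // each reducer writes one part-nnnnn file under args[1]
  }
}
Run with an input path and a not-yet-existing output directory as its two arguments, it leaves one part-nnnnn file per reducer under that output directory.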

26 THE END Thank you

