Outline
Architecture of Hadoop Distributed File System
Hadoop usage at Facebook
Ideas for Hadoop-related research
Hadoop, Why?
Need to process multi-petabyte datasets
Expensive to build reliability into each application
Nodes fail every day
– Failure is expected, rather than exceptional
– The number of nodes in a cluster is not constant
Need common infrastructure
– Efficient, reliable, Open Source (Apache License)
The above goals are the same as Condor's, but
– Workloads are IO bound, not CPU bound
Hive, Why?
Need a multi-petabyte warehouse
Files are insufficient data abstractions
– Need tables, schemas, partitions, indices
SQL is highly popular
Need for an open data format
– RDBMS have a closed data format
– flexible schema
Hive is a Hadoop subproject!
5 Hadoop & Hive History Dec 2004 – Google GFS paper published July 2005 – Nutch uses MapReduceFeb 2006 – Becomes Lucene subprojectApr 2007 – Yahoo! on 1000-node clusterJan 2008 – An Apache Top Level ProjectJul 2008 – A 4000 node test clusterSept 2008 – Hive becomes a Hadoop subproject
Who uses Hadoop?
Amazon/A9
Facebook
Google
IBM
Joost
Last.fm
New York Times
Powerset
Veoh
Yahoo!
Commodity Hardware
Typically a 2-level architecture
– Nodes are commodity PCs
– Around 40 nodes per rack
– Uplink from each rack is 3-4 gigabit
– Rack-internal is 1 gigabit
Goals of HDFS
Very Large Distributed File System
– 10K nodes, 100 million files, 10 PB
Assumes Commodity Hardware
– Files are replicated to handle hardware failure
– Detects failures and recovers from them
Optimized for Batch Processing
– Data locations exposed so that computations can move to where the data resides
– Provides very high aggregate bandwidth
Runs in user space on heterogeneous operating systems
HDFS Architecture
Read path: (1) the Client sends a filename to the NameNode, (2) the NameNode replies with the block ids and the DataNodes holding each block, (3) the Client reads the data directly from those DataNodes. DataNodes report cluster membership to the NameNode.
NameNode: maps a file to a file-id and a list of DataNodes
DataNode: maps a block-id to a physical location on disk
SecondaryNameNode: periodically merges the transaction log
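To make the read path concrete, here is a minimal sketch using the public Hadoop FileSystem API; the path is illustrative and the cluster address is assumed to come from the usual core-site.xml. Opening the file is the NameNode round trip; the stream then pulls bytes straight from the DataNodes.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsRead {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();  // picks up fs.default.name from core-site.xml
            FileSystem fs = FileSystem.get(conf);
            // open() asks the NameNode for block ids and DataNode locations;
            // the returned stream reads the data directly from the DataNodes.
            try (FSDataInputStream in = fs.open(new Path("/user/demo/input.txt"))) {
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) > 0) {
                    System.out.write(buf, 0, n);
                }
            }
            System.out.flush();
        }
    }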
Distributed File System
Single namespace for the entire cluster
Data Coherency
– Write-once-read-many access model
– Clients can only append to existing files
Files are broken up into blocks
– Typically 128 MB block size
– Each block replicated on multiple DataNodes
Intelligent Client
– Client can find the location of blocks
– Client accesses data directly from the DataNode
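The "intelligent client" behavior is visible through the public API: getFileBlockLocations returns, per block, the DataNodes holding a replica, which is also what Map/Reduce uses to schedule computation near the data. The path here is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlockLocations {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FileStatus status = fs.getFileStatus(new Path("/user/demo/big.log"));
            // One BlockLocation per block, listing the hosts that hold a replica.
            for (BlockLocation loc : fs.getFileBlockLocations(status, 0, status.getLen())) {
                System.out.println("offset " + loc.getOffset() + " -> "
                        + String.join(",", loc.getHosts()));
            }
        }
    }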
NameNode Metadata
Metadata in Memory
– The entire metadata is in main memory
– No demand paging of metadata
Types of metadata
– List of files
– List of blocks for each file
– List of DataNodes for each block
– File attributes, e.g. creation time, replication factor
A Transaction Log
– Records file creations, file deletions, etc.
DataNode
A Block Server
– Stores data in the local file system (e.g. ext3)
– Stores metadata of a block (e.g. CRC)
– Serves data and metadata to clients
Block Report
– Periodically sends a report of all existing blocks to the NameNode
Facilitates Pipelining of Data
– Forwards data to other specified DataNodes
Block Placement
Current Strategy
– One replica on the local node
– Second replica on a remote rack
– Third replica on the same remote rack
– Additional replicas are randomly placed
Clients read from the nearest replica
Would like to make this policy pluggable (see the sketch below)
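A minimal sketch of what a pluggable policy might look like. Everything here is hypothetical and invented for illustration (the class, the chooseTargets signature, and the "rack/host" node encoding); it is not the HDFS internal API.

    import java.util.*;

    /** Hypothetical placement sketch; nodes are encoded as "rack/host" strings. */
    public class PlacementSketch {
        static String rackOf(String node) { return node.substring(0, node.indexOf('/')); }

        /** First replica on the writer's node, second and third on one remote rack, rest anywhere. */
        static List<String> chooseTargets(String writer, int replicas, List<String> nodes) {
            List<String> targets = new ArrayList<>();
            targets.add(writer);
            String remoteRack = null;
            for (String n : nodes) {
                if (targets.size() >= replicas) break;
                boolean remote = !rackOf(n).equals(rackOf(writer));
                if (remoteRack == null && remote) remoteRack = rackOf(n);
                if (!targets.contains(n)
                        && (targets.size() >= 3 || (remote && rackOf(n).equals(remoteRack)))) {
                    targets.add(n);
                }
            }
            return targets;
        }

        public static void main(String[] args) {
            List<String> nodes = Arrays.asList("r1/h1", "r1/h2", "r2/h3", "r2/h4", "r3/h5");
            System.out.println(chooseTargets("r1/h1", 3, nodes)); // [r1/h1, r2/h3, r2/h4]
        }
    }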
Data Correctness
Use checksums to validate data
– Use CRC32
File Creation
– Client computes a checksum per 512 bytes
– DataNode stores the checksum
File Access
– Client retrieves the data and checksum from the DataNode
– If validation fails, the client tries other replicas
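Conceptually the checksumming works like the sketch below: one CRC32 (from the standard java.util.zip package) per 512-byte chunk. The helper class and layout are illustrative, not HDFS internals.

    import java.util.zip.CRC32;

    public class ChunkChecksums {
        /** One CRC32 per bytesPerChecksum-sized chunk of the data. */
        static long[] checksumChunks(byte[] data, int bytesPerChecksum) {
            int chunks = (data.length + bytesPerChecksum - 1) / bytesPerChecksum;
            long[] sums = new long[chunks];
            CRC32 crc = new CRC32();
            for (int i = 0; i < chunks; i++) {
                crc.reset();
                int off = i * bytesPerChecksum;
                crc.update(data, off, Math.min(bytesPerChecksum, data.length - off));
                sums[i] = crc.getValue();  // stored by the DataNode, re-verified by the client on read
            }
            return sums;
        }

        public static void main(String[] args) {
            byte[] data = new byte[1300];  // toy "block": 3 chunks of <= 512 bytes
            System.out.println(checksumChunks(data, 512).length + " checksums");  // prints "3 checksums"
        }
    }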
NameNode Failure
A single point of failure
Transaction log stored in multiple directories
– A directory on the local file system
– A directory on a remote file system (NFS/CIFS)
Need to develop a real HA solution
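For reference, a sketch of how the redundant log directories are configured: the era-appropriate property name is dfs.name.dir, which takes a comma-separated list of directories, and the NameNode writes its image and edit log to every one of them. The paths below are illustrative, and in practice this is set in the site configuration file rather than in code.

    import org.apache.hadoop.conf.Configuration;

    public class NameDirsExample {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // One local directory plus one on an NFS mount, so a copy of the
            // transaction log survives the loss of the NameNode's local disk.
            conf.set("dfs.name.dir", "/local/hadoop/name,/mnt/nfs/hadoop/name");
            System.out.println(conf.get("dfs.name.dir"));
        }
    }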
Data Pipelining
Client retrieves a list of DataNodes on which to place replicas of a block
Client writes the block to the first DataNode
The first DataNode forwards the data to the next DataNode in the pipeline
When all replicas are written, the client moves on to write the next block in the file
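A conceptual sketch of the chaining, not the real DataNode transfer protocol: each node in the pipeline stores the block locally, then forwards the same bytes to the next node. Local files stand in for DataNodes here.

    import java.io.*;
    import java.util.*;

    public class PipelineSketch {
        /** Write the block to the head of the pipeline, then forward it down the chain. */
        static void writeThroughPipeline(byte[] block, List<File> pipeline) throws IOException {
            if (pipeline.isEmpty()) return;  // all replicas written
            try (OutputStream out = new FileOutputStream(pipeline.get(0))) {
                out.write(block);            // store this replica locally
            }
            // "forward" the data to the next DataNode in the pipeline
            writeThroughPipeline(block, pipeline.subList(1, pipeline.size()));
        }

        public static void main(String[] args) throws IOException {
            byte[] block = "example block contents".getBytes("UTF-8");
            writeThroughPipeline(block, Arrays.asList(
                    new File("replica1.bin"), new File("replica2.bin"), new File("replica3.bin")));
        }
    }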
Rebalancer
Goal: % disk full on DataNodes should be similar
Usually run when new DataNodes are added
Cluster stays online while the Rebalancer is active
Rebalancer is throttled to avoid network congestion
Command line tool (see the example below)
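A typical invocation; the threshold value (the allowed deviation from average disk utilization, in percentage points) is illustrative:

    bin/hadoop balancer -threshold 10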
Hadoop Map/Reduce
The Map-Reduce programming model
– Framework for distributed processing of large data sets
– Pluggable user code runs in a generic framework
Common design pattern in data processing:
cat * | grep | sort | uniq -c | cat > file
input | map | shuffle | reduce | output
Natural for:
– Log processing
– Web search indexing
– Ad-hoc queries
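The canonical example of this pattern is word count, the Map/Reduce analogue of sort | uniq -c: map emits a (word, 1) pair per token, the shuffle groups the pairs by word, and reduce sums the counts. Input and output paths come from the command line.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);  // map: emit (word, 1)
                }
            }
        }

        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();  // reduce: sum the counts
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.addOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }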
Hadoop at Facebook
Production cluster
– 4800 cores, 600 machines, 16 GB per machine (April 2009)
– 8000 cores, 1000 machines, 32 GB per machine (July 2009)
– 4 SATA disks of 1 TB each per machine
– 2-level network hierarchy, 40 machines per rack
– Total cluster size is 2 PB, projected to grow to 12 PB in Q3 2009
Test cluster
– 800 cores, 16 GB per machine
Data Flow
Web Servers → Scribe Servers → Network Storage → Hadoop Cluster → Oracle RAC / MySQL
This is the architecture of our backend data warehousing system. It provides important information on the usage of our website, including, but not limited to, the number of page views of each page and the number of active users in each country. We generate 3 TB of compressed log data every day. All of this data is stored and processed by the Hadoop cluster, which consists of over 600 machines. Summaries of the log data are then copied to Oracle and MySQL databases to make them easy for people to access.
Hadoop and Hive Usage
Statistics:
– 15 TB of uncompressed data ingested per day
– 55 TB of compressed data scanned per day
– 3200+ jobs on the production cluster per day
– 80M compute minutes per day
Barrier to entry is reduced:
– 80+ engineers have run jobs on the Hadoop platform
– Analysts (non-engineers) are starting to use Hadoop through Hive
Job Scheduling
Current state of affairs
– FIFO and Fair Share schedulers
– Checkpointing and parallelism tied together
Topics for research
– Cycle-scavenging scheduler
– Separate checkpointing from parallelism
– Use resource matchmaking to support heterogeneous Hadoop compute clusters
– Scheduler and API for MPI workloads
Commodity Networks
Machines and software are commodity
Networking components are not
– High-end, costly switches are needed
– Hadoop assumes a hierarchical topology
Design a new topology based on commodity hardware
More Ideas for Research
Hadoop Log Analysis
– Failure prediction and root-cause analysis
Hadoop Data Rebalancing
– Based on access patterns and load
Best use of flash memory?