
1 An Overview

2 Credits  Author: Michael Guenther  Editor: Aaron Loucks  Dancing Elephants: Michael V. Shuman

3 What developers and architects see

4 What capacity planning folks see

5 What network folks see

6 What operations folks see

7

8 Learning the Hard Way

9 Introductions  Aaron Loucks, Senior Technical Operations Engineer, CCHA; ~11 months of active Hadoop admin experience  Michael Guenther, Technical Operations Team Lead, CCHA; ~16 months of active Hadoop admin experience

10 Learning the Hard Way  Early adoption of Hadoop has its own issues. The knowledge base is growing, but it's still pretty thin. Manning (finally) released their book, so now we have three Hadoop books. HBase has even less documentation available and no books yet. (Lars George's book is due in July. Probably. Hopefully.) Cloudera didn't officially support HBase until CDH3.

11 Playing Catch-Up  IS and Ops came to the game a bit later than development, so we had to play catch-up early on in the project.  We had to write a lot of our own tools and implement our own processes (rack awareness, log cleanup, metadata backups, deploying configs, etc.).  Additionally, we needed to learn a lot about Linux system details and network setup and configuration.

12 New Admin Blues  Tech Ops (Aaron and I) aren't part of the IS department. This might be different at your company; some places, Ops is part of IS. The correct model depends on staffing and which group fulfills various enterprise roles.  Administering Hadoop/HBase created a problem for our traditional support model and for non-SA activity on the machines.  It took some time to get used to the new system and what was needed for us to run and maintain it, most of which changed with CDH3.

13 Enterprise-Wide Admins?  Since we have no centralized team administering all clouds, configuration and setup vary across the enterprise, creating additional challenges.  Staffing Hadoop administrators is difficult, especially since we aren't in the Bay Area.

14 Configuration File Management  Configuration file management can be a challenge. We settled on a central folder on a common mount and an ssh script to push configs (sketched below). Cloudera recommended using Puppet or Chef; we haven't made that jump yet. When the cluster goes heterogeneous, we will investigate further.
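
A minimal sketch of that push script, assuming a hosts list and a config folder on the common mount (all paths here are illustrative assumptions):

  #!/bin/sh
  # Push the shared Hadoop configs from the common mount to every node.
  # /mnt/common/hosts.txt and /mnt/common/hadoop-conf are assumed locations.
  for host in $(cat /mnt/common/hosts.txt); do
    scp -q /mnt/common/hadoop-conf/*.xml "$host":/etc/hadoop/conf/ \
      && echo "pushed configs to $host" \
      || echo "FAILED: $host" >&2
  done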

15 A Look At Our Cluster

16 Initial Cluster Setup  Prod started with 3 Masters and 20 DNs across 2 racks (10 and 10)  UAT started with 3 Masters and 15 DNs across 2 racks (5 and 10)

17 Current Hardware Breakdown  Name Node (HMaster), Secondary Name Node, and Job Tracker: Dell R710s, dual Intel quad-cores, 72GB of RAM, SCSI drives in a RAID configuration (~70GB)  Data Nodes (Task Tracker, Data Node, Region Server), 30 nodes: Dell R410s, dual Intel quad-cores, 64GB of RAM, 4x2TB 5400RPM SATA in a JBOD configuration  Zookeeper Servers (standalone mode): Dell R610s, dual Intel quad-cores, 32GB of RAM, SCSI drives in a RAID configuration (~70GB)

18 Cluster Network Information  Rack details: TOR switches are Cisco 4948s; 1GbE links to the TOR; 42U racks, ~32U usable for servers  Network: TORs are 1GbE to the core (Cisco 6509s). Channel bonding is possible if needed, and 10GbE is being investigated if needed. 192Gb backbone.

19 Growing Our Cluster  Early on, we were unsure of how many servers were needed for launch.  Capacity planning was a total unknown: Reserving data center space was very difficult. Budgeting for future growth was also difficult.

20 Ideal Growth Versus Reality  When we did add new servers, we ran into rack-space issues.  Our rack breakdown for UAT datanodes is 5, 10, and 15 servers.  Uneven datanode distribution isn't handled well by HBase and Hadoop.  Re-racking was not an option.  Options: turn off rack awareness, go with the uneven rack arrangement, or lie to Hadoop? We lied (sketched below).
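
One way to lie is a spoofed rack topology script, pointed to by topology.script.file.name in core-site.xml. Hadoop passes hostnames or IPs as arguments and expects one rack path per argument on stdout; the hostnames and the even split below are invented for illustration:

  #!/bin/sh
  # Hypothetical topology script reporting invented, evenly sized racks.
  for node in "$@"; do
    case "$node" in
      dn0[1-8]*) echo "/spoofed-rack1" ;;
      *)         echo "/spoofed-rack2" ;;
    esac
  done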

21 Server Build Out  Initially, we received new machines from the Sys Admins and we had to install Hadoop and HBase.  We worked with the SAs to create a Cobbler image for new types of Hadoop servers.  Now, new machines only need configuration files and are ready for use.

22 First Cluster Growth Issue  Since we had to spoof rack awareness, mis-replicated blocks started showing up.  Just run the balancer to fix it, right?  Not quite. The Hadoop balancer doesn't fix mis-replicated blocks.  You have to temporarily change the replication factor on the folders with mis-replicated blocks so the blocks get re-placed (sketched below).
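
A sketch of that fix, assuming a normal replication factor of 3 and /user/example as the affected folder (both are illustrative assumptions):

  # Raise replication so new, correctly placed replicas get created...
  hadoop fs -setrep -R -w 4 /user/example
  # ...then drop back down; the excess, mis-placed replicas are pruned.
  hadoop fs -setrep -R -w 3 /user/example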

23 Be Paranoid.

24 Paranoia – It's Not So Bad  Be paranoid. Hadoop punishes the unwary (trust us).  Two dfs.name.dir folders are a must.  Back up your Name Node image and edits on a regular basis (hourly).  Run hadoop fsck / -blocks once a day.  Run hadoop dfsadmin -report once a week. (Both sketched below.)
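
A sketch of those checks as cron entries (the times and log paths are arbitrary assumptions):

  # Daily block-level fsck; weekly cluster report on Sundays.
  0 6 * * *  hadoop fsck / -blocks   > /var/log/hadoop/fsck-$(date +\%F).log 2>&1
  0 7 * * 0  hadoop dfsadmin -report > /var/log/hadoop/dfs-report-$(date +\%F).log 2>&1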

25 Paranoia – You'll Get Used To It  Check your various web pages once a day: Name Node, Job Tracker, and HMaster.  Set up monitoring and alerting.  Set your HDFS trash option to something greater than the 10-minute default (see below).  Lock down your cluster: keep everyone off of it and provide a client server for user interaction. Fuse is a good addition to this server.
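
The trash setting lives in core-site.xml; a sketch assuming a 24-hour window (the value is in minutes, and 1440 is an arbitrary choice):

  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>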

26 Backing Up Your Cluster  Again, multiple dfs.name.dirs.  Run wget regularly against the Name Node image and edits URLs to create backups (sketched below).  Back up your config files prior to any major change (or even a minor one).  Save your job statistics to HDFS: mapred.job.tracker.persist.jobstatus.dir.  Data Node metadata.  Zookeeper data directory.
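
A minimal sketch of that wget backup, assuming the stock 0.20-era NameNode getimage servlet on port 50070 (the host name and backup path are assumptions):

  #!/bin/sh
  # Hourly NameNode metadata backup via the NN web UI.
  STAMP=$(date +%Y%m%d%H%M)
  wget -q -O /backup/nn/fsimage.$STAMP "http://nn01:50070/getimage?getimage=1"
  wget -q -O /backup/nn/edits.$STAMP   "http://nn01:50070/getimage?getedit=1"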

27 Learning by Experience (Sometimes Painful Experience)

28 Issues and Epiphanies  Pinning your yum repository. We had this for our Cloudera repo mirror list initially: ○ mirrorlist=http://archive.cloudera.com/redhat/cdh/3/mirrors That's the latest-and-greatest CDH3 build repo (B2, B3, B4, etc.). We are on CDH3B3, so we needed to pin our repo mirror list to this: ○ mirrorlist=http://archive.cloudera.com/redhat/cdh/3b3/mirrors
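
Put together, a sketch of the pinned repo file; only the mirrorlist line comes from our setup, while the file name, section name, and GPG lines are assumptions to check against Cloudera's docs:

  [cloudera-cdh3b3]
  name=Cloudera CDH3B3
  mirrorlist=http://archive.cloudera.com/redhat/cdh/3b3/mirrors
  gpgcheck=1
  gpgkey=http://archive.cloudera.com/redhat/cdh/RPM-GPG-KEY-cloudera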

29 Issues and Epiphanies  FSCK returned CORRUPT! When it happened, we initially thought this was much, much worse than it turned out to be. It's still bad, but only the files listed as corrupt are lost; it wasn't the swath of destruction we expected.  Cloudera might be able to work some magic, but you've almost certainly lost your file(s).

30 Issues and Epiphanies  Sudo permissions are key. We avoid using root whenever possible. All config files and folders are owned by our generic account. Our generic account has some nice permissions, though: ○ sudo -u hdfs/hbase/zookeeper/mapred * ○ sudo /etc/init.d/hbase-regionserver * ○ sudo /etc/init.d/hadoop * Root access might be extremely difficult to come by; it depends heavily on your business and IS policies.  These cover 95% of our day-to-day activity on the cloud.
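
As a sudoers fragment, that might look like the sketch below ("hadoopops" is a hypothetical name for the generic account; adjust the init-script globs to your packages):

  # Let the generic ops account act as the Hadoop service users...
  hadoopops ALL = (hdfs,hbase,zookeeper,mapred) NOPASSWD: ALL
  # ...and start/stop the daemons via their init scripts.
  hadoopops ALL = NOPASSWD: /etc/init.d/hadoop-*, /etc/init.d/hbase-*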

31 Issues and Epiphanies  Document EVERYTHING. It's a bit tiresome at first, but issues can go months between recurrences; write it down now and save yourself having to research it again. This is especially true when you are setting up your first cluster: there's a lot to learn, and it's really easy to forget. Pay special attention to the error message that goes along with the problem; HBase tends to have extremely vague exceptions and error logging.

32 Issues and Epiphanies  Fair Scheduler woes. While nice, the fair scheduler page has caused some serious problems: users grow frustrated when their jobs aren't running, so they increase the priority. Now their job is running, but others are being starved. We ended up restricting page access to a very small subset of users.

33 Issues and Epiphanies  Do NOT let dfs.name.dir run out of space. This is extremely bad news if you only have one dfs.name.dir. We have two (sketched below): ○ One Name Node local mount directory ○ One SAN mount (also our common mount)  You absolutely need monitoring in place to keep this from happening.
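
A sketch of the two-directory setup in hdfs-site.xml; the value is a comma-separated list, and both paths are illustrative assumptions:

  <property>
    <name>dfs.name.dir</name>
    <value>/data/dfs/nn,/san/common/dfs/nn</value>
  </property>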

34 Issues and Epiphanies  Smaller issues: Missing pid files? Users receive a zip-file exception when running a job. The CDH3 install/upgrade requires a local hadoop user. The Job Tracker complains about a port already being in use; check your mapred-site.xml.

35 Issues and Epiphanies  Memory settings – Hadoop. Set your Secondary Name Node and Name Node heaps to the same size. Set your starting JVM heap size equal to your max. Set your memory explicitly per process rather than using HADOOP_HEAPSIZE. Set your map and reduce heap sizes as final in your mapred-site.xml. (Sketched below.)
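
A sketch of explicit per-daemon heaps in hadoop-env.sh; every size here is an illustrative assumption, with -Xms pinned to -Xmx:

  export HADOOP_NAMENODE_OPTS="-Xms8g -Xmx8g ${HADOOP_NAMENODE_OPTS}"
  export HADOOP_SECONDARYNAMENODE_OPTS="-Xms8g -Xmx8g ${HADOOP_SECONDARYNAMENODE_OPTS}"
  export HADOOP_JOBTRACKER_OPTS="-Xms4g -Xmx4g ${HADOOP_JOBTRACKER_OPTS}"
  export HADOOP_DATANODE_OPTS="-Xms1g -Xmx1g ${HADOOP_DATANODE_OPTS}"

On the map/reduce side, the equivalent is mapred.child.java.opts in mapred-site.xml (e.g. -Xmx1g) marked with <final>true</final> so jobs can't override it.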

36 HBase Issues and Epiphanies  Set your hbase user's ulimits high – 64k is good (sketched below).  Sometimes HBase takes a really long time to start back up (two hours one Saturday).  The 0.89 WAL file corruption problem.  Keep your quorum off of your data nodes (off that rack, really).  HBase is extremely sensitive to network events, maintenance, connectivity issues, etc.
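
A sketch of the ulimit bump for the hbase user in /etc/security/limits.conf:

  hbase  soft  nofile  65536
  hbase  hard  nofile  65536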

37 HBase Issues and Epiphanies  Memory settings – HBase. Region Servers need a lot more memory than your HMaster. Region Servers can, and will, run out of memory and crash. Rowcounter is your friend for non-responsive region servers. Zookeeper should be set to 1GB of JVM heap. Talk to Cloudera about special JVM settings for your HBase daemons. (Sketched below.)
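
A sketch of those settings, assuming a CDH3-era hbase-env.sh that honors per-daemon *_OPTS variables (check your copy); all sizes and the table name are illustrative assumptions:

  # Region servers get far more heap than the master.
  export HBASE_MASTER_OPTS="-Xms4g -Xmx4g"
  export HBASE_REGIONSERVER_OPTS="-Xms12g -Xmx12g"
  export HBASE_ZOOKEEPER_OPTS="-Xms1g -Xmx1g"

  # Poke a suspect region server's regions with the bundled rowcounter job
  # ("mytable" is a hypothetical table name):
  hbase org.apache.hadoop.hbase.mapreduce.RowCounter mytable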

38 Questions? We Might Have Answers.

