
1 EE324 DISTRIBUTED SYSTEMS FALL 2015 Google File System

2 Last Lecture
- DFS programming assignment overview
- Submission via SVN
- A virtual machine image will be provided for Parts 2-4; install VMware Player or VirtualBox (both are free).
- Part 1: 40% (libfuse not needed)
- Parts 2 and 3: 30% each
- Part 4: extra credit, 15-20% (why extra credit?)

3 Reading
- "The Google File System", Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung, SOSP 2003
- http://queue.acm.org/detail.cfm?id=1594206

4 Overview
- Google File System
- Next lecture
  - Data-intensive computing
  - MapReduce

5 Typical HPC Machine
- Compute nodes: high-end processor(s), lots of RAM
- Network: specialized, very high performance
- Storage server: RAID-based disk array
[Diagram: compute nodes connected to storage servers over the network]

6 Google’s Datacenter

7 Google’s Server

8 Google Data Centers
- The Dalles, Oregon
  - Hydroelectric power @ 2¢ per kWh
  - 50 megawatts: enough to power 60,000 homes
- Engineered for maximum modularity and power efficiency
  - Container: 1,160 servers, 250 kW
  - Server: 2 disks, 2 processors

9 Typical Cluster Machine
- Compute + storage nodes: medium-performance processors, modest memory, 1-2 disks
- Network: conventional Ethernet switches, 10 Gb/s within a rack, 100 Gb/s across racks
[Diagram: compute + storage nodes connected by commodity switches]

10 Google Platform Characteristics
- Lots of cheap PCs, each with disk and CPU
  - High aggregate storage capacity
  - Spread search processing across many CPUs
- How to share data among PCs?

11 Canonical Data Center Architecture
[Diagram: core (L3) switches, aggregation (L2) switches, top-of-rack/edge (L2) switches, application servers]

12 ToR switch

13 Google Platform Characteristics
- More than 1 million servers (PCs); Google envisions 10 million servers.
- Many modes of failure for each PC:
  - Application bugs, OS bugs
  - Human error
  - Disk failure, memory failure, network failure, power supply failure
  - Connector failure
- Monitoring, fault tolerance, and auto-recovery are essential.

14 Google File System (GFS) Goals
- Provide fault tolerance while running on inexpensive hardware
- Deliver high aggregate performance to a large number of clients
- Other common goals: performance, scalability, reliability, and availability

15 GFS Design Considerations (1)
- Component failures are the norm rather than the exception
  - Many cheap, unreliable components
  - A disk failure rate of ~1% across 100,000 disks means ~1,000 failed disks
- Monitoring, error detection, fault tolerance, and automatic recovery are a must!

16 Disk Failure
- All disks were high-performance enterprise disks (SCSI or FC), whose datasheet MTTF of 1,200,000 hours suggests a nominal annual failure rate of at most 0.75%.
- Actual annual disk replacement rates exceed 1%, with 2-4% common and up to 12% observed.
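
The jump from a 1,200,000-hour MTTF to a nominal annual failure rate of "at most 0.75%" is just unit conversion. A quick Python check of the slide's own figures:

    # A datasheet MTTF of 1,200,000 hours implies a nominal annual failure rate of
    # roughly (hours per year) / MTTF = 8,760 / 1,200,000 ~= 0.73%, i.e. "at most 0.75%".
    HOURS_PER_YEAR = 24 * 365

    def nominal_afr(mttf_hours: float) -> float:
        """Annual failure rate implied by a datasheet MTTF figure."""
        return HOURS_PER_YEAR / mttf_hours

    print(f"{nominal_afr(1_200_000):.2%}")   # -> 0.73%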

17 Life-cycle failure pattern of hard drives
- Expected pattern (figure)

18 Disk Failure Rate at Google [FAST 2007]

19 GFS Design Considerations (2)
- Files are huge.
  - Google does not manage billions of KB-sized files; each file contains many web objects.
- Most writes are appends; once written, files are mostly read sequentially. (Think about what Google does.)
  - Appending becomes the main focus of performance optimization, and its atomicity guarantee is important.
  - Client-side caching becomes much less important.

20 GFS Design Considerations (3)
- Co-design the applications and the file system API for maximum benefit
  - Relaxed consistency model
  - Atomic append
  - Applications handle some things the file system does not provide.
- ~1,000 storage nodes per cluster in 2004

21 Design Criteria Summarized
- Detect, tolerate, and recover from failures automatically
- Large files, >= 100 MB in size
- Handle > 10^7 files
- Large, streaming reads (>= 1 MB per read): read once
- Large, sequential writes that append: write once
- Concurrent appends by multiple clients (e.g., producer-consumer queues)
  - Want atomicity for appends without synchronization overhead among clients

22 GFS: Architecture (1)

23 GFS: Architecture (2)
- One master server (state replicated on backups)
- Many chunkservers (100s-1000s)
  - Spread across racks; intra-rack bandwidth is greater than inter-rack bandwidth
  - Chunk: a 64 MB portion of a file, identified by a 64-bit, globally unique ID
- Many clients accessing the same and different files stored on the same cluster

24 GFS files
- Divided into fixed-size chunks (64 MB)
- Each chunk is identified by an immutable, globally unique 64-bit chunk handle
  - Handle assigned by the master at chunk creation time
- Chunkservers store chunks on local disks as Linux files

25 Master Server
- Holds all metadata:
  - Namespace (directory hierarchy)
  - Access control information (per file)
  - Mapping from files to chunks
  - Current locations of chunks (chunkservers)
- Delegates consistency management
- Garbage-collects orphaned chunks
- Migrates chunks between chunkservers
- Holds all metadata in RAM; very fast operations on file system metadata
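
A minimal Python sketch of the kind of in-RAM tables this implies. The class and field names are illustrative assumptions, not GFS's actual data structures:

    from dataclasses import dataclass, field
    from typing import Dict, List, Set

    @dataclass
    class ChunkInfo:
        version: int = 0
        locations: Set[str] = field(default_factory=set)        # chunkserver addresses

    @dataclass
    class FileInfo:
        chunk_handles: List[int] = field(default_factory=list)  # chunk index -> handle
        acl: Set[str] = field(default_factory=set)               # per-file access control

    class MasterMetadata:
        """Everything lives in RAM. The namespace and file-to-chunk mapping are made
        durable via the operation log; chunk locations are not persisted, since the
        master re-learns them by asking chunkservers at startup (see slide 41)."""
        def __init__(self) -> None:
            self.namespace: Dict[str, FileInfo] = {}   # full path -> file metadata
            self.chunks: Dict[int, ChunkInfo] = {}     # chunk handle -> chunk state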

26 Metadata
- ~64 bytes of metadata for each 64 MB chunk
- So a ~1 GB in-memory index covers roughly 1000 TB of file data
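
The arithmetic behind those numbers, as a quick Python check (the 64-byte figure is the approximate one from the slide):

    CHUNK_SIZE = 64 * 1024**2      # 64 MB chunks
    META_PER_CHUNK = 64            # ~64 bytes of master metadata per chunk

    total_file_data = 1000 * 1024**4                   # 1000 TB
    num_chunks = total_file_data // CHUNK_SIZE         # ~16.4 million chunks
    index_bytes = num_chunks * META_PER_CHUNK
    print(f"{num_chunks:,} chunks -> {index_bytes / 1024**3:.2f} GB of metadata")   # ~0.98 GB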

27 Chunkserver
- Read/write requests specify a chunk handle and byte range
- Chunks are replicated on a configurable number of chunkservers (default: 3)
- No caching of file data (beyond the standard Linux buffer cache)

28 GFS Client
- Applications link against the GFS library.
- No POSIX API; no need to go through the VFS layer.

29 GFS Client
- Issues control (metadata) requests to the master server
- Issues data requests directly to chunkservers
- Caches metadata, but does no caching of data
  - No consistency difficulties among clients
  - Streaming reads (read once) and appending writes (write once) do not benefit much from client-side caching

30 Client Read
- Application requests (file name, offset)
- Client translates the offset to a chunk index and sends the master: read(file name, chunk index)
- Master's reply: chunk ID, chunk version number, locations of replicas
- Client sends the "closest" chunkserver holding a replica: read(chunk ID, byte range)
  - "Closest" is determined by IP address on a simple rack-based network topology
- Chunkserver replies with the data
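
A minimal Python sketch of this read path. The master.lookup(...) call, the chunkservers table, and the "closest replica" choice are stand-ins for the real RPCs, not the actual GFS client API:

    CHUNK_SIZE = 64 * 1024**2   # 64 MB

    class ReadClient:
        def __init__(self, master, chunkservers):
            self.master = master                 # assumed to expose lookup(name, chunk_index)
            self.chunkservers = chunkservers     # address -> chunkserver stub
            self.meta_cache = {}                 # (file, chunk index) -> (handle, replicas)

        def read(self, filename, offset, length):
            chunk_index = offset // CHUNK_SIZE   # translate application offset to chunk index
            key = (filename, chunk_index)
            if key not in self.meta_cache:       # metadata is cached; file data never is
                handle, _version, replicas = self.master.lookup(filename, chunk_index)
                self.meta_cache[key] = (handle, replicas)
            handle, replicas = self.meta_cache[key]
            closest = replicas[0]                # stand-in for the rack-aware "closest" choice
            return self.chunkservers[closest].read(handle, offset % CHUNK_SIZE, length)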

31 Client Write
- Some chunkserver is the primary for each chunk
  - Master grants a lease to the primary (typically for 60 sec.)
  - Leases are renewed using periodic heartbeat messages between the master and chunkservers
- Client asks the master for the primary and secondary replicas of each chunk
- Client sends data to the replicas in a daisy chain
  - Pipelined: each replica forwards as it receives
  - Takes advantage of full-duplex Ethernet links

32 Client Write (2)

33 Client Write (3)
- The client pushes the data to all the replicas, chained as above.
- Client sends the write request (control) to the primary
- Primary assigns a serial number to the write request, providing ordering
- Primary forwards the write request with the same serial number to the secondaries
- Secondaries all reply to the primary after completing the write
- Primary replies to the client
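
A sketch of the ordering step in Python, with the data push already done and the replicas modeled as in-process objects (all names are illustrative):

    import itertools

    class Secondary:
        def __init__(self):
            self.applied = []                       # (serial, write id), in the primary's order

        def apply(self, serial, write_id):
            self.applied.append((serial, write_id))
            return True                             # ack back to the primary

    class Primary:
        def __init__(self, secondaries):
            self.serial = itertools.count(1)        # single monotonically increasing sequence
            self.secondaries = secondaries
            self.applied = []

        def write(self, write_id):
            n = next(self.serial)                   # 1. choose the position in the global order
            self.applied.append((n, write_id))      # 2. apply locally in that order
            acks = [s.apply(n, write_id)            # 3. forward the same serial to secondaries
                    for s in self.secondaries]
            return all(acks)                        # 4. reply to the client only if all ack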

34 Client Record Append
- Common case: the record fits in the current last chunk:
  - Primary appends the data to its own replica
  - Primary tells the secondaries to do the same at the same byte offset in theirs
  - Primary replies with success to the client

35 Client Record Append (2)
- When the data won't fit in the last chunk:
  - Primary fills the current chunk with padding
  - Primary instructs the other replicas to do the same
  - Primary replies to the client: "retry on the next chunk"
- If a record append fails at any replica, the client retries the operation
  - So replicas of the same chunk may contain different data, including duplicates of all or part of the record
- What guarantee does GFS provide on success?
  - The data is written at least once as an atomic unit
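
A small Python sketch of the primary's decision for record append, with the chunk modeled as an in-memory bytearray purely for illustration:

    CHUNK_SIZE = 64 * 1024**2

    def record_append(chunk: bytearray, record: bytes):
        """Return the offset the primary chose, or None for "retry on the next chunk"."""
        if len(chunk) + len(record) <= CHUNK_SIZE:
            offset = len(chunk)                     # GFS picks the offset, not the client
            chunk.extend(record)                    # secondaries are told: write at `offset`
            return offset
        # Record does not fit: pad out the current chunk (on every replica) and make the
        # client retry on the next chunk -- one source of padding and duplicate records.
        chunk.extend(b"\0" * (CHUNK_SIZE - len(chunk)))
        return None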

36 GFS: Consistency Model
- Changes to the namespace (i.e., metadata) are atomic
  - Done by the single master server!
  - Master uses a log to define a global total order of namespace-changing operations

37 GFS: Consistency Model (2)
- Changes to data are ordered as chosen by a primary
  - All replicas will be consistent (unless there is a failure)
  - But multiple writes from the same client may be interleaved or overwritten by concurrent operations from other clients
- Record append completes at least once, at an offset of GFS's choosing
  - Applications must cope with possible duplicates

38 Applications and Record Append Semantics
- Applications should use self-describing records and checksums when using record append
  - The reader can then identify padding and record fragments
- If an application cannot tolerate duplicated records, it should include a unique ID in each record
  - The reader can use the unique IDs to filter duplicates
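
A sketch of what such self-describing records could look like in Python. The framing format (magic marker, length, per-record ID, CRC) is an assumption for illustration, not a format GFS itself defines:

    import struct, uuid, zlib

    MAGIC = 0xFEEDFACE   # marker so readers can resynchronize past padding/fragments

    def frame(payload: bytes) -> bytes:
        """Wrap a payload as: magic, length, unique record id, payload, CRC32."""
        header = struct.pack(">II16s", MAGIC, len(payload), uuid.uuid4().bytes)
        return header + payload + struct.pack(">I", zlib.crc32(header + payload))

    def read_records(chunk: bytes):
        """Yield payloads, skipping padding/garbage and filtering duplicate appends."""
        seen, pos = set(), 0
        while pos + 24 <= len(chunk):
            magic, length, rec_id = struct.unpack_from(">II16s", chunk, pos)
            end = pos + 24 + length + 4
            if magic != MAGIC or end > len(chunk):
                pos += 1                            # not a record start: skip one byte
                continue
            (crc,) = struct.unpack_from(">I", chunk, end - 4)
            if crc != zlib.crc32(chunk[pos:end - 4]):
                pos += 1                            # corrupt or partial record fragment
                continue
            if rec_id not in seen:                  # filter records duplicated by retries
                seen.add(rec_id)
                yield chunk[pos + 24:end - 4]
            pos = end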

39 Logging at Master
- The master has all metadata information
  - Lose it, and you've lost the file system!
- The master logs all client requests to disk sequentially
- It replicates log entries to remote backup servers
- It only replies to the client after the log entries are safe on disk on itself and the backups!
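
A minimal sketch of that logging discipline in Python; the log format, file name, and backup interface are assumptions made for the example:

    import json, os

    class OperationLog:
        def __init__(self, path="master_oplog.jsonl", backups=()):
            self.f = open(path, "a", encoding="utf-8")
            self.backups = list(backups)          # objects assumed to expose append(entry)

        def commit(self, op: dict) -> None:
            entry = json.dumps(op)
            self.f.write(entry + "\n")            # 1. append to the local log, in order
            self.f.flush()
            os.fsync(self.f.fileno())             # 2. force it to disk
            for b in self.backups:                # 3. replicate to remote backup masters
                b.append(entry)
            # 4. only now apply the operation in RAM and reply to the client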

40 Chunk Leases and Version Numbers
- If there is no outstanding lease when a client requests a write, the master grants a new one
- Chunks have version numbers
  - Stored on disk at the master and at chunkservers
  - Each time the master grants a new lease, it increments the version and informs all replicas
- The master can revoke leases
  - e.g., when a client requests a rename or snapshot of a file
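
A sketch of lease granting with version numbers in Python; the tables and the way the primary is chosen are illustrative assumptions:

    import time

    LEASE_SECONDS = 60

    class LeaseTable:
        def __init__(self):
            self.version = {}     # chunk handle -> current version number
            self.lease = {}       # chunk handle -> (primary address, expiry timestamp)

        def grant(self, handle, replicas):
            primary, expiry = self.lease.get(handle, (None, 0.0))
            if primary is None or time.time() >= expiry:                 # no outstanding lease
                self.version[handle] = self.version.get(handle, 0) + 1   # bump version number
                primary = replicas[0]                                    # pick a replica as primary
                self.lease[handle] = (primary, time.time() + LEASE_SECONDS)
                # (GFS also pushes the new version to all replicas and records it on disk)
            return primary, self.version[handle]

        def revoke(self, handle):                 # e.g. before a file rename or snapshot
            self.lease.pop(handle, None)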

41 What If the Master Reboots?
- Replays the log from disk
  - Recovers namespace (directory) information
  - Recovers the file-to-chunk-ID mapping
- Asks chunkservers which chunks they hold
  - Recovers the chunk-ID-to-chunkserver mapping
- If a chunkserver has an older chunk version, that replica is stale
  - The chunkserver was down at lease renewal
- If a chunkserver has a newer chunk version, adopt its version number
  - The master may have failed while granting the lease
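
A sketch of the location-recovery step in Python, after the log replay has already restored the namespace and version numbers; plain dicts stand in for the real state:

    def recover_locations(master_versions, chunkserver_reports):
        """master_versions: {chunk handle: version known to the master}
        chunkserver_reports: {server address: {chunk handle: version it holds}}
        Returns {chunk handle: set of servers holding an up-to-date replica}."""
        locations = {}
        for server, held in chunkserver_reports.items():
            for handle, version in held.items():
                known = master_versions.get(handle)
                if known is None:
                    continue                           # unknown chunk: garbage-collect later
                if version < known:
                    continue                           # stale replica (missed a lease grant)
                if version > known:
                    master_versions[handle] = version  # master died mid-grant: adopt it
                    locations[handle] = set()          # replicas at the old version are now stale
                locations.setdefault(handle, set()).add(server)
        return locations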

42 What if a Chunkserver Fails?
- Master notices missing heartbeats
- Master decrements the replica count for all chunks on the dead chunkserver
- Master re-replicates chunks missing replicas in the background
  - Highest priority goes to the chunks missing the greatest number of replicas
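
A sketch of that prioritization in Python: a min-heap keyed on the number of live replicas, so the chunks that have lost the most replicas are cloned first (the data shapes are assumptions):

    import heapq

    REPLICATION_TARGET = 3

    def rereplication_queue(chunk_locations, dead_server):
        """chunk_locations: {chunk handle: set of chunkserver addresses}."""
        heap = []
        for handle, replicas in chunk_locations.items():
            live = replicas - {dead_server}                 # drop the failed server
            if len(live) < REPLICATION_TARGET:
                heapq.heappush(heap, (len(live), handle))   # fewest live replicas first
        return heap

    # Example: a chunk left with one replica is re-replicated before one with two.
    q = rereplication_queue({1: {"cs1", "cs2", "cs3"}, 2: {"cs1", "cs4"}}, dead_server="cs1")
    print([heapq.heappop(q) for _ in range(len(q))])        # [(1, 2), (2, 1)]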

43 GFS: Summary
- Success: used actively by Google to support its search service and other applications
  - Availability and recoverability on cheap hardware
  - High throughput by decoupling control and data
  - Supports massive data sets and concurrent appends
- Semantics not transparent to applications
  - Must verify file contents to avoid inconsistent regions and repeated appends (at-least-once semantics)
- Performance not good for all applications
  - Assumes a read-once, write-once workload (no client caching!)

44 Reading
- Case study: "GFS: Evolution on Fast-forward", a discussion between Kirk McKusick and Sean Quinlan about the origin and evolution of the Google File System.
- http://queue.acm.org/detail.cfm?id=1594206
- How things are evolving at Google.

