1 GFS : Google File System Ömer Faruk İnce Fatih University - Computer Engineering Cloud Computing 25.03.2014

2 Overview What is GFS? What is GFS designed for? Design Overview – Assumptions – Interface – Architecture – Master Server – Chunk Server – Metadata System Interactions Master Operations – Garbage Collection – Fault Tolerance – Data Integrity Performance Conclusions

3 What is GFS? GFS is designed to meet the rapidly growing demands of Google's data-processing needs, and it shares many of the same aims as previous distributed file systems, such as performance, scalability, reliability, and availability.

4 What is GFS designed for? Google needed a good distributed file system for redundant storage of massive amounts of data on cheap, unreliable machines. Why didn't Google use an existing file system? Google's problems are different from anyone else's: a different workload and different design priorities. GFS is designed for Google's applications and workloads, and Google's applications are designed for GFS.

5 Assumptions System is built from many inexpensive commodity components that often fail. System stores a modest number of large files. – A few million files, each typically 100 MB or larger; multi-GB files are common. – Small files must be supported, but need not be optimized for. Workload is primarily: – Large streaming reads – Small random reads – Many large, sequential appends Must efficiently implement concurrent, atomic appends. – Producer-consumer queues – Many-way merging

6 Interface GFS provides a familiar file system interface, though it does not implement a standard API such as POSIX. Files are organized hierarchically in directories and identified by pathnames. It supports the usual operations to create, delete, open, close, read, and write files. Moreover, GFS has snapshot and record append operations. Snapshot creates a copy of a file or a directory tree at low cost. Record append allows multiple clients to append data to the same file concurrently while guaranteeing the atomicity of each individual client's append. It is useful for implementing multi-way merge results and producer-consumer queues that many clients can append to simultaneously without additional locking.
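To make the record-append semantics concrete, here is a toy Python sketch in which the "file" itself picks the offset for each append, so concurrent clients need no extra locking; the ToyFile class and its method names are illustrative assumptions, not the real GFS client API.

```python
import threading

# Toy illustration of record-append semantics: the system, not the client,
# chooses the offset, and each append is atomic. Illustrative only.
class ToyFile:
    def __init__(self):
        self._lock = threading.Lock()
        self._data = bytearray()

    def record_append(self, record: bytes) -> int:
        """Append the record atomically and return the offset chosen for it."""
        with self._lock:
            offset = len(self._data)
            self._data.extend(record)
            return offset

f = ToyFile()
offsets = []
threads = [threading.Thread(target=lambda i=i: offsets.append(f.record_append(b"record-%d\n" % i)))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(offsets))  # four distinct offsets; no client-side locking was needed
```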

7 Architecture A GFS cluster consists of: a single master, multiple chunkservers, and multiple clients, as shown in Figure 1.

8 Chunk Size Files are stored as chunks of fixed size (64 MB). Chunk size is one of the key design parameters.
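Because the chunk size is fixed, a client can translate any file byte offset into a chunk index with plain integer arithmetic; the small sketch below assumes the 64 MB chunk size named above.

```python
CHUNK_SIZE = 64 * 1024 * 1024   # the fixed 64 MB chunk size mentioned above

def locate(byte_offset: int) -> tuple[int, int]:
    """Translate a file byte offset into (chunk index, offset within that chunk)."""
    return byte_offset // CHUNK_SIZE, byte_offset % CHUNK_SIZE

print(locate(200 * 1024 * 1024))   # byte 200 MB -> chunk index 3, offset 8 MB into that chunk
```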

9 Metadata The master stores three major types of metadata:  File and chunk namespaces  Mapping from files to chunks  Locations of each chunk’s replicas All metadata is kept in the master’s memory. The first two types (namespaces and file-to-chunk mapping) are also kept persistent by logging mutations to an operation log stored on the master’s local disk and replicated on remote machines.
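A rough sketch of how these three metadata types might be held in master memory, with only the first two mirrored to an operation log; the class and field names are assumptions for illustration, not the master's real data structures.

```python
from dataclasses import dataclass, field

# Rough sketch of the three metadata types kept in master memory.
@dataclass
class ChunkInfo:
    version: int = 0
    replica_locations: list[str] = field(default_factory=list)   # chunkserver addresses; not persisted

@dataclass
class MasterMetadata:
    namespace: dict[str, list[int]] = field(default_factory=dict)   # pathname -> ordered chunk handles
    chunks: dict[int, ChunkInfo] = field(default_factory=dict)      # chunk handle -> per-chunk info
    operation_log: list[str] = field(default_factory=list)          # stand-in for the persisted log

    def create_file(self, path: str) -> None:
        self.operation_log.append(f"CREATE {path}")   # namespace mutations are logged...
        self.namespace[path] = []                     # ...and only then applied in memory

meta = MasterMetadata()
meta.create_file("/home/user/data")
print(meta.namespace, meta.operation_log)
```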

10 Master’s Responsibilities Metadata storage Namespace management/locking Periodic communication with chunkservers – give instructions, collect state, track cluster health Chunk creation, re-replication, rebalancing – balance space utilization and access speed – spread replicas across racks to reduce correlated failures – re-replicate data if redundancy falls below threshold – rebalance data to smooth out storage and request load

11 Master’s Responsibilities (2) Garbage Collection  simpler, more reliable than traditional file delete  master logs the deletion, renames the file to a hidden name  lazily garbage collects hidden files Stale replica deletion  detect “stale” replicas using chunk version numbers
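A minimal sketch of stale-replica detection by chunk version number, assuming the master bumps the version whenever it grants a new lease; the function name and data shapes are illustrative.

```python
# A replica reporting an older version than the master's missed a mutation
# while its chunkserver was down and is therefore considered stale.
def find_stale_replicas(master_version: int, reported_versions: dict[str, int]) -> list[str]:
    # reported_versions: chunkserver id -> chunk version it holds for this chunk
    return [cs for cs, v in reported_versions.items() if v < master_version]

print(find_stale_replicas(7, {"cs1": 7, "cs2": 6, "cs3": 7}))   # ['cs2'] is stale and will be garbage collected
```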

12 System Interactions 1. The client asks the master which chunkserver holds the current lease for the chunk and the locations of the other replicas. 2. The master replies with the identity of the primary and the locations of the other (secondary) replicas. The client caches this information. 3. The client pushes the data to all replicas. 4. After all replicas acknowledge, the client sends a write request to the primary. 5. The primary forwards the write request to all secondary replicas. 6. The secondaries all reply to the primary indicating that they have completed the operation. 7. The primary replies to the client. Errors are handled by retrying.
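A rough client-side sketch of this seven-step flow; the master, primary, and secondary objects are hypothetical RPC stubs, and none of the method names come from the real GFS code.

```python
# Sketch of the client-side write flow above (steps 1-7); illustrative only.
def write_chunk(master, chunk_handle, data, lease_cache):
    # Steps 1-2: ask the master for the primary (lease holder) and the
    # secondaries, then cache the answer for future mutations to this chunk.
    if chunk_handle not in lease_cache:
        lease_cache[chunk_handle] = master.find_lease_holder(chunk_handle)
    primary, secondaries = lease_cache[chunk_handle]

    # Step 3: push the data to every replica; data flow is decoupled from control flow.
    for replica in [primary] + list(secondaries):
        replica.push_data(chunk_handle, data)

    # Steps 4-7: send the write to the primary, which forwards it to the
    # secondaries, waits for their replies, and then replies to the client.
    try:
        primary.write(chunk_handle, secondaries=secondaries)
    except IOError:
        lease_cache.pop(chunk_handle, None)   # on error, refresh the lease info and retry
        raise
```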

13 Read Algorithm 1. Application originates the read request 2. GFS client translates request and sends it to master 3. Master responds with chunk handle and replica locations.

14 Read Algorithm 4. Client picks a location and sends the request 5. Chunkserver sends requested data to the client 6. Client forwards the data to the application
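Putting the six read steps together, a minimal client-side sketch might look like the following; the master and chunkserver objects are hypothetical RPC stubs, and the location cache stands in for the client-side caching of replica locations.

```python
CHUNK_SIZE = 64 * 1024 * 1024

# Minimal client-side sketch of the six read steps; illustrative only.
def gfs_read(master, path, offset, length, location_cache):
    chunk_index = offset // CHUNK_SIZE                           # step 2: (path, offset) -> chunk index
    key = (path, chunk_index)
    if key not in location_cache:
        location_cache[key] = master.lookup(path, chunk_index)   # step 3: chunk handle + replica locations
    handle, replicas = location_cache[key]
    chunkserver = replicas[0]                                    # step 4: pick a replica (e.g. the closest)
    data = chunkserver.read(handle, offset % CHUNK_SIZE, length) # step 5: chunkserver returns the data
    return data                                                  # step 6: handed back to the application
```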

15 Write Algorithm 1. Application originates the request 2. GFS client translates request and sends it to master 3. Master responds with chunk handle and replica locations

16 Write Algorithm 4. Client pushes write data to all locations. Data is stored in the chunkservers' internal buffers.

17 Write Algorithm 5. Client sends write command to primary 6. Primary determines serial order for data instances in its buffer and writes the instances in that order to the chunk 7. Primary sends the serial order to the secondaries and tells them to perform the write
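A sketch of the primary's side of steps 6 and 7: it assigns a serial number to each buffered mutation, applies it locally in that order, and has the secondaries apply it in the same order; the class and method names are illustrative assumptions, not chunkserver code.

```python
import itertools

# Sketch of how a primary might serialize buffered mutations; illustrative only.
class Primary:
    def __init__(self, local_chunk, secondaries):
        self.local_chunk = local_chunk      # the primary's own replica of the chunk
        self.secondaries = secondaries      # stubs for the secondary replicas
        self.serial = itertools.count(1)
        self.buffer = {}                    # data_id -> bytes, filled by the earlier data push

    def apply_write(self, data_id):
        n = next(self.serial)                             # step 6: assign a serial number...
        self.local_chunk.apply(n, self.buffer[data_id])   # ...and apply locally in that order
        for s in self.secondaries:                        # step 7: secondaries apply in the same order
            s.apply(n, data_id)
        return n
```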

18 Atomic Record Append 1. Client pushes the data to all replicas. 2. Client sends the append request to the primary. The primary checks whether appending the record would exceed the chunk's maximum size. – Within the maximum size: the primary appends the data to its replica (and tells the secondaries to do the same). – Exceeds the maximum size: the primary pads the chunk, sends an error to the client, and the append is retried on the next chunk. 3. Replicas of the same chunk may contain different data, possibly including duplicates of the same record in whole or in part. These are handled by the client; GFS only guarantees that the data is written at least once as an atomic unit.
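A minimal sketch of the primary's size check; the one-quarter-of-chunk-size cap on a single record follows the GFS paper, while the function name and return values are illustrative only.

```python
CHUNK_SIZE = 64 * 1024 * 1024
MAX_RECORD = CHUNK_SIZE // 4   # per the GFS paper, a record append is limited to 1/4 of the chunk size

# Append if the record fits in the current chunk; otherwise pad the chunk and
# make the client retry on the next chunk. Illustrative sketch only.
def record_append_decision(chunk_used: int, record_len: int):
    if record_len > MAX_RECORD:
        return ("reject", None)            # oversized records are not accepted as a single append
    if chunk_used + record_len <= CHUNK_SIZE:
        return ("append", chunk_used)      # write the record at this offset on every replica
    return ("pad_and_retry", None)         # pad to the chunk boundary; client retries on the next chunk

print(record_append_decision(60 * 1024 * 1024, 8 * 1024 * 1024))   # ('pad_and_retry', None)
print(record_append_decision(10 * 1024 * 1024, 8 * 1024 * 1024))   # ('append', 10485760)
```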

19 Snapshot A snapshot is a copy of a file or directory tree at a moment in time. – Used to quickly create branch copies of huge data sets. – Used to checkpoint the current state before experimenting with changes that can later be committed or rolled back easily. When the master receives a snapshot request, it first revokes any outstanding leases on the chunks in the files it is about to snapshot.
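A toy sketch of the copy-on-write step that follows lease revocation: the master duplicates the file-to-chunk metadata under the snapshot name and bumps per-chunk reference counts, so chunk data is only copied later, when a chunk with a count above one is next written. The data shapes here are illustrative assumptions.

```python
# Duplicate metadata, not chunk data; chunks are copied lazily on the next write.
def snapshot(namespace, refcount, src_prefix, dst_prefix):
    for path in [p for p in namespace if p.startswith(src_prefix)]:
        new_path = dst_prefix + path[len(src_prefix):]
        namespace[new_path] = list(namespace[path])       # copy the chunk-handle list only
        for handle in namespace[path]:
            refcount[handle] = refcount.get(handle, 1) + 1

namespace = {"/home/user/f1": [101, 102]}
refcount = {101: 1, 102: 1}
snapshot(namespace, refcount, "/home/user", "/save/user")
print(namespace)   # both paths reference chunks 101 and 102
print(refcount)    # {101: 2, 102: 2}: the next write to either chunk triggers a copy
```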

20 Master Operations Namespace Management and Locking GFS logically represents its namespace as a lookup table mapping full pathnames to metadata. Need locking to prevent: – Two clients from trying to create the same file at the same time. – Changes to a directory tree during snapshotting. Solution: – Lock intervening directories in read mode. – Lock final file or directory in write mode. – For snapshot lock source and target in write mode.
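A minimal sketch of that locking discipline using one lock per pathname, acquired in a consistent top-down order; the real master uses read-write locks (read on the ancestors, write on the leaf), which the plain locks here only approximate.

```python
import threading

# One lock per pathname; acquiring ancestors first in a fixed order prevents
# deadlock between concurrent namespace operations. Illustrative sketch only.
_locks = {}

def _lock_for(path):
    return _locks.setdefault(path, threading.Lock())

def acquire_path_locks(path):
    parts = path.strip("/").split("/")
    ancestors = ["/" + "/".join(parts[:i]) for i in range(1, len(parts))]
    acquired = []
    for p in ancestors + [path]:   # ancestors first, leaf last
        _lock_for(p).acquire()     # in GFS: read lock on ancestors, write lock on the leaf
        acquired.append(p)
    return acquired

def release_path_locks(paths):
    for p in reversed(paths):
        _lock_for(p).release()

held = acquire_path_locks("/home/user/newfile")   # locks /home, /home/user, /home/user/newfile
release_path_locks(held)
```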

21 Replica Management Maximize data reliability and availability Maximize bandwidth utilization – Need to spread chunk replicas across machines and racks

22 Creation, Re-replication and Rebalancing Replicas are created for three reasons: – Chunk creation – Re-replication – Load balancing Creation – Place new replicas on chunkservers with below-average disk space utilization. – Spread replicas across racks. Re-replication – The master re-replicates a chunk as soon as the number of available replicas falls below a user-specified goal. Rebalancing – The master periodically examines the replica distribution and moves replicas for better disk space and load balancing.
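A small sketch of the creation-time placement heuristic: favour chunkservers with low disk utilization and spread the replicas across racks; the chunkserver records and their fields are assumptions for illustration.

```python
# Prefer least-loaded chunkservers and place at most one replica per rack,
# falling back to reusing racks only if there are fewer racks than replicas.
def place_replicas(chunkservers, n=3):
    by_util = sorted(chunkservers, key=lambda cs: cs["util"])   # least-loaded servers first
    chosen, used_racks = [], set()
    for cs in by_util:                 # first pass: at most one replica per rack
        if cs["rack"] not in used_racks:
            chosen.append(cs)
            used_racks.add(cs["rack"])
        if len(chosen) == n:
            return chosen
    for cs in by_util:                 # second pass: fill up if racks are scarce
        if cs not in chosen:
            chosen.append(cs)
        if len(chosen) == n:
            break
    return chosen

servers = [
    {"id": "cs1", "rack": "r1", "util": 0.40},
    {"id": "cs2", "rack": "r1", "util": 0.20},
    {"id": "cs3", "rack": "r2", "util": 0.55},
    {"id": "cs4", "rack": "r3", "util": 0.30},
]
print([cs["id"] for cs in place_replicas(servers)])   # ['cs2', 'cs4', 'cs3'] -- three different racks
```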

23 Garbage Collection Storage is reclaimed lazily by garbage collection. A deleted file is first renamed to a hidden name. Hidden files are removed if they are more than three days old. When a hidden file is removed, its in-memory metadata is erased. The master regularly scans the chunk namespace, identifying orphaned chunks, which are then removed. Chunkservers periodically report the chunks they hold, and the master replies with the identity of all chunks that are no longer present in the master's metadata. The chunkserver is free to delete its replicas of such chunks.
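A toy sketch of the lazy deletion path: delete renames the file to a hidden name carrying a timestamp, and a periodic scan reclaims hidden files older than three days; the hidden-name format here is invented for illustration, not the real GFS convention.

```python
import time

HIDDEN_RETENTION = 3 * 24 * 3600   # hidden files older than three days get reclaimed

def delete(namespace, path):
    # "Delete" just renames the file to a hidden name that records when it was deleted.
    namespace[f"{path}.__deleted__.{int(time.time())}"] = namespace.pop(path)

def gc_scan(namespace, now=None):
    now = now if now is not None else time.time()
    for name in list(namespace):
        if ".__deleted__." in name:
            deleted_at = int(name.rsplit(".", 1)[1])
            if now - deleted_at > HIDDEN_RETENTION:
                del namespace[name]   # its chunks become orphaned and are reclaimed by the chunk scan

ns = {"/logs/old": [7, 8]}
delete(ns, "/logs/old")
gc_scan(ns, now=time.time() + 4 * 24 * 3600)   # four "days" later the hidden file is gone
print(ns)   # {}
```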

24 Fault Tolerance High availability – Fast recovery: master and chunkservers are restartable in a few seconds – Chunk replication: each chunk is replicated on multiple chunkservers on different racks; users can specify different replication levels for different parts of the file namespace (default: 3 replicas) – Shadow masters Data integrity – Checksum every 64 KB block in each chunk

25 Data Integrity Each chunkserver uses checksumming to detect corruption of stored data. Checksums are kept in memory. – Stored separately from the data. On a read error, the error is reported to the master. – The master will re-replicate the chunk. – The requestor reads from other replicas.
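A minimal sketch of read-time verification against per-64 KB-block checksums, using CRC32 as a stand-in for whatever checksum GFS actually uses; on a mismatch the error would be reported to the master, which re-replicates the chunk while the client reads another replica.

```python
import zlib

BLOCK = 64 * 1024   # each chunk is checksummed in 64 KB blocks

# Recompute the checksum of every block touched by the read and compare it with
# the stored value before returning any data. Illustrative sketch only.
def verified_read(chunk_data: bytes, checksums: list[int], offset: int, length: int) -> bytes:
    first, last = offset // BLOCK, (offset + length - 1) // BLOCK
    for i in range(first, last + 1):
        block = chunk_data[i * BLOCK:(i + 1) * BLOCK]
        if zlib.crc32(block) != checksums[i]:
            raise IOError(f"checksum mismatch in block {i}")   # reported to the master in GFS
    return chunk_data[offset:offset + length]

data = bytes(300 * 1024)
sums = [zlib.crc32(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]
print(len(verified_read(data, sums, 100 * 1024, 150 * 1024)))   # 153600
```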

26 Performance Testing GFS cluster consisting of: – One master (plus two master replicas) – 16 chunkservers – 16 clients Machines were: – Dual 1.4 GHz PIII – 2 GB of RAM – Two 80 GB, 5400 RPM disks – 100 Mbps full-duplex Ethernet to a switch – Servers on one switch, clients on another; switches connected via gigabit Ethernet.

27 Reads N clients reading a 4 MB region from a 320 GB file set simultaneously. The read rate drops slightly as the number of clients goes up, due to the growing probability of several clients reading from the same chunkserver at once. Efficiency: about 80% of the per-client network limit for one client, about 75% of the aggregate limit with all clients reading.

28 Writes N clients writing to N distinct files simultaneously. The low write rate (about 50% of the network limit) is due to the delay in propagating data among replicas. Slow writes are not a major problem in practice, since the aggregate write bandwidth delivered to a large number of clients is still sufficient.

29 Record Appends N clients appending to a single file simultaneously. The append rate drops slightly as the number of clients goes up, due to network congestion at the chunkservers holding the file's last chunk. Chunkserver network congestion is not a major issue in practice, where a large number of clients append to many large shared files.

30 Conclusions GFS demonstrates how to support large-scale processing workloads on commodity hardware: – design to tolerate frequent component failures – optimize for huge files that are mostly appended to and then read – feel free to relax and extend the file system interface as required – go for simple solutions (e.g., a single master) GFS has met Google's storage needs, so it is good enough for them.

31 Thanks for listening…

