
1 GFS

2 Trends
- Component failures are the norm rather than the exception: at any time some machines are down, and some are broken and will never be fixed.
- Files are huge by traditional standards: multi-GB files and hundreds of large files are common. Small files are supported but not optimized for.
- Files are mutated by appending rather than overwriting. Once written, files typically don't change; they are often just read, and then only sequentially. Files are often being read or written by many entities at a time.
- Applications and the file system API are co-designed.
- Bandwidth is more important than latency; the workload is mostly offline processing.

3 Architecture
- A single master and multiple chunkservers, accessed by multiple clients. Clients and chunkservers may run on the same machine.
- Files are divided into fixed-size chunks. Each chunk has a globally unique chunk handle assigned by the master.
- Chunkservers store chunks on local disks. Each chunk is replicated on multiple chunkservers (the default replication factor is 3).
- The master maintains all metadata: access control, namespace, the mapping from files to chunks, and the current locations of chunks. It also controls chunk lease management, garbage collection of orphaned chunks, and chunk migration between chunkservers.
- The master periodically communicates with each chunkserver using heartbeat messages.

4 Clients
- Clients communicate with the master only for metadata operations.
- Data-bearing communication happens directly with chunkservers.
- No POSIX API support; GFS does not hook into vnodes, inodes, etc.
- Neither the client nor the chunkserver caches file data, so there are no cache coherence issues.

5 Architecture

6 Single Master
- Having a single master simplifies the design and enables more sophisticated placement and replication decisions using global knowledge.
- Need to ensure the master doesn't become a bottleneck: reads and writes of data never go through the master.
- Clients ask the master where to go for data, then read the chunk from a chunkserver (and this location information can be cached).

7 Simple Read Example
1. The client translates the file name and byte offset into a chunk index within the file (the chunk size is fixed).
2. The client sends the master a request containing the file name and chunk index.
3. The master replies with the corresponding chunk handle and the locations of the replicas.
4. The client caches this information (file name + chunk index -> chunk handle and replica locations).
5. The client sends a request to one of the replicas (usually the closest). The request specifies the chunk handle and a byte range within the chunk.
Further reads of the same chunk require no more interaction with the master. The client can also ask for multiple chunks at once, and the master can reply with information about the following chunks as well. A sketch of this read path follows.
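A minimal sketch of the client-side read path described above, assuming hypothetical RPC stubs (askMaster, readFromReplica) and the 64 MB chunk size from the next slide; the names are illustrative, not the real GFS client API.

```go
package main

import "fmt"

const chunkSize = 64 << 20 // 64 MB, as described on the next slide

// ChunkInfo is what the master returns for a (file, chunk index) query.
type ChunkInfo struct {
	Handle   uint64   // globally unique chunk handle
	Replicas []string // chunkserver addresses holding a replica
}

// cache maps "filename:chunkIndex" -> ChunkInfo, so repeated reads of the
// same chunk skip the master entirely.
var cache = map[string]ChunkInfo{}

// askMaster and readFromReplica stand in for the real RPCs (hypothetical).
func askMaster(file string, chunkIndex int64) ChunkInfo {
	return ChunkInfo{Handle: 42, Replicas: []string{"cs1:7000", "cs2:7000", "cs3:7000"}}
}

func readFromReplica(addr string, handle uint64, offsetInChunk, length int64) []byte {
	return make([]byte, length) // placeholder for the data-bearing RPC
}

// Read translates a byte offset into a chunk index, resolves the chunk via
// the master (or the cache), and reads the byte range from one replica.
func Read(file string, offset, length int64) []byte {
	chunkIndex := offset / chunkSize // step 1: offset -> chunk index
	key := fmt.Sprintf("%s:%d", file, chunkIndex)
	info, ok := cache[key]
	if !ok {
		info = askMaster(file, chunkIndex) // steps 2-3
		cache[key] = info                  // step 4
	}
	// Step 5: contact one replica (here simply the first one).
	return readFromReplica(info.Replicas[0], info.Handle, offset%chunkSize, length)
}

func main() {
	data := Read("/home/user/foo", 200<<20, 4096) // 4 KB at offset 200 MB (chunk index 3)
	fmt.Println(len(data), "bytes read")
}
```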

8 Chunk Size
- Chunk size is 64 MB.
- Each replica is stored as a plain Linux file on a chunkserver. Lazy space allocation avoids wasting space to internal fragmentation.
- Large chunk size:
  (+) Reduces how often clients need to go to the master.
  (+) Clients can keep a persistent TCP connection to a chunkserver (a larger chunk means more operations against the same chunkserver).
  (+) Reduces the size of the metadata on the master.
  (-) Might create hotspots; this can be mitigated by a higher replication factor for popular chunks.

9 Metadata
The master stores three types of metadata:
(1) File and chunk namespaces
(2) The mapping from files to chunks
(3) The location of each chunk's replicas
All of it is kept in memory. (1) and (2) are kept persistent by logging mutations to an operation log, which is kept on the master's local disk and replicated to remote machines. Chunk locations are not persisted: at startup the master asks each chunkserver what it has. The sketch below shows one way these structures could be laid out.
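A minimal sketch of the master's in-memory structures implied by the three metadata types above; the field layout and names are assumptions for illustration, not the actual GFS implementation.

```go
package main

// FileMetadata covers types (1) and (2): a namespace entry plus the ordered
// list of chunk handles that make up the file. Both are persisted via the
// operation log.
type FileMetadata struct {
	Path         string   // e.g. "/home/user/foo"
	ChunkHandles []uint64 // chunk index -> globally unique chunk handle
}

// ChunkMetadata covers type (3): replica locations, which are NOT persisted.
// They are rebuilt at startup by asking each chunkserver what it holds.
type ChunkMetadata struct {
	Version  uint64   // chunk version number, used to detect stale replicas
	Replicas []string // current chunkserver addresses, refreshed via heartbeats
}

// Master keeps everything in RAM so it can scan it quickly (garbage
// collection, re-replication, load balancing).
type Master struct {
	Files  map[string]*FileMetadata  // namespace + file -> chunks (persistent)
	Chunks map[uint64]*ChunkMetadata // chunk handle -> replica info (volatile)
}

func main() {
	m := &Master{
		Files:  map[string]*FileMetadata{},
		Chunks: map[uint64]*ChunkMetadata{},
	}
	m.Files["/home/user/foo"] = &FileMetadata{Path: "/home/user/foo", ChunkHandles: []uint64{42}}
	m.Chunks[42] = &ChunkMetadata{Version: 1, Replicas: []string{"cs1:7000", "cs2:7000", "cs3:7000"}}
	_ = m
}
```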

10 Data Structures
- Keeping the data structures in memory is fast.
- The master can scan its tables for: garbage collection, re-replication (after chunkserver failures), and chunk migration to balance load and disk usage.
- The approach is limited by the amount of RAM available, but the master keeps roughly 64 bytes per 64 MB chunk, and more RAM can always be added. The arithmetic below shows why this is cheap.
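A rough worked example (numbers assumed, not from the slides): with 1 PB of file data and 64 MB chunks there are 2^50 / 2^26 = 16,777,216 chunks; at roughly 64 bytes of metadata each, that is 2^30 bytes, about 1 GiB of chunk metadata, which fits comfortably in the master's RAM.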

11 Chunk Locations
- Rather than keeping chunk locations persistently in the master, the chunkservers have the final say over what they hold. This simplifies the design.
- These are not rare events: chunkservers go offline, get renamed, restart, get added, etc.

12 Operation Log
- Contains a record of metadata changes; it is the only persistent record of metadata.
- Serves as a logical timeline that defines the order of concurrent operations.
- Metadata changes are not visible to clients until they are persistent; the log is replicated remotely before the master responds to client requests as successful.
- Recovery happens by replaying the log file.
- The master can also checkpoint its state to keep the log file small. Recovery then loads the latest checkpoint and replays only the log records written after it.
- The master switches to a new log file while checkpointing, so new updates go to the new log and the checkpoint is built from the old one. A sketch of this recovery path follows.
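A minimal sketch of the checkpoint-plus-replay recovery described above, assuming a hypothetical log record format; the real system's compact checkpoint representation is not modeled here.

```go
package main

import "fmt"

// LogRecord is a single logged metadata mutation (hypothetical format).
type LogRecord struct {
	Op   string // e.g. "create", "delete"
	Path string
}

// State is the master's in-memory metadata, reduced here to a set of paths.
type State struct {
	Files map[string]bool
}

func (s *State) apply(r LogRecord) {
	switch r.Op {
	case "create":
		s.Files[r.Path] = true
	case "delete":
		delete(s.Files, r.Path)
	}
}

// Recover rebuilds the master state: load the latest checkpoint, then replay
// only the log records written after that checkpoint was taken.
func Recover(checkpoint State, logAfterCheckpoint []LogRecord) State {
	s := checkpoint
	for _, r := range logAfterCheckpoint {
		s.apply(r)
	}
	return s
}

func main() {
	cp := State{Files: map[string]bool{"/home/user/foo": true}}
	tail := []LogRecord{
		{Op: "create", Path: "/home/user/bar"},
		{Op: "delete", Path: "/home/user/foo"},
	}
	s := Recover(cp, tail)
	fmt.Println(s.Files) // map[/home/user/bar:true]
}
```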

13 Consistency Model
- File creation is handled by the master. Locking guarantees correctness, and the operation log defines a global order over these operations.
- A file region is consistent if all clients see the same data, regardless of which replica they read from.
- A file region is defined if it is consistent and clients see the mutation in its entirety.
- Concurrent successful mutations leave a region undefined but consistent: the data is mingled from many writes.
- A failed mutation leaves a region inconsistent: clients may see different data.
- Mutations may be writes or record appends.

14 Defined & Consistent
- After a sequence of successful mutations, the mutated file region is guaranteed to be defined and to contain the data written by the last mutation.
- GFS guarantees this by:
  - Applying mutations to a chunk in the same order on all replicas.
  - Using chunk version numbers to detect any replica that has become stale because it missed mutations while its chunkserver was down. Stale replicas are not returned by the master when it is queried for chunk locations, and they are garbage collected as soon as possible.
- Clients cache chunk locations, so there is a window in which they might read from a stale replica. The window is limited by the cache entry's timeout and by the next open of the same file. A sketch of stale-replica detection follows.
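A minimal sketch of stale-replica detection using chunk version numbers, as described above; the data and names are illustrative, and only the compare-against-the-master's-version idea comes from the slide.

```go
package main

import "fmt"

// replicaVersions maps chunkserver address -> the version of one chunk it
// reported at startup or in a heartbeat (illustrative data).
var replicaVersions = map[string]uint64{
	"cs1:7000": 7,
	"cs2:7000": 7,
	"cs3:7000": 6, // was down during the last mutation, so it is stale
}

// masterVersion is the current version the master recorded for the chunk.
const masterVersion uint64 = 7

// liveReplicas returns only up-to-date replicas; stale ones are withheld
// from clients and scheduled for garbage collection.
func liveReplicas() (fresh, stale []string) {
	for addr, v := range replicaVersions {
		if v < masterVersion {
			stale = append(stale, addr)
		} else {
			fresh = append(fresh, addr)
		}
	}
	return
}

func main() {
	fresh, stale := liveReplicas()
	fmt.Println("serve from:", fresh)
	fmt.Println("garbage collect:", stale)
}
```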

15 Application Use Cases
- A writer generates a file from beginning to end using appends. Checkpoints may include application-level checksums. Appending is far more efficient than random writes, and the writer can restart incrementally from a checkpoint.
- Many writers concurrently append to a file to merge results. Readers may deal with duplicates by using checksums, or filter them out using unique identifiers within the records.

16 Chunk Leases
- The master grants a chunk lease to one of the replicas, called the primary.
- The primary picks a serial order for all mutations to the chunk; all other replicas follow this order when applying updates. The global mutation order is thus defined by the lease grant order and, within a lease, by the serial numbers assigned by the primary.
- Leases time out after 60 seconds, but the primary can request and receive extensions as long as mutations are ongoing. These requests and grants are piggybacked on the heartbeat messages.
- The master may revoke a lease before it expires (for instance when a file is renamed). If communication between master and primary breaks down, the lease can safely be re-granted after it expires. A sketch of this lease bookkeeping follows.
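A minimal sketch of the master-side lease bookkeeping above (60-second timeout, extension while mutations are ongoing); the structure and method names are assumptions for illustration.

```go
package main

import (
	"fmt"
	"time"
)

const leaseDuration = 60 * time.Second

// Lease records which replica is primary for a chunk and until when.
type Lease struct {
	Primary string
	Expires time.Time
}

// leases maps chunk handle -> current lease (master-side state).
var leases = map[uint64]*Lease{}

// GrantOrExtend gives the lease to a replica, or extends it if that replica
// already holds it; in real GFS extensions ride on heartbeat messages.
func GrantOrExtend(chunk uint64, replica string, now time.Time) *Lease {
	l, ok := leases[chunk]
	if ok && l.Primary == replica && now.Before(l.Expires) {
		l.Expires = now.Add(leaseDuration) // extension
		return l
	}
	if !ok || now.After(l.Expires) {
		l = &Lease{Primary: replica, Expires: now.Add(leaseDuration)} // fresh grant
		leases[chunk] = l
	}
	return l // if another replica holds a valid lease, it is returned unchanged
}

func main() {
	now := time.Now()
	l := GrantOrExtend(42, "cs1:7000", now)
	fmt.Println("primary:", l.Primary, "expires in:", l.Expires.Sub(now))
}
```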

17 Control Flow
1. The client asks the master which chunkserver holds the current lease for the chunk and where the other replicas are. (If no one holds a lease, the master grants one.)
2. The master replies with the identity of the primary and the locations of the secondaries. The client caches this for future mutations and only needs to contact the master again when the primary's lease expires.
3. The client pushes the data to all replicas, in any order. Each chunkserver stores the data in an LRU cache until it is used or aged out.
4. Once all replicas have acknowledged receiving the data, the client sends a write request to the primary. The primary assigns consecutive serial numbers to all the mutations.
5. The primary forwards the write request to all of the secondary replicas. Each replica applies mutations in the same serial-number order.
6. The secondaries all reply to the primary indicating that the operation has completed.
7. The primary replies to the client, reporting any failures, which trigger retries of steps 3-7.
A sketch of this write path follows.
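A minimal sketch of the client side of steps 3-7 above, with hypothetical RPC stubs (pushData, writeToPrimary); error handling is reduced to a single retry loop.

```go
package main

import (
	"errors"
	"fmt"
)

// pushData and writeToPrimary stand in for the real RPCs (hypothetical).
// pushData sends data to one replica's LRU buffer; writeToPrimary asks the
// primary to assign serial numbers and forward the write to the secondaries.
func pushData(replica string, data []byte) error { return nil }

func writeToPrimary(primary string, secondaries []string, data []byte) error { return nil }

// Write performs steps 3-7: push data to every replica, then issue the write
// through the primary; on any failure, retry the whole sequence a few times.
func Write(primary string, secondaries []string, data []byte) error {
	for attempt := 0; attempt < 3; attempt++ {
		ok := true
		for _, r := range append([]string{primary}, secondaries...) {
			if err := pushData(r, data); err != nil { // step 3
				ok = false
				break
			}
		}
		if !ok {
			continue
		}
		if err := writeToPrimary(primary, secondaries, data); err != nil { // steps 4-7
			continue
		}
		return nil
	}
	return errors.New("write failed after retries")
}

func main() {
	err := Write("cs1:7000", []string{"cs2:7000", "cs3:7000"}, []byte("record"))
	fmt.Println("write error:", err)
}
```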

18 Control & Data Flow
- Data is pushed linearly along a chain of machines rather than fanned out.
- Each chunkserver pushes to the closest machine that has not yet received the data, with distance estimated from IP addresses.
- The transfer is pipelined: a chunkserver starts forwarding data as soon as it starts receiving it. A sketch of the chain ordering follows.
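A minimal sketch of ordering the data-push chain by closeness, with closeness crudely approximated by the length of the shared IP-address prefix; the real system estimates distance from its own network topology, so this metric is purely illustrative.

```go
package main

import "fmt"

// sharedPrefix counts how many leading characters two IP strings share,
// a crude stand-in for network distance estimated from IP addresses.
func sharedPrefix(a, b string) int {
	n := 0
	for n < len(a) && n < len(b) && a[n] == b[n] {
		n++
	}
	return n
}

// buildChain orders the replicas into a push chain: starting from the
// client, each hop forwards to the closest machine that has not yet
// received the data.
func buildChain(client string, replicas []string) []string {
	chain := []string{}
	current := client
	remaining := append([]string{}, replicas...)
	for len(remaining) > 0 {
		best := 0
		for i, r := range remaining {
			if sharedPrefix(current, r) > sharedPrefix(current, remaining[best]) {
				best = i
			}
		}
		current = remaining[best]
		chain = append(chain, current)
		remaining = append(remaining[:best], remaining[best+1:]...)
	}
	return chain
}

func main() {
	chain := buildChain("10.1.2.3", []string{"10.2.0.5", "10.1.2.9", "10.1.7.1"})
	fmt.Println(chain) // [10.1.2.9 10.1.7.1 10.2.0.5]
}
```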

19 Master Operations
- Namespace locking prevents conflicting operations. For example, snapshotting /home/user to /save/user must prevent /home/user/foo from being created at the same time.
- The snapshot acquires read locks on /home and /save and write locks on /home/user and /save/user.
- Creating /home/user/foo acquires read locks on /home and /home/user and a write lock on /home/user/foo. The create's read lock on /home/user conflicts with the snapshot's write lock, so the two operations are serialized. A sketch of this locking scheme follows.
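A minimal sketch of per-path read/write locking for the example above, using one sync.RWMutex per namespace component; lock-ordering and other details of the real system are omitted.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// locks holds one read/write lock per namespace path (hypothetical layout).
var (
	mu    sync.Mutex
	locks = map[string]*sync.RWMutex{}
)

func lockFor(path string) *sync.RWMutex {
	mu.Lock()
	defer mu.Unlock()
	if locks[path] == nil {
		locks[path] = &sync.RWMutex{}
	}
	return locks[path]
}

// ancestors returns "/home", "/home/user" for "/home/user/foo".
func ancestors(path string) []string {
	parts := strings.Split(strings.TrimPrefix(path, "/"), "/")
	out := []string{}
	for i := 1; i < len(parts); i++ {
		out = append(out, "/"+strings.Join(parts[:i], "/"))
	}
	return out
}

// createFile takes read locks on every ancestor directory and a write lock
// on the file itself, exactly as in the /home/user/foo example.
func createFile(path string) {
	for _, a := range ancestors(path) {
		l := lockFor(a)
		l.RLock()
		defer l.RUnlock()
	}
	l := lockFor(path)
	l.Lock()
	defer l.Unlock()
	fmt.Println("created", path)
}

func main() {
	createFile("/home/user/foo")
}
```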

20 Chunk Placement
When a chunk is created, the master chooses where to place it:
(1) Place replicas on chunkservers with below-average disk space utilization.
(2) Limit the number of recent creations on each chunkserver, since creation typically indicates heavy write traffic to come.
(3) Spread replicas across racks, so the data survives even if a whole rack (or power/network domain) goes dark.
A sketch of this selection follows.
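A minimal sketch of picking replicas under the three criteria above; the scoring is invented for illustration, and rack spreading is reduced to "at most one replica per rack".

```go
package main

import (
	"fmt"
	"sort"
)

// Chunkserver is the master's view of one server (illustrative fields only).
type Chunkserver struct {
	Addr            string
	Rack            string
	DiskUtilization float64 // fraction of disk used
	RecentCreations int     // creations in the last few minutes
}

// placeChunk picks `replicas` servers, preferring low disk utilization and
// few recent creations, and never placing two replicas in the same rack.
func placeChunk(servers []Chunkserver, replicas int) []string {
	sort.Slice(servers, func(i, j int) bool {
		// Simple invented score: disk usage plus a penalty per recent creation.
		si := servers[i].DiskUtilization + 0.05*float64(servers[i].RecentCreations)
		sj := servers[j].DiskUtilization + 0.05*float64(servers[j].RecentCreations)
		return si < sj
	})
	usedRacks := map[string]bool{}
	chosen := []string{}
	for _, s := range servers {
		if len(chosen) == replicas || usedRacks[s.Rack] {
			continue
		}
		usedRacks[s.Rack] = true
		chosen = append(chosen, s.Addr)
	}
	return chosen
}

func main() {
	servers := []Chunkserver{
		{"cs1:7000", "rackA", 0.40, 1},
		{"cs2:7000", "rackA", 0.20, 0},
		{"cs3:7000", "rackB", 0.30, 5},
		{"cs4:7000", "rackC", 0.60, 0},
	}
	fmt.Println(placeChunk(servers, 3)) // [cs2:7000 cs3:7000 cs4:7000]
}
```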

21 Chunk Re-Replication
- The master re-replicates a chunk when the number of replicas drops below its target.
- Re-replication can be prioritized by: how far below the target the chunk is, whether it belongs to a live file rather than a recently deleted one, and whether a client request is blocked on the chunk.
- The master tells a chunkserver to copy the data over from an existing replica, and it limits the number of active copies. A sketch of this prioritization follows.
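A minimal sketch of scoring chunks for re-replication using the three factors above; the weights are invented for illustration.

```go
package main

import (
	"fmt"
	"sort"
)

// repairCandidate describes a chunk that has fewer replicas than its target.
type repairCandidate struct {
	Handle        uint64
	Missing       int  // target replicas minus live replicas
	FileIsLive    bool // live files beat recently deleted ones
	ClientBlocked bool // a client request is currently waiting on this chunk
}

// priority combines the three factors with invented weights.
func priority(c repairCandidate) int {
	p := 10 * c.Missing
	if c.FileIsLive {
		p += 5
	}
	if c.ClientBlocked {
		p += 20
	}
	return p
}

func main() {
	queue := []repairCandidate{
		{Handle: 1, Missing: 1, FileIsLive: true},
		{Handle: 2, Missing: 2, FileIsLive: false},
		{Handle: 3, Missing: 1, FileIsLive: true, ClientBlocked: true},
	}
	sort.Slice(queue, func(i, j int) bool { return priority(queue[i]) > priority(queue[j]) })
	for _, c := range queue {
		fmt.Printf("chunk %d priority %d\n", c.Handle, priority(c))
	}
}
```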

22 Master Garbage Collection
- Deletion is logged immediately, and the file is renamed to a hidden name. It can still be read under that name.
- The file is only actually removed after k days (some configurable period).
- When the hidden file is removed, all of its metadata is deleted.
- The heartbeat exchange is used to coordinate cleanup of the deleted chunks on the chunkservers. A sketch follows.
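A minimal sketch of delete-as-rename plus delayed removal, following the slide; the hidden-name format and the retention period are assumptions.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

const retention = 3 * 24 * time.Hour // "k days", value assumed

// namespace maps path -> deletion timestamp (zero time means the file is live).
var namespace = map[string]time.Time{}

// Delete logs the deletion and renames the file to a hidden name; the data
// is still readable under that hidden name until garbage collection.
func Delete(path string, now time.Time) string {
	hidden := "/.deleted" + path + "." + now.Format("20060102-150405")
	delete(namespace, path)
	namespace[hidden] = now
	return hidden
}

// GarbageCollect removes hidden files older than the retention period; in
// real GFS the chunkservers learn about removed chunks via heartbeats.
func GarbageCollect(now time.Time) {
	for path, deletedAt := range namespace {
		if strings.HasPrefix(path, "/.deleted") && now.Sub(deletedAt) > retention {
			delete(namespace, path) // drops the metadata; chunks become orphaned
			fmt.Println("reclaimed", path)
		}
	}
}

func main() {
	now := time.Now()
	namespace["/home/user/foo"] = time.Time{}
	hidden := Delete("/home/user/foo", now.Add(-4*24*time.Hour)) // deleted 4 days ago
	fmt.Println("hidden as", hidden)
	GarbageCollect(now)
}
```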


24 GFS Client Code
- The GFS client code breaks a write into multiple write operations if it crosses chunk boundaries. A sketch of this split follows.
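A minimal sketch of splitting a write that crosses 64 MB chunk boundaries into per-chunk pieces; the piece struct is hypothetical.

```go
package main

import "fmt"

const chunkSize = int64(64 << 20) // 64 MB

// piece is one per-chunk write produced from a larger application write.
type piece struct {
	ChunkIndex    int64
	OffsetInChunk int64
	Length        int64
}

// splitWrite chops [offset, offset+length) into pieces that each stay
// within a single chunk, which the client then issues as separate writes.
func splitWrite(offset, length int64) []piece {
	var out []piece
	for length > 0 {
		idx := offset / chunkSize
		inChunk := offset % chunkSize
		n := chunkSize - inChunk // room left in this chunk
		if n > length {
			n = length
		}
		out = append(out, piece{ChunkIndex: idx, OffsetInChunk: inChunk, Length: n})
		offset += n
		length -= n
	}
	return out
}

func main() {
	// A 100 MB write starting at offset 60 MB spans chunks 0, 1 and 2.
	for _, p := range splitWrite(60<<20, 100<<20) {
		fmt.Printf("chunk %d: offset %d MB, length %d MB\n",
			p.ChunkIndex, p.OffsetInChunk>>20, p.Length>>20)
	}
}
```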


