THE GOOGLE FILE SYSTEM CS 595 LECTURE 8 3/2/2015
TYPES OF CLOUDS Public, Private, Hybrid Clouds Names do not necessarily dictate location Type may depend on whether temporary or permanent
SEND ME INTERESTING LINKS ABOUT CLOUDS
PAPER TO READ Read the paper "GFS: Evolution on Fast-Forward" There is also a link to a longer paper on GFS – the original paper from 2003 I assume you are reading papers as specified in the class schedule
THE ORIGINAL GOOGLE FILE SYSTEM GFS
During the lecture, you should point out problems with GFS design decisions
COMMON GOALS OF GFS AND MOST DISTRIBUTED FILE SYSTEMS Performance Reliability Scalability Availability
GFS DESIGN CONSIDERATIONS Component failures are the norm rather than the exception. File System consists of hundreds or even thousands of storage machines built from inexpensive commodity parts. Files are Huge. Multi-GB Files are common. Each file typically contains many application objects such as web documents. Append, Append, Append. Most files are mutated by appending new data rather than overwriting existing data. Co-Designing Co-designing applications and file system API benefits overall system by increasing flexibility
GFS Why assume hardware failure is the norm? The number of layers in a distributed system means a failure in any one of them can contribute to data corruption. Application OS Memory Disk Network Physical connections Power It is cheaper to assume common failure on inexpensive hardware and account for it, rather than invest in expensive hardware and still experience occasional failure. The quantity and quality of components guarantee that some are not functional at any given time. Constant monitoring, error detection, fault tolerance, and automatic recovery must exist in GFS
INITIAL ASSUMPTIONS System built from inexpensive commodity components that fail Modest number of files: a few million, 100 MB or larger. Multi-GB files are common. Didn't optimize for smaller files 2 kinds of reads: large streaming reads (1 MB or more) and small random reads (which applications often batch and sort) Well-defined semantics: master/slave, producer/consumer, and many-way merge workloads. One producer per machine appends to a file; appends must be atomic High sustained bandwidth chosen over low latency (difference?)
HIGH BANDWIDTH VERSUS LOW LATENCY Example: An airplane flying across the country filled with backup tapes has very high bandwidth, because it delivers more total data to the destination than any existing network could in the same time However – each individual piece of data has high latency
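The tape-plane comparison is easy to check with arithmetic. A minimal sketch with purely hypothetical numbers – 10,000 tapes of 2.5 TB each, a 6-hour flight, and a 10 Gb/s network link:

```python
# Hypothetical figures, chosen only for illustration.
tapes, tape_tb = 10_000, 2.5     # number of tapes, TB per tape
flight_hours = 6                 # cross-country flight time

payload_bits = tapes * tape_tb * 1e12 * 8          # total bits shipped
plane_bps = payload_bits / (flight_hours * 3600)   # effective bandwidth
network_bps = 10e9                                 # a 10 Gb/s link

# The plane moves thousands of Gb/s worth of data, but its "latency"
# for any single byte is six hours rather than milliseconds.
print(f"plane: {plane_bps / 1e9:.0f} Gb/s, network: {network_bps / 1e9:.0f} Gb/s")
```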
INTERFACE GFS – familiar file system interface Files organized hierarchically in directories, path names Create, delete, open, close, read, write Snapshot and record append (allows multiple clients to append simultaneously) This means atomic read/writes – not transactions!
MASTER/SERVERS (SLAVES) Single master, multiple chunkservers Each file divided into fixed-size chunks of 64 MB Typical file system block size: 4 KB Chunks stored by chunkservers on local disks as Linux files Immutable and globally unique 64-bit chunk handle (name or number) assigned at creation Size of namespace: 2^64 chunk handles Size of storage capacity: 2^64 x 64 MB Regulated by storage capacity of the master
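The capacity figure on this slide follows directly from the handle width. A quick sanity check, taking 2^64 as the chunk-handle space and 64 MB as the chunk size:

```python
CHUNK_MB = 64          # fixed chunk size from the slide
handles = 2 ** 64      # globally unique 64-bit chunk handles

capacity_mb = handles * CHUNK_MB   # one chunk per handle
# 64 = 2^6, so the addressable capacity is 2^64 * 2^6 = 2^70 MB.
assert capacity_mb == 2 ** 70
print(f"addressable capacity: 2^70 MB ({capacity_mb:.3e} MB)")
```

In practice the limit is the master's memory for metadata, not the handle space.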
MASTER/SERVERS R or W chunk data specified by chunk handle and byte range Each chunk replicated on multiple chunkservers – default is 3 Why?
MASTER/SERVERS Master maintains all file system metadata – Namespace, access control info, mapping from files to chunks, location of chunks – Controls garbage collection of chunks – Reclaims physical storage when files are deleted – Communicates with each chunkserver through HeartBeat messages – Gives instructions and collects state – Clients interact with master for metadata; chunkservers do the rest, e.g. R/W on behalf of applications – No caching of file data by clients – client working sets are too large, and not caching simplifies coherence For chunkservers – chunks are already stored as local files, so Linux caches the most frequently used in memory
HEARTBEATS What do we gain from HeartBeats? Not only do we get the new state of a remote system, this also updates the master regarding failures. Any chunkserver that fails to respond to a HeartBeat message is assumed dead. This information allows the master to update its metadata accordingly. It also cues the master to create more replicas of the lost data.
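A minimal sketch of how a heartbeat-based failure detector might look. This is not GFS's actual code: the timeout value, class, and method names are all assumptions for illustration.

```python
import time

HEARTBEAT_TIMEOUT = 10.0  # seconds; an assumed value, not from the paper

class Master:
    def __init__(self):
        self.last_seen = {}      # chunkserver id -> time of last HeartBeat
        self.reclone_queue = []  # servers whose chunks need re-replication

    def on_heartbeat(self, server_id, chunk_list):
        # Record liveness; in GFS the exchange also carries
        # instructions to, and state from, the chunkserver.
        self.last_seen[server_id] = time.time()

    def detect_failures(self):
        now = time.time()
        for server_id, seen in list(self.last_seen.items()):
            if now - seen > HEARTBEAT_TIMEOUT:
                del self.last_seen[server_id]         # presumed dead
                self.reclone_queue.append(server_id)  # re-replicate its chunks
```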
CLIENT Client translates offset in file into chunk index within file Send master request with file name/chunk index Master replies with chunk handle and location of replicas Client caches info using file name/chunk index as key Client sends request to one of the replicas (closest) Further reads of same chunk require no interaction Can ask for multiple chunks in same request
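The offset-to-chunk-index translation the client performs is just integer division by the fixed chunk size. A sketch (function names are illustrative, not GFS's client API):

```python
CHUNK_SIZE = 64 * 2 ** 20  # 64 MB fixed chunk size

def chunk_index(offset: int) -> int:
    """Translate a byte offset within a file into its chunk index."""
    return offset // CHUNK_SIZE

def chunk_range(offset: int, length: int) -> range:
    """All chunk indices a read of `length` bytes at `offset` touches,
    supporting the 'ask for multiple chunks in same request' case."""
    return range(chunk_index(offset), chunk_index(offset + length - 1) + 1)

# A 1 MB streaming read at byte 200,000,000 lands entirely in chunk 2.
print(chunk_index(200_000_000))               # -> 2
print(list(chunk_range(200_000_000, 2**20)))  # -> [2]
```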
MASTER OPERATIONS Master executes all namespace operations Manages chunk replicas Makes placement decision Creates new chunks (and replicas) Coordinates various system-wide activities to keep chunks fully replicated Balance load Reclaim unused storage
Do you see any problems? Do you question any design decisions?
MASTER - JUSTIFICATION Single Master – Simplifies design Placement, replication decisions made with global knowledge Doesn’t R/W, so not a bottleneck Client asks master which chunkservers to contact
CHUNK SIZE - JUSTIFICATION 64 MB, larger than typical file system block size Replica stored as plain Linux file, extended as needed Lazy space allocation Reduces interaction of client with master R/W on same chunk only 1 request to master Mostly R/W large sequential files Likely to perform many operations on given chunk (keep persistent TCP connection) Reduces size of metadata stored on master
CHUNK PROBLEMS But – if a file is small – its single chunk may become a hot spot Can fix this with a higher replication factor and by staggering batch application start times
METADATA 3 types: File and chunk namespaces Mapping from files to chunks Location of each chunk’s replicas All metadata in memory of master First two types stored in logs for persistence (on master local disk and replicated remotely)
METADATA Master does not persistently store chunk location info Instead, it polls at startup – asks each chunkserver which replicas it holds Master controls all chunk placement, so it stays up to date afterward This avoids stale location data when disks go bad, chunkservers fail, etc.
METADATA - JUSTIFICATION In memory – fast Periodically scans entire state: garbage collection, re-replication on chunkserver failure, migration to balance load Master maintains < 64 B of metadata for each 64 MB chunk File namespace data: < 64 B per file To support larger file systems: add extra RAM/disks to the master (cheap)
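The "< 64 B per 64 MB chunk" bound makes the in-memory design concrete. A back-of-envelope sketch, assuming exactly that bound, for 1 PiB of stored data:

```python
CHUNK_MB = 64                # chunk size from the slides
META_BYTES_PER_CHUNK = 64    # upper bound stated on this slide

data_pib = 1
chunks = data_pib * 2 ** 50 // (CHUNK_MB * 2 ** 20)   # chunks in 1 PiB
meta_mib = chunks * META_BYTES_PER_CHUNK / 2 ** 20    # master RAM needed

# 2^24 chunks * 64 B = 1 GiB of metadata per PiB of data:
# small enough that "add extra RAM to the master" is plausible.
print(f"{chunks:,} chunks -> ~{meta_mib:.0f} MiB of metadata")
```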
CHUNK SIZE (AGAIN) - JUSTIFICATION 64 MB is large – compare to the typical 4 KB file system block Why large chunks and large files? METADATA! Every file in the system adds to the total metadata overhead that the system must store. More individual files means more data about the data.
OPERATION LOG Historical record of critical metadata changes Provides a logical timeline of concurrent ops Log replicated on remote machines Flush each record to disk locally and remotely Log kept small – checkpoint when it exceeds a size threshold Checkpoint in B-tree form New checkpoint built without delaying mutations (takes about 1 min for 2 M files) Only keep latest checkpoint and subsequent logs
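The log-then-checkpoint cycle above can be sketched as a toy class. The names and threshold are illustrative, and the real checkpoint is a B-tree mapped into memory, not a dict:

```python
class OperationLog:
    """Toy version of the master's operation log + checkpoint cycle."""

    def __init__(self, max_log_records=1000):
        self.checkpoint = {}        # latest checkpoint of metadata state
        self.log = []               # records appended since that checkpoint
        self.max = max_log_records  # size threshold that triggers a checkpoint

    def record(self, key, value):
        # In GFS each record is flushed to disk locally and remotely
        # before the mutation is made visible to clients.
        self.log.append((key, value))
        if len(self.log) > self.max:
            self._checkpoint()

    def _checkpoint(self):
        # Fold the log into the new checkpoint; only this checkpoint
        # and subsequent log records need to be kept.
        self.checkpoint.update(dict(self.log))
        self.log.clear()
```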
SNAPSHOT Snapshot makes a copy of a file Used to create checkpoints or branch copies of huge data sets Master first revokes leases on the chunks involved Newly created snapshot points to the same chunks as the source file After the snapshot, a client writing sends a request to the master to find the lease holder Master gives a lease to a newly copied chunk
SHADOW MASTER Master Replication Replicated for reliability Not mirrors, so may lag primary slightly (fractions of a second) Shadow master reads a replica of the operation log and applies the same sequence of changes to its data structures as the primary does
SHADOW MASTER If master fails: Shadow can take over almost instantly Provides read-only access to the file system even when the primary master is down If the master's machine or disk fails, monitoring infrastructure outside GFS starts a new master with the replicated log Clients only use the canonical name of the master
CREATION, RE-REPLICATION, REBALANCING Master creates chunk Place replicas on chunkservers with below-average disk utilization Limit number of recent creates per chunkserver New chunks may be hot Spread replicas across racks Re-replicate When number of replicas falls below goal Chunkserver unavailable, corrupted, etc. Replicate based on priority (fewest replicas) Master limits number of active clone ops
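A sketch of the placement heuristic described above: rank chunkservers by disk utilization (which naturally prefers below-average ones) and never place two replicas on the same rack. The tuple layout and function name are assumptions, not GFS's interface:

```python
def place_replicas(servers, n=3):
    """Pick n chunkservers for a new chunk's replicas.

    servers: list of (server_id, rack, disk_utilization) tuples.
    """
    # Emptiest disks first: favors servers with below-average utilization.
    ranked = sorted(servers, key=lambda s: s[2])
    chosen, racks_used = [], set()
    for server_id, rack, _util in ranked:
        if rack in racks_used:
            continue  # spread replicas across racks for fault tolerance
        chosen.append(server_id)
        racks_used.add(rack)
        if len(chosen) == n:
            break
    return chosen
```

A fuller version would also cap recent creations per chunkserver, since new chunks may be hot.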
CREATION, RE-REPLICATION, REBALANCING Rebalance Periodically moves replicas for better disk space and load balancing Gradually fills up new chunkserver Removes replicas from chunkservers with below-average free space