1 Question
Scalability vs Elasticity
What is the difference?
2 Homework 1: Installing the open source cloud Eucalyptus in SEC3429
Individual assignment
You will need two machines – one to help with the installation and one on which to install the cloud, so BRING YOUR LAPTOP
A step-by-step guide is provided, but you will also need to use the Eucalyptus Installation Guide
When you are done you will have a cloud with a VM instance running on it
You can use it for future work; if not, you can at least say you have installed a cloud and a VM image
3 Components of Eucalyptus
CLC – cloud controller
Walrus – equivalent of Amazon's S3, stores VM images
SC – storage controller
CC – cluster controllers
NC – node controllers
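A minimal sketch of how these components relate (one CLC and one Walrus per cloud, one CC and one SC per cluster, many NCs per cluster); the cluster and node names are invented for illustration, not taken from a real deployment:

    # Illustrative Eucalyptus topology: one CLC and one Walrus per cloud,
    # one CC and one SC per cluster, many NCs per cluster.
    # Cluster and node names are made up for illustration.
    cloud = {
        "CLC": "cloud-controller",
        "Walrus": "walrus-storage",
        "clusters": {
            "cluster-a": {
                "CC": "cluster-controller-a",
                "SC": "storage-controller-a",
                "NCs": ["node-1", "node-2", "node-3"],
            },
        },
    }

    # Every NC is reachable through exactly one CC; the CLC sits above all clusters.
    for name, cluster in cloud["clusters"].items():
        print(name, "->", cluster["CC"], "manages", len(cluster["NCs"]), "node controllers")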
4 Cloud Controller - CLC
Java program (EC2-compatible interface) and web interface
Administrative interface for cloud management
Resource scheduling
Authentication, accounting, reporting
Only one CLC per cloud
5 Walrus
Written in Java (equivalent to AWS Simple Storage Service, S3)
Persistent storage available to all VMs:
VM images
Application data
Volume snapshots (point-in-time copies)
Can be used as put/get storage as a service
Only one Walrus per cloud
Why is it called Walrus – WS3?
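Because Walrus exposes an S3-compatible interface, a generic S3 client can talk to it. A minimal put/get sketch with boto3; the endpoint URL, bucket name, and credentials are placeholders to replace with your own cloud's values:

    import boto3

    # Hypothetical Walrus endpoint and credentials -- replace with your cloud's values.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://walrus.example.edu:8773/services/objectstorage",
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    # Put an object, then get it back -- Walrus behaves like S3 for simple put/get.
    s3.create_bucket(Bucket="demo-bucket")
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from Walrus")
    obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
    print(obj["Body"].read())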
8 Cluster Controller - CC
Written in C
Front-end for a cluster within the cloud
Communicates with Storage Controller and Node Controller
Manages VM instance execution and SLAs per cluster
9 Availability Zones
Each cluster exists in an availability zone
A cloud can be hosted in multiple locations; each such location is a region
Each region has multiple isolated locations, which are called availability zones
Availability zones within a region are connected through low-latency links
10 Storage Controller - SC
Written in Java (equivalent to AWS Elastic Block Store, EBS)
Communicates with CC and NC
Manages Eucalyptus block volumes and snapshots of instances within a cluster
Used if an application needs larger persistent storage than an instance provides
11 Node Controller - NC
Written in C
Hosts VM instances – where they run
Manages virtual network endpoints
Downloads and caches images from Walrus
Creates and caches instances
Many NCs per cluster
12 Interesting Info on Clouds
What Americans think a compute cloud is
14 Homework
Read the paper on GFS: Evolution on Fast-Forward
Also a link to a longer paper on GFS – the original paper from 2003
I assume you are reading papers as specified in the class schedule
15 The Original Google File System (GFS)
Some slides from Michael Raines
16 During the lecture, you should point out problems with GFS design decisions
17 Common Goals of GFS and Most Distributed File Systems
Performance
Reliability
Scalability
Availability
18 GFS Design Considerations
Component failures are the norm rather than the exception
The file system consists of hundreds or even thousands of storage machines built from inexpensive commodity parts
Files are huge – multi-GB files are common
Each file typically contains many application objects such as web documents
Append, append, append – most files are mutated by appending new data rather than overwriting existing data
Co-design – co-designing applications and the file system API benefits the overall system by increasing flexibility
19 GFS: Why assume hardware failure is the norm?
The number of layers in a distributed system (network, disk, memory, physical connections, power, OS, application) means a failure in any of them can contribute to data corruption.
It is cheaper to assume common failure on inexpensive hardware and account for it than to invest in expensive hardware and still experience occasional failure.
20 Initial Assumptions
System built from inexpensive commodity components that fail
Modest number of files – expect a few million, 100 MB or larger; not optimized for smaller files
Two kinds of reads – large streaming reads (1 MB or more) and small random reads (often batched and sorted)
Well-defined semantics: master/slave, producer/consumer, and many-way merge; one producer per machine appends to a file
Atomic reads/writes
High sustained bandwidth chosen over low latency (what's the difference?)
21 High bandwidth versus low latency
Example: An airplane flying across the country filled with backup tapes has very high bandwidth, because it gets all the data to its destination faster than any existing network
However, each individual piece of data has high latency
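A rough back-of-the-envelope comparison; the tape count, tape capacity, flight time, and network speed below are made-up round numbers used only to illustrate the distinction:

    # Hypothetical numbers, purely illustrative.
    tapes = 1000                   # tapes on the plane
    tape_capacity_tb = 10          # TB per tape
    flight_hours = 5               # coast-to-coast flight

    plane_bandwidth_gbps = (tapes * tape_capacity_tb * 8 * 1000) / (flight_hours * 3600)
    print(f"Plane: ~{plane_bandwidth_gbps:.0f} Gb/s sustained, but ~{flight_hours} h latency per byte")

    network_gbps = 100             # a fast long-haul link
    print(f"Network: {network_gbps} Gb/s sustained, but millisecond latency per byte")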
22 Interface
GFS provides a familiar file system interface
Files organized hierarchically in directories, identified by path names
Create, delete, open, close, read, write
Snapshot and record append (allows multiple clients to append simultaneously)
This means atomic reads/writes – not transactions!
23 Master/Servers (Slaves)
Single master, multiple chunkservers
Each file is divided into fixed-size chunks of 64 MB
Chunks are stored by chunkservers on local disks as Linux files
Each chunk has an immutable and globally unique 64-bit chunk handle (name or number) assigned at creation
25 Master/Servers
Read or write chunk data specified by chunk handle and byte range
Each chunk is replicated on multiple chunkservers – the default is 3
26 Master/Servers
Master maintains all file system metadata: namespace, access control info, mapping from files to chunks, location of chunks
Controls garbage collection of chunks
Communicates with each chunkserver through HeartBeat messages
Clients interact with the master only for metadata; chunkservers do the rest, e.g. reads/writes on behalf of applications
No caching:
For clients – working sets are too large; this also simplifies coherence
For chunkservers – chunks are already stored as local files, and Linux caches the most frequently used ones in memory
27 Heartbeats
What do we gain from Heartbeats?
Not only do we get the new state of a remote system, this also updates the master regarding failures
Any system that fails to respond to a Heartbeat message is assumed dead; this allows the master to update its metadata accordingly
This also cues the master to create more replicas of the lost data
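A minimal sketch of the idea, assuming a master that tracks the last heartbeat time per chunkserver and treats anything silent for too long as dead; the timeout and data structures are invented for illustration, and GFS's actual protocol is richer:

    import time

    HEARTBEAT_TIMEOUT = 30.0  # seconds of silence before a chunkserver is presumed dead (made up)

    last_heartbeat = {}        # chunkserver id -> time of last heartbeat
    chunk_locations = {}       # chunk handle -> set of chunkserver ids holding a replica

    def on_heartbeat(server_id, chunks_reported):
        """Record the heartbeat and refresh which chunks this server claims to hold."""
        last_heartbeat[server_id] = time.time()
        for handle in chunks_reported:
            chunk_locations.setdefault(handle, set()).add(server_id)

    def detect_failures():
        """Mark silent servers dead and return chunks that now need re-replication."""
        now = time.time()
        dead = [s for s, t in last_heartbeat.items() if now - t > HEARTBEAT_TIMEOUT]
        needs_rereplication = set()
        for server_id in dead:
            del last_heartbeat[server_id]
            for handle, holders in chunk_locations.items():
                if server_id in holders:
                    holders.discard(server_id)
                    needs_rereplication.add(handle)
        return needs_rereplication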
28 Client
Client translates an offset in the file into a chunk index within the file
Sends the master a request with the file name and chunk index
Master replies with the chunk handle and the locations of the replicas
Client caches this info using the file name and chunk index as the key
Client sends a request to one of the replicas (the closest)
Further reads of the same chunk require no master interaction
Can ask for multiple chunks in the same request
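With fixed 64 MB chunks, the offset-to-chunk translation and the client-side cache are simple; a sketch, where the master lookup is a stand-in for the real RPC:

    CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB fixed chunk size

    # (file name, chunk index) -> (chunk handle, replica locations), filled lazily
    location_cache = {}

    def chunk_index(offset):
        """Translate a byte offset within a file into a chunk index."""
        return offset // CHUNK_SIZE

    def locate_chunk(filename, offset, ask_master):
        """Return (handle, replicas), contacting the master only on a cache miss.

        ask_master is a stand-in for the real RPC to the GFS master."""
        key = (filename, chunk_index(offset))
        if key not in location_cache:
            location_cache[key] = ask_master(*key)
        return location_cache[key]

    # Example: a fake master that always answers with one handle and three replicas.
    fake_master = lambda name, idx: (f"handle-{name}-{idx}", ["cs1", "cs2", "cs3"])
    print(locate_chunk("/logs/web-00", 200 * 1024 * 1024, fake_master))  # chunk index 3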
29 Master Operations
Master executes all namespace operations
Manages chunk replicas
Makes placement decisions
Creates new chunks (and replicas)
Coordinates various system-wide activities to keep chunks fully replicated
Balances load
Reclaims unused storage
30 Do you see any problems?
Do you question any design decisions?
31 Master - Justification
Single master:
Simplifies design
Placement and replication decisions are made with global knowledge
Doesn't handle reads/writes, so it is not a bottleneck
Clients ask the master only which chunkservers to contact
32 Chunk Size - Justification
64 MB, larger than typical
Replica stored as a plain Linux file, extended as needed
Lazy space allocation
Reduces interaction of client with master: reads/writes on the same chunk need only 1 request to the master
Mostly reads/writes of large sequential files
Likely to perform many operations on a given chunk (can keep a persistent TCP connection)
Reduces size of metadata stored on master
33 Chunk problems
But – if a file is small, its single chunk may become a hot spot
Can fix this with more replication and by staggering batch application start times
35 Metadata
3 types:
File and chunk namespaces
Mapping from files to chunks
Location of each chunk's replicas
All metadata is kept in memory
The first two types are stored in logs for persistence (on the master's local disk and replicated remotely)
36 Metadata
Instead of keeping persistent track of chunk location info, the master polls: each chunkserver reports which replicas it holds
Master still controls all chunk placement
Why? Disks may go bad, chunkservers may fail or report errors, etc. – the chunkserver has the final word on which chunks it holds
37 Metadata - Justification
In memory – fast
Master periodically scans its state:
Garbage collection
Re-replication if a chunkserver fails
Migration to balance load
Master maintains < 64 B of metadata for each 64 MB chunk
Each file namespace entry is < 64 B
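The in-memory approach works because the metadata is tiny relative to the data it describes; a quick back-of-the-envelope check (the 1 PB of file data is an assumed figure, not from the slides):

    # Rough estimate: metadata needed to describe 1 PB of file data (assumed figure).
    data_bytes = 1 * 1024**5          # 1 PB
    chunk_size = 64 * 1024**2         # 64 MB per chunk
    metadata_per_chunk = 64           # < 64 B of chunk metadata per chunk

    chunks = data_bytes // chunk_size
    metadata_bytes = chunks * metadata_per_chunk
    print(f"{chunks:,} chunks -> about {metadata_bytes / 1024**3:.1f} GB of chunk metadata")
    # ~16.8 million chunks -> roughly 1 GB, which fits comfortably in master RAM.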
38 Chunk Size (again) - Justification
64 MB is large – think of the typical file size
Why large files (and large chunks)? METADATA!
Every file in the system adds to the total metadata overhead the system must store
More individual pieces of data mean more data about the data is needed
39 Operation Log
Historical record of critical metadata changes
Provides a logical time line of concurrent operations
Log replicated on remote machines
Flush each record to disk locally and remotely before responding
Log kept small – checkpoint when it grows beyond a size threshold
Checkpoint is in a B-tree form
New checkpoint built without delaying incoming mutations (takes about 1 min for 2 M files)
Only the latest checkpoint and subsequent logs are kept
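A toy sketch of the pattern: append a record, treat it as durable only after both the local and a (simulated) remote copy are flushed, and checkpoint once the log grows past a threshold. The file paths and threshold are invented for illustration:

    import json, os

    LOG_PATH = "oplog.local"          # illustrative paths, not GFS's real layout
    REMOTE_LOG_PATH = "oplog.remote"  # stands in for a replica on another machine
    CHECKPOINT_PATH = "checkpoint.json"
    CHECKPOINT_THRESHOLD = 1000       # records before we checkpoint (made up)

    state = {}          # the metadata the log describes, e.g. file -> chunk handles
    records_since_ckpt = 0

    def log_mutation(record):
        """Apply a mutation only after the log record is flushed locally and 'remotely'."""
        global records_since_ckpt
        line = json.dumps(record) + "\n"
        for path in (LOG_PATH, REMOTE_LOG_PATH):
            with open(path, "a") as f:
                f.write(line)
                f.flush()
                os.fsync(f.fileno())
        state[record["file"]] = record["chunks"]   # now safe to apply in memory
        records_since_ckpt += 1
        if records_since_ckpt >= CHECKPOINT_THRESHOLD:
            checkpoint()

    def checkpoint():
        """Dump the in-memory state so older log records can be discarded."""
        global records_since_ckpt
        with open(CHECKPOINT_PATH, "w") as f:
            json.dump(state, f)
        open(LOG_PATH, "w").close()       # truncate: only the latest checkpoint + new log kept
        open(REMOTE_LOG_PATH, "w").close()
        records_since_ckpt = 0

    log_mutation({"file": "/logs/web-00", "chunks": ["handle-1"]})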
40 Snapshot
Snapshot makes a copy of a file
Used to create checkpoints or branch copies of huge data sets
Master first revokes leases on the chunks involved
Newly created snapshot points to the same chunks as the source file
After the snapshot, when a client wants to write, it sends a request to the master to find the lease holder
Master then gives the lease to a new copy of the chunk
41 Shadow Master
Master replication:
Master is replicated for reliability
Shadows are not mirrors, so they may lag the primary slightly (fractions of a second)
A shadow master reads a replica of the operation log and applies the same sequence of changes to its data structures as the primary does
42 Shadow Master
If the master fails:
A shadow can be started almost instantly
Provides read-only access to the file system even when the primary master is down
If the master's machine or disk fails, monitoring infrastructure outside GFS starts a new master process with the replicated log
Clients use only the canonical name of the master, so they are redirected transparently
43 Creation, Re-replication, Rebalancing
When the master creates a chunk:
Place replicas on chunkservers with below-average disk utilization
Limit the number of recent creates per chunkserver (new chunks may be hot)
Spread replicas across racks
Re-replicate:
When the number of replicas falls below the goal (chunkserver unavailable, replica corrupted, etc.)
Replicate based on priority (chunks with the fewest replicas first)
Master limits the number of active clone operations
44 Creation, Re-replication, Rebalancing
Rebalance:
Periodically moves replicas for better disk space and load balancing
Gradually fills up a new chunkserver
Removes replicas from chunkservers with below-average free space
45 Leases and Mutation Order
Chunk lease:
One replica is chosen as the primary and given the lease
The primary picks a serial order for all mutations to the chunk
Lease expires after 60 s
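A small sketch of the mechanism, assuming an in-memory master that grants one 60-second lease per chunk and a primary that stamps mutations with increasing serial numbers; the class names and fields are invented for illustration:

    import time
    import itertools

    LEASE_SECONDS = 60

    class ChunkLease:
        """Tracks which replica is primary for a chunk and when its lease expires."""
        def __init__(self, primary):
            self.primary = primary
            self.expires_at = time.time() + LEASE_SECONDS

        def valid(self):
            return time.time() < self.expires_at

    leases = {}  # chunk handle -> ChunkLease

    def get_primary(handle, replicas):
        """Grant a lease to one replica if none is currently valid, then return the primary."""
        lease = leases.get(handle)
        if lease is None or not lease.valid():
            leases[handle] = ChunkLease(primary=replicas[0])  # pick any replica, e.g. the first
        return leases[handle].primary

    class Primary:
        """The primary assigns consecutive serial numbers so all replicas apply mutations in the same order."""
        def __init__(self):
            self.serial = itertools.count(1)

        def order(self, mutation):
            return (next(self.serial), mutation)

    print(get_primary("handle-1", ["cs1", "cs2", "cs3"]))
    p = Primary()
    print(p.order("append A"), p.order("append B"))  # (1, 'append A') (2, 'append B')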
46 Consistency Model
Why append only?
Overwriting existing data in place is not safe: we cannot read data while it is being modified
Instead the system implements a customized atomic record append that allows concurrent read/write, write/write, and read/write/write operations
48 Consistency Model
File namespace mutations (updates) are atomic
A file region is consistent if all clients see the same data
A region is defined after a file data mutation if all clients see the write in its entirety, with no interference from other writes
Undefined but consistent – after concurrent successful mutations: all clients see the same data, but it may not reflect what any one mutation has written (fragments of several updates)
Inconsistent – after a failed mutation (clients may retry)
49 Consistency
The relaxed consistency model can be accommodated by relying on appends instead of overwrites
Appending is more efficient and more resilient to failure than random writes
Checkpointing allows writers to restart incrementally and keeps readers from processing data that has been successfully written but is still incomplete
50 Namespace Management and Locking
Master operations can take time, e.g. revoking leases
To allow multiple operations at the same time, locks over regions of the namespace are used for serialization
GFS does not have a per-directory data structure listing all files
Instead: a lookup table mapping full pathnames to metadata
Each name in the tree has a read/write lock
To access /d1/d2/.../dn/leaf: read locks on /d1, /d1/d2, ..., /d1/d2/.../dn, and a write lock on /d1/d2/.../dn/leaf
51 Locking
Allows concurrent mutations in the same directory
A read lock on the directory name prevents the directory from being deleted, renamed, or snapshotted
Write locks on file names serialize attempts to create a file with the same name twice
Lock objects are allocated lazily and deleted when not in use
Locks are acquired in a consistent total order (by level in the tree), which prevents deadlocks
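A minimal sketch of the idea, assuming one lock per full pathname acquired in a fixed order (ancestors before the leaf, shallower paths first). A real implementation would use reader/writer locks as the slides describe; plain locks keep this sketch short, and the helper names are invented:

    import threading

    # One lock per full pathname, allocated lazily.
    path_locks = {}
    table_guard = threading.Lock()

    def locks_for(path):
        """Return the locks needed to operate on `path`: every ancestor plus the path itself,
        ordered shallowest-first so all threads acquire them in the same order (no deadlocks)."""
        parts = path.strip("/").split("/")
        prefixes = ["/" + "/".join(parts[:i + 1]) for i in range(len(parts))]
        with table_guard:
            return [path_locks.setdefault(p, threading.Lock()) for p in prefixes]

    def with_path_locked(path, operation):
        """Acquire locks on /d1, /d1/d2, ..., then the leaf, run the operation, release in reverse."""
        locks = locks_for(path)
        for lock in locks:
            lock.acquire()
        try:
            return operation()
        finally:
            for lock in reversed(locks):
                lock.release()

    # Example: creating /home/user/file takes locks on /home, /home/user, /home/user/file.
    with_path_locked("/home/user/file", lambda: print("create file while namespace is locked"))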
52 Fault Tolerance
Fast recovery:
Master and chunkservers restore their state and start in seconds regardless of how they terminated (abnormally or normally)
Chunk replication
53 Data Integrity
Checksumming is used to detect corruption of stored data
Impractical to compare replicas across chunkservers to detect corruption
Divergent replicas may be legal
Each chunk is divided into 64 KB blocks, each with a 32-bit checksum
Checksums are kept in memory and stored persistently with logging
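A sketch of the per-block checksumming, using zlib's CRC-32 as the 32-bit checksum; the slides don't name the exact algorithm, so CRC-32 here is an assumption for illustration:

    import zlib

    BLOCK_SIZE = 64 * 1024  # 64 KB blocks, each protected by a 32-bit checksum

    def block_checksums(chunk_bytes):
        """Compute one 32-bit checksum per 64 KB block of a chunk."""
        return [zlib.crc32(chunk_bytes[i:i + BLOCK_SIZE])
                for i in range(0, len(chunk_bytes), BLOCK_SIZE)]

    def verify_range(chunk_bytes, checksums, offset, length):
        """Before serving a read, verify only the blocks the read actually touches."""
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for i in range(first, last + 1):
            block = chunk_bytes[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
            if zlib.crc32(block) != checksums[i]:
                raise IOError(f"checksum mismatch in block {i}: report to master, read another replica")

    data = bytes(300 * 1024)                 # a 300 KB chunk of zeros for the example
    sums = block_checksums(data)
    verify_range(data, sums, offset=70 * 1024, length=10 * 1024)  # touches only block 1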
54 Data Integrity
Before a read, the chunkserver verifies the checksum
If there is a mismatch, it returns an error to the requestor and reports the problem to the master
The requestor reads from another replica; the master clones the chunk from a good replica and deletes the bad one
Most reads span multiple blocks, so only a small amount of extra data has to be checksummed
Checksum lookups are done without I/O (checksums are in memory)
Checksum computation is optimized for appends
If the last partial block was already corrupted, the corruption will be detected on the next read
During idle periods, chunkservers scan and verify inactive chunks
55 Garbage Collection
Lazy at both the file and chunk levels
When a file is deleted, it is renamed to a hidden name that includes the deletion timestamp
During the regular scan of the file namespace, hidden files are removed if they have existed for more than 3 days
Until then the file can still be undeleted
When the hidden file is removed, its in-memory metadata is erased
Orphaned chunks are identified and erased
With the HeartBeat message, chunkserver and master exchange info about chunks; the master tells the chunkserver which files it can delete, and the chunkserver is then free to delete them
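A toy sketch of the file-level part: deletion is just a rename to a hidden, timestamped name, and a periodic scan removes anything hidden for more than three days. The hidden-name format is invented here; the slides don't specify GFS's actual convention:

    import time

    GRACE_PERIOD = 3 * 24 * 3600   # hidden files survive 3 days before real removal

    namespace = {"/logs/web-00": ["handle-1", "handle-2"]}   # file -> chunk handles

    def delete_file(path):
        """'Delete' by renaming to a hidden, timestamped name; the data stays recoverable."""
        hidden = f"/.deleted{path}.{int(time.time())}"       # invented hidden-name format
        namespace[hidden] = namespace.pop(path)
        return hidden

    def namespace_scan(now=None):
        """Regular scan of the namespace: erase hidden files older than the grace period."""
        now = now or time.time()
        for name in list(namespace):
            if name.startswith("/.deleted"):
                deleted_at = int(name.rsplit(".", 1)[1])
                if now - deleted_at > GRACE_PERIOD:
                    del namespace[name]   # chunks become orphaned and are GC'd via HeartBeats

    hidden = delete_file("/logs/web-00")
    namespace_scan(now=time.time() + 4 * 24 * 3600)   # pretend 4 days have passed
    print(hidden in namespace)                         # False: the file is gone for good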
56 Garbage Collection
Identifying garbage is easy in GFS:
All live chunks appear in the file-to-chunk mappings of the master
All chunk replicas are Linux files under designated directories on each chunkserver
Everything else is garbage
57 Conclusions
GFS has the qualities essential for large-scale data processing on commodity hardware:
Component failures are the norm rather than the exception
Optimized for huge files that are appended to
Fault tolerance through constant monitoring, replication, and fast, automatic recovery
High aggregate throughput
File system control is kept separate from data transfer
Large file size
58 GFS In the Wild
Google currently has multiple GFS clusters deployed for different purposes
The largest currently implemented systems have over 1000 storage nodes and over 300 TB of disk storage
These clusters are heavily accessed by hundreds of clients on distinct machines
Has Google made any adjustments?
61 OpenStack
3 components in the architecture:
Cloud Controller – Nova (compute): the cloud computing fabric controller; written in Python, uses external libraries
Storage Controller – Swift: analogous to AWS S3; can store billions of objects across nodes; built-in redundancy
Image Controller – Glance: manages/stores VM images; can use the local file system, the OpenStack Object Store, or S3