
1 Problem-solving on large-scale clusters: theory and applications Lecture 4: GFS & Course Wrap-up

2 Today’s Outline
File Systems Overview
– Uses
– Common distributed file systems
GFS
– Goals
– System design
– Consistency model
– Performance
Introduction to Distributed Systems
– MapReduce as a distributed system

3 File System Uses
Uses:
– Persistence of data
– Inter-process communication
– Shared namespace of objects
Requirements:
– Lots of files
– Random access
– Permissions
– Consistency

4 Common Distributed FSs
Traditional:
– NFS
– SMB
– AFS
What about:
– Kazaa
– BitTorrent
– Kerberos

5 GFS Goals
Common usage and environment patterns:
– Regular component failure
– Few, large, multi-GB files
– Reads are streaming
– Writes are appends
– Control over the client implementation
What are the consequences for…
– data caching?
– the latency/bandwidth balance?
– consistency?
– control of data flow?

6 GFS System Design
System attributes and components:
– A single master controls the file namespace
– Data is broken into 64 MB “chunks”
– Chunks are replicated across many “chunk servers”
– Clients talk directly to chunk servers for data
From GFS paper
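To make the division of labor concrete, here is a minimal sketch of the read path under that design: the client asks the master only for metadata (a chunk handle and replica locations), then fetches bytes directly from a chunk server. The classes and method names (`Master.lookup`, `ChunkServer.read_chunk`) are hypothetical stand-ins, not GFS's actual API.

```python
CHUNK_SIZE = 64 * 2**20  # 64 MB chunks, as on the slide

class Master:
    """Holds only metadata: filename -> list of (chunk handle, replica addresses)."""
    def __init__(self):
        self.namespace = {}  # e.g. {"/logs/web.0": [("h1", ["cs1", "cs2", "cs3"])]}

    def lookup(self, filename, chunk_index):
        handle, replicas = self.namespace[filename][chunk_index]
        return handle, replicas  # no file data ever flows through the master

class ChunkServer:
    """Stores chunk contents keyed by chunk handle."""
    def __init__(self):
        self.chunks = {}  # handle -> bytearray

    def read_chunk(self, handle, offset, length):
        return bytes(self.chunks[handle][offset:offset + length])

def read(master, chunk_servers, filename, offset, length):
    """Client-side read: metadata from the master, bytes from a chunk server."""
    handle, replicas = master.lookup(filename, offset // CHUNK_SIZE)
    server = chunk_servers[replicas[0]]    # pick any replica, e.g. the nearest
    return server.read_chunk(handle, offset % CHUNK_SIZE, length)
```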

7 GFS Write Flow
Control flow:
1. Client requests the chunk's servers from the master
2. Master responds with the primary and secondary chunk servers
3. Client starts pipelining data to the chunk servers
4. Once the data is pushed, client asks the primary to commit the write
5. Primary chunk server signals the write ordering to the replica servers
6. Chunk servers respond with successful commits
7. Client is notified of the good write
From GFS paper
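The same control flow compressed into code, separating the data push (step 3) from the commit through the primary (steps 4–6). All method names here are illustrative assumptions, not the real GFS interface:

```python
def write(master, filename, chunk_index, data):
    """Client-side write following the control flow above (hypothetical API)."""
    # 1-2. Ask the master which replica holds the lease (the primary) and
    #      which replicas are secondaries.
    primary, secondaries = master.get_lease_holders(filename, chunk_index)

    # 3. Pipeline the data to every replica; it is buffered, not yet applied.
    for server in [primary] + secondaries:
        server.push_data(data)

    # 4-5. Ask the primary to commit; the primary assigns a serial order and
    #      forwards that order to every secondary.
    order = primary.assign_order(data)
    acks = [s.apply(order) for s in secondaries]

    # 6-7. The write is reported good only if every replica applied it.
    if primary.apply(order) and all(acks):
        return "ok"
    return "retry"  # a failed attempt can leave replicas disagreeing: inconsistent
```

Note the last line: a failed or partially applied write leaves the replicas disagreeing, which is exactly the "inconsistent" state the next slides define.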

8 Consequences of the Design
Questions:
1. Where are the bottlenecks?
2. What if a replica fails?
3. What if the primary fails?
4. What if the master fails?
5. Why do writes need to be ordered?
How do you work around these issues?
From GFS paper

9 GFS Consistency: Terms
Two new terms:
– Consistent: all chunk servers have the same data
– Defined: the result of the “last write” is fully available
A defined chunk is also a consistent chunk.
Questions:
– Is data corrupt if it is inconsistent?
– Is data corrupt if it is undefined?
– Can applications use data in either state?
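A toy model of the two terms, assuming we can compare the bytes each replica holds for a region; the helper functions are hypothetical:

```python
def is_consistent(replicas):
    """Consistent: every replica holds identical bytes for the region."""
    return len(set(replicas)) == 1

def is_defined(replicas, last_write):
    """Defined: consistent AND the region is exactly what the last write wrote."""
    return is_consistent(replicas) and replicas[0] == last_write

# Concurrent random writes can interleave fragments from several clients:
# every replica agrees (consistent), yet no single write survives intact.
replicas = [b"AABB", b"AABB", b"AABB"]   # fragments of writes A and B mingled
print(is_consistent(replicas))           # True
print(is_defined(replicas, b"AAAA"))     # False: consistent but undefined
```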

10 GFS Consistency: Mutations
Consistency for each type of write:
– Single random write: consistent and defined
– Single append: consistent and defined
– Concurrent random write: consistent
– Aborted write: inconsistent
– Concurrent append: the final location is defined

11 GFS Single Master Handling
A single master = bottleneck = single point of failure (SPF). Mitigations:
1. Master persists changes to multiple replicas
2. Work can be delegated to “shadow masters”
3. Naming is done via DNS (easy failover)
How does this work:
1. If the network is partitioned?
2. Across multiple data centers?
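A minimal sketch of the DNS-based failover idea: clients reach the master through a stable name, so promoting a replica only means repointing one record. The `NameService` class is a stand-in for DNS, and the addresses are made up:

```python
class NameService:
    """Stand-in for DNS: maps a stable name to the current master's address."""
    def __init__(self, records):
        self.records = dict(records)

    def resolve(self, name):
        return self.records[name]

    def repoint(self, name, new_address):
        # Failover = updating one record; clients keep using the same name.
        self.records[name] = new_address

dns = NameService({"gfs-master": "10.0.0.1"})
print(dns.resolve("gfs-master"))       # 10.0.0.1: the current master
dns.repoint("gfs-master", "10.0.0.2")  # a replica is promoted after a crash
print(dns.resolve("gfs-master"))       # 10.0.0.2: clients fail over transparently
```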

12 GFS Performance
Some performance numbers:
– 1 MB replicates in about 80 ms
– 342-node cluster:
  72 TB available, 55 TB used
  735 K files, 22 K dead files, 992 K chunks
  13 GB of chunk server metadata
  48 MB of master metadata
  Read rate: ~580 MB/s
  Write rate: ~2 MB/s
  Master ops: ~320 ops/s
  100 Mbit full-duplex links with a Gigabit backbone
Data from GFS paper
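A quick sanity check on the first figure: 1 MB in ~80 ms is about 12.5 MB/s, which is just what a 100 Mbit/s link delivers, suggesting that pipelined replication is bounded by each machine's link rather than by the number of replicas:

```python
# Back-of-the-envelope check on the replication figure above.
size_mb = 1.0            # 1 MB payload
time_s = 0.080           # replicates in ~80 ms
print(size_mb / time_s)  # 12.5 MB/s of replication throughput

link_mbit = 100          # each machine has a 100 Mbit full-duplex NIC
print(link_mbit / 8)     # 12.5 MB/s: replication saturates a single link
```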

13 GFS Performance, Cont’d
More performance numbers:
– 227-node cluster:
  180 TB available, 155 TB used
  737 K files, 232 K dead files, 1550 K chunks
  21 GB of chunk server metadata
  60 MB of master metadata
  Read rate: ~380 MB/s
  Write rate: ~100 MB/s
  Master ops: ~500 ops/s
Data from GFS paper

14 And now … An overview of the concepts we’ve been alluding to all quarter: parallel and distributed systems

15 Parallel vs. Distributed Computing
Parallel computing
– Dividing a problem into identical tasks to be executed at the same time on multiple machines or threads
Distributed computing
– Dividing a problem into (possibly identical) tasks to be executed on multiple machines or threads, but generally on machines separated by a network
Parallel computing is often a limited form of distributed computing.
Is the MapReduce programming model parallel or distributed?
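To ground the distinction, here is the parallel case only: identical, dependency-free tasks fanned out across one machine's cores with Python's standard multiprocessing pool. The word-count task is an invented example:

```python
from multiprocessing import Pool

def word_count(line):
    """One identical task: count the words in one line (no data dependencies)."""
    return len(line.split())

if __name__ == "__main__":
    lines = ["the quick brown fox", "jumps over", "the lazy dog"] * 1000
    with Pool() as pool:                      # one worker per core, same machine
        counts = pool.map(word_count, lines)  # parallel, but no network involved
    print(sum(counts))
    # A distributed version would ship these same tasks to workers across a
    # network, adding data movement and failure handling (as MapReduce does).
```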

16 Requirements: Parallel Computing
Requirement: minimal (to no) data dependencies!
– Also nice: a static data source
Nice to have: minimal communication overhead
– Replicating state
– Coordinating / scheduling tasks (e.g., administrative overhead)
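A small illustration of the data-dependency requirement: an elementwise operation splits into independent tasks, while a running sum chains each step to the previous one, so it cannot be naively divided among workers:

```python
data = [3, 1, 4, 1, 5, 9]

# No data dependencies: each output depends only on its own input,
# so the work splits cleanly across machines or threads.
squares = [x * x for x in data]

# A data dependency: each element needs the previous result, so a naive
# split across workers would leave each one waiting on its neighbor.
running, total = [], 0
for x in data:
    total += x
    running.append(total)  # running[i] depends on running[i-1]

print(squares)   # [9, 1, 16, 1, 25, 81]
print(running)   # [3, 4, 8, 9, 14, 23]
```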

17 Requirements: Distributed System
See worksheet / activity

18 Distributed System Design (1 of 2)
From studying large (but not necessarily distributed) systems, we know that distributed systems trade off one guarantee for another
– Ergo, you need to know what you’re designing for, e.g., your use cases
From studying MapReduce, we know that successful distributed systems minimize data dependencies and administrative communication

19 Distributed System Design (2 of 2)
From MapReduce & GFS, we know that a distributed system must assume that components fail
– The network may not be reliable (see the retry sketch below)
  – Latency can be high
  – Bandwidth is not infinite
  – Data may get corrupted
– Attacks from malicious users
– Hardware failures: motherboards, disks, and memory chips will fail
– … etc …
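The source gives no code for coping with these failures, but a common minimal response is to treat every remote call as fallible and wrap it in bounded retries with backoff. This sketch assumes that pattern; `fetch_chunk` and its failure mode are invented for illustration:

```python
import random
import time

def call_with_retries(remote_call, attempts=3, base_delay=0.1):
    """Wrap an unreliable remote call in bounded retries with backoff."""
    for attempt in range(attempts):
        try:
            return remote_call()
        except (TimeoutError, ConnectionError):
            if attempt == attempts - 1:
                raise                 # retry budget exhausted: surface the failure
            # Exponential backoff with jitter avoids synchronized retry storms.
            time.sleep(base_delay * (2 ** attempt) * random.random())

def fetch_chunk():
    """Invented stand-in for a chunk-server read that sometimes times out."""
    if random.random() < 0.5:
        raise TimeoutError("chunk server did not respond")
    return b"chunk data"

try:
    print(call_with_retries(fetch_chunk))
except TimeoutError as err:
    print("gave up after retries:", err)
```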

