Distributed File Systems


1 Distributed File Systems
CMPT 431

2 Outline
Overview of a distributed file system
DFS design considerations
DFS usage patterns that drive design choices
Case studies: AFS, NFS

3 A Distributed File System
(Diagram: clients access files on a server over the network; file sharing and replication.)

4 How To Design A DFS?
Some design considerations:
Providing location transparency
Stateful or stateless server?
Failure handling
Caching
Cache consistency

5 Location Transparency
Location transparency: the client is unaware of the server location
How do you name a remote file? Is the name structure different from that of a local file? Do you include the name of the server in the file name? Existing file systems have used both designs
Pros of location transparency:
The client need not be changed if the server name changes
Facilitates replication of the file service
Cons of location transparency:
The client cannot specify the server of its choice (e.g., to recover from failures or to get better performance)

6 Failures
Server crash (fail-stop failures):
All data that is in the server's memory can be lost
The server may write to its local disk and send an acknowledgement to the client, yet the data might not have made it to disk because it was still in the OS buffer or the disk buffer
Message loss (omission failures):
Usually handled by the underlying communication protocol (e.g., RPC)

7 Stateless vs. Stateful Server
Stateless server: loses its state in the event of a crash
To the client, a crashed stateless server just looks like a slow server
Simple server design; quick recovery after reboot
The client must maintain state: if the server crashed, the client does not know whether the operation succeeded, so it must retry
File operations must be idempotent
Stateful server: remembers its state and recovers it
Complex server design, but simple client design
Longer server recovery time
Permits non-idempotent operations (e.g., file append)

8 Caching
File accesses exhibit temporal locality: if a file has been accessed, it will likely be accessed again soon
Caching: keeping a recently accessed copy of a file (or part of a file) close to where it is accessed
Caching can be done at various points in a DFS (a minimal client-side sketch follows below):
Caching in the client's memory
Caching on the client's disk
Caching in the server's memory
Caching inside the network (proxy caches)
Caches anticipate future accesses by doing a read-ahead
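To make client-side caching concrete, here is a minimal sketch (not from the slides) of an in-memory LRU cache of whole files; ClientFileCache and the fetch_from_server callable are hypothetical stand-ins for whatever RPC mechanism the DFS client actually uses.

```python
from collections import OrderedDict

class ClientFileCache:
    """Tiny LRU cache of recently read files, kept in client memory."""

    def __init__(self, capacity, fetch_from_server):
        self.capacity = capacity
        self.fetch_from_server = fetch_from_server  # callable: path -> bytes
        self.entries = OrderedDict()                # path -> file contents

    def read(self, path):
        if path in self.entries:
            # Cache hit: temporal locality pays off, no server round trip.
            self.entries.move_to_end(path)
            return self.entries[path]
        # Cache miss: fetch from the server and remember the copy.
        data = self.fetch_from_server(path)
        self.entries[path] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)        # evict least recently used
        return data
```

Caching on the client's disk or inside the network would follow the same pattern, only with a different backing store.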

9 Caching and Consistency
Consistency is about ensuring that a cached copy of the file is up to date
Caching makes consistency an issue: the file might be modified in the cache but not at its true source
When a client modifies the file in its cache, the client's copy becomes different from the server's copy
The DFS should maintain consistency to prevent data loss: if the client crashes before propagating its changes, the data is lost
The DFS should also maintain consistency to facilitate file sharing: if clients A and B share a file and A modifies the file in its local cache, B sees an outdated copy of the file

10 Consistency Protocols
A consistency protocol determines when modified (dirty) data is propagated to its source (both policies are sketched below)
Write-through: instant propagation
The client propagates dirty data to the server as soon as the data is written
Reduces the risk of data loss on a crash
May result in a large number of protocol messages
Write-back: delayed (lazy) propagation
The client propagates dirty file data when the file is closed or after a delay (e.g., 30 seconds)
Higher risk of data loss
Smaller number of protocol messages
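Here is a rough sketch, in Python, of the difference between the two policies for a single cached file; the class name and the send_to_server callable are hypothetical, and a real write-back cache would coalesce dirty blocks rather than schedule a timer per write.

```python
import threading

class CachedFile:
    """Sketch of write-through vs. write-back propagation for one cached file."""

    def __init__(self, path, send_to_server, policy="write-back", delay=30.0):
        self.path = path
        self.send_to_server = send_to_server   # callable: (path, data) -> None
        self.policy = policy
        self.delay = delay                     # write-back delay in seconds
        self.data = b""
        self.dirty = False

    def write(self, data):
        self.data = data
        self.dirty = True
        if self.policy == "write-through":
            self._flush()                      # propagate immediately: safer, but chattier
        else:
            # Write-back: propagate later (here after a fixed delay, or on close).
            threading.Timer(self.delay, self._flush).start()

    def close(self):
        self._flush()

    def _flush(self):
        if self.dirty:
            self.send_to_server(self.path, self.data)
            self.dirty = False
```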

11 Consistency Protocols and File Sharing
When files are shared among clients, one client may modify the data, making other cached copies inconsistent
Cached data must therefore be validated
Two approaches: client validation and server validation
Client validation: the client contacts the server to check whether its cached copy is still valid
Server validation: the server notifies the client when the cached data becomes stale

12 Granularity of Data Access
Block granularity: the file is transferred block by block
If you use only a small part of the file, you do not waste time transferring the whole file
Cache consistency is maintained block by block, so the consistency protocol may generate many messages
File granularity: the file is transferred as a whole
If you have a large file and do not use all of it, you waste resources by transferring the entire file
Cache consistency is maintained at whole-file granularity, so there are fewer consistency messages

13 How to Design a Good DFS?
There are many design considerations and many design choices
Design choices should be driven by usage patterns, i.e., how clients use the file system
So we will look at some common usage patterns and let them drive our design choices
Exercise on DFS design. Pay attention to:
Access transparency
Access granularity
Caching: where is it done, if at all?
Update propagation
Ensuring consistency: server vs. client validation

14 DFS Usage Patterns
Most files are small
Read operations are much more frequent than write operations
Most accesses are sequential; random access is rare
Files are usually read in their entirety

15 DFS Usage Patterns (cont.)
Data in files tends to be overwritten often
Most files are read and written by one user
When users share a file, typically only one user modifies it
Fine-grained read/write sharing is rare (in research/academic environments)
File references show substantial temporal locality

16 Designing a Good DFS
DFS usage patterns: temporal locality; most files are small; little read/write sharing; files are accessed in their entirety
Design considerations: stateless vs. stateful server; caching; cache consistency; file sharing; data transfer granularity
Usage patterns drive the design toward a good DFS design

17 Exercise in DFS Design
Usage pattern #1: Files are accessed in their entirety
Block or whole-file transfer granularity? Whole-file transfer
Usage pattern #2: Most files are small
Block or whole-file granularity? Either whole-file transfer or block transfer (small files will fit in a single block)

18 Exercise in DFS Design (cont.)
Usage pattern #3: Read operations are more frequent than write operations
Client or server caching? Cache data on the client

19 Exercise in DFS Design (cont.)
Usage pattern #4: Data in files tends to be overwritten often
Instant or delayed propagation of writes? Delayed propagation of dirty data
Usage pattern #5: Most files are read/written by one user
No need for instant propagation

20 Exercise in DFS Design (cont.)
Usage pattern #6: Fine-grained read/write sharing is rare
Instant or delayed propagation of writes? No need for instant propagation
Usage pattern #7: Fine-grained read/write sharing is rare
Server or client validation? Client validation

21 Outline
Overview of a distributed file system
DFS design considerations
DFS usage patterns that drive design choices
Case studies: AFS, NFS

22 Introduction to AFS
AFS – the Andrew File System
Design started in 1983 as a joint project between Carnegie Mellon University (CMU) and IBM
The design team included prominent scientists in the area of distributed systems: M. Satyanarayanan, John Howard
Goal of AFS: scalability, support for a large number of users
AFS is widely used all over the world

23 Overview of AFS
Venus – the AFS client component
Vice – the AFS server component
Unix compatibility – access and location transparency

24 Location and Access Transparency in AFS
(Diagrams: location transparency and access transparency in AFS.)

25 Key Features of AFS
Caching of files on the client's local disk
Whole-file transfer
Whole-file caching
Delayed propagation of updates (on file close)
Server validation

26 Whole-File Transfer and Caching
When the client opens a file, the whole file is transferred from the server to the client
Venus writes a copy of the file to the local disk
Cache state survives client crashes and reboots
This reduces the number of server accesses and addresses the scalability requirement

27 Whole-File Transfer and Caching vs. Usage Patterns
How is the decision to transfer and cache whole files driven by usage patterns?
Most files are small
Most files are accessed in their entirety, so it pays to transfer the whole file
File accesses exhibit high temporal locality, so caching makes sense

28 Delayed Update Propagation
Delayed update propagation: the client propagates file updates on close
The server invalidates cached copies of the file on other clients
On opening a file, the client obtains a callback promise from the server – the server promises to tell the client when the file is updated
A callback promise can be:
valid – the cached copy is valid
cancelled – the cached copy is invalid
When a client propagates changes to a file, the server sends callbacks to all clients who hold callback promises on it, and those clients set their promises to "cancelled" (a sketch of this bookkeeping follows below)
The server must remember the clients to whom it promised callbacks – the server is stateful
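A minimal sketch of the server-side bookkeeping just described, assuming a client object with a cancel_callback method; the names here are hypothetical, not the actual Vice interface.

```python
class ViceServer:
    """Sketch of server-side callback-promise bookkeeping."""

    def __init__(self):
        self.files = {}        # path -> file contents
        self.promises = {}     # path -> set of clients holding a callback promise

    def fetch(self, client, path):
        # Hand out the whole file together with a callback promise.
        self.promises.setdefault(path, set()).add(client)
        return self.files.get(path, b"")

    def store(self, writer, path, data):
        # A client closed a modified file: install it, then break the
        # callback promises of every other client caching this file.
        self.files[path] = data
        for client in self.promises.get(path, set()) - {writer}:
            client.cancel_callback(path)   # client marks its copy "cancelled"
        self.promises[path] = {writer}
```

Note how the promises dictionary is exactly the state that makes the server stateful.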

29 Delayed Update Propagation vs. Usage Patterns
Most files are read and written by one client – delayed update propagation is acceptable
Fine-grained data sharing is rare – delayed update propagation is acceptable
Note: a callback-cancellation message can be lost due to a server or client crash
In that case the client will see an inconsistent copy of the data
Such weak consistency semantics were deemed acceptable because fine-grained sharing is infrequent

30 Server Propagation and Scalability
In AFS-1, the client asked the server on every open whether the file was still valid
This generated a lot of client/server traffic and limited the system's scalability
In AFS-2 it was decided to use the callback mechanism instead

31 Client Opens a File
User process: opens the file
UNIX kernel: if the file is shared, passes the request to Venus
Venus: checks whether the file is in the cache. If it is and the callback promise is valid, opens the local copy and returns the file descriptor. If not, requests the file from the server, places it in the local disk cache, opens it, and returns the file descriptor to the user process.
Vice: transfers the file to Venus
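The Venus open path can be sketched as follows, assuming a server object with the fetch interface sketched for slide 28; all names are hypothetical.

```python
class Venus:
    """Sketch of the client-side open path described above."""

    def __init__(self, server, disk_cache):
        self.server = server            # a ViceServer-like object
        self.disk_cache = disk_cache    # path -> file contents cached on local disk
        self.callback_valid = {}        # path -> True (valid) / False (cancelled)

    def open(self, path):
        if path in self.disk_cache and self.callback_valid.get(path, False):
            return self.disk_cache[path]     # use the local copy, no server traffic
        # Cache miss or cancelled promise: fetch the whole file from Vice.
        data = self.server.fetch(self, path)
        self.disk_cache[path] = data
        self.callback_valid[path] = True     # server granted a callback promise
        return data

    def cancel_callback(self, path):
        # Called by the server when another client updates the file.
        self.callback_valid[path] = False
```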

32 Client Reads a File
User process: reads the file
UNIX kernel: performs a normal Unix read operation on the local copy
Venus, Vice: not involved

33 Client Writes a File
User process: writes the file
UNIX kernel: performs a normal Unix write operation on the local copy
Venus, Vice: not involved

34 Client Closes a File
User process: closes the file
UNIX kernel: closes the local copy and notifies Venus that the file has been closed
Venus: if the file has been changed, sends a copy to the Vice server
Vice: replaces the file contents and sends a callback to all other clients holding callback promises on the file

35 Summary: AFS
The goal is scalability: this drove the design
Whole-file caching reduces the number of server accesses
Server validation (based on callbacks) reduces the number of server accesses
Callbacks require keeping state on the server (the server has to remember the list of clients with callback promises)
The client is also stateful in some sense (its cache state survives crashes)

36 Introduction to NFS
NFS – the Network File System
In widespread use in many organizations
Developed by Sun, implemented over Sun RPC; can use either TCP or UDP
Key features:
Access and location transparency (even inside the kernel!)
Block granularity of file access and caching
Delayed update propagation
Client validation
Stateless server
Weak consistency semantics

37 Access Transparency in NFS
VFS is a software layer that redirects file-related system calls to the right file system (such as NFS)
VFS provides access transparency at user level and inside the kernel

38 VFS and vnodes
VFS – the virtual file system, a layer of software in the kernel
VFS contains a set of vnode data structures; vnodes represent files and directories
A vnode contains (among other things):
The name of the file
Function pointers to be called when this file is operated upon
A vnode representing an NFS file is set up to call into the NFS client (sketched below)
The NFS client then redirects file operations to the server
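A Python sketch of the function-pointer idea; real vnodes are C structures inside the kernel, and NFSClientOps and rpc_read are hypothetical names used only for illustration.

```python
class Vnode:
    """Sketch of VFS dispatch: a vnode bundles a file name with the
    operations of whatever file system (local or NFS) backs it."""

    def __init__(self, name, ops):
        self.name = name
        self.ops = ops                 # object providing operations for this file system

    def read(self, offset, length):
        # VFS does not care whether this goes to the local disk or to an NFS server.
        return self.ops.read(self.name, offset, length)


class NFSClientOps:
    """Operations installed in vnodes that represent remote NFS files."""

    def __init__(self, server):
        self.server = server

    def read(self, name, offset, length):
        # Forward the operation to the NFS server over RPC.
        return self.server.rpc_read(name, offset, length)
```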

39 Location Transparency in NFS
The client sees a file directory structure that looks the same as for a local file system
A remote directory is mounted at a mount point
A mount point is the name of the local directory under which the remote directory appears
Mounting sets up the directory's vnode to call into the NFS client on operations on that directory

40 Hard vs. Soft Mounts
Hard mount: when the server crashes, the client blocks, waiting for the server to start responding again
Soft mount: the NFS client times out and returns an error to the application
Most applications are not written to handle file-access errors
As a result, NFS systems usually use hard mounts

41 Block Granularity of File Access and Caching
NFS uses the VFS cache
VFS caching is at block granularity, so NFS files are accessed and cached block by block
A typical block size is 8 KB
Files are cached on the client only in memory, not on disk
Unlike in AFS, cache state does not survive client crashes

42 Block Granularity? But Most Files Are Accessed in Their Entirety!
NFS does pre-fetching to anticipate future accesses from the client (recall: most accesses are sequential)
Pre-fetching starts modestly (pre-fetch 1 or 2 blocks at a time)
It becomes more aggressive if the client shows a sequential access pattern, i.e., if pre-fetched data is actually used (a sketch follows below)
So why doesn't NFS do whole-file caching, like AFS?
Block-granularity caching allows for access transparency within the kernel and unified VFS cache management
Unified VFS cache management makes more efficient use of memory
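A sketch of the adaptive pre-fetching idea, assuming a fetch_block callable that retrieves one block from the server; the names and the window-doubling policy are illustrative, not the actual NFS heuristic.

```python
class ReadAhead:
    """Sketch of adaptive pre-fetching: start cautiously, grow the
    pre-fetch window while the client keeps reading sequentially."""

    def __init__(self, fetch_block, max_window=8):
        self.fetch_block = fetch_block     # callable: block number -> data
        self.max_window = max_window
        self.window = 1                    # start by pre-fetching a single block
        self.next_expected = 0

    def read(self, block_no):
        data = self.fetch_block(block_no)
        if block_no == self.next_expected:
            # Sequential access confirmed: pre-fetch further ahead next time.
            for b in range(block_no + 1, block_no + 1 + self.window):
                self.fetch_block(b)
            self.window = min(self.window * 2, self.max_window)
        else:
            self.window = 1                # random access: back off
        self.next_expected = block_no + 1
        return data
```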

43 Client Caching in NFS
Delayed propagation: updates to cached file blocks are not propagated to the server immediately; they are propagated when:
A file is closed
An application calls "sync"
The flush daemon writes the dirty data back to the server
Therefore, clients accessing the same file may see inconsistent copies
To maintain consistency, NFS uses client polling

44 Consistency and Client Polling
Polling is based on two timestamps:
Tc – the time when the cache entry was last validated
Tm – the time when the data was last modified on the server
The client polls the server at interval t (between 3 and 30 seconds)
A cache entry is valid at time T if (see the sketch below):
T - Tc < t (fewer than t seconds have elapsed since the last validation), or
Tm_client = Tm_server (the data has not been modified on the server since the client fetched it)
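The validity check can be sketched as follows; get_server_tm stands in for the attribute request the client would send to the server, and updating Tc after a successful poll is omitted for brevity.

```python
import time

def cache_entry_valid(t_c, t_m_client, poll_interval, get_server_tm):
    """Sketch of the validity check above: trust a recently validated entry,
    otherwise compare modification timestamps with the server."""
    now = time.time()
    if now - t_c < poll_interval:      # fewer than t seconds since last validation
        return True
    # Entry is stale by the clock: ask the server for its Tm and compare.
    return t_m_client == get_server_tm()
```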

45 Reducing Polling Overhead
Client polling can result in significant overhead (this is why it was not used in AFS)
Measures to reduce polling overhead in NFS:
When a client receives a new Tm value from the server, it applies it to all blocks of the same file
Tm for file F is piggybacked on the server's responses to all operations on file F
The polling interval t is set adaptively for each file, depending on how frequently that file is updated
The polling interval t for directories is larger

46 Server Caching and Failure Modes
In addition to client caching, there is caching on the server
The NFS server caches files in its local VFS cache
Writes to the local file system use delayed propagation:
Data is written to the VFS cache
The flush daemon flushes dirty data to disk every 30 seconds or so
How should the NFS client and server interoperate in the face of server caching?

47 NFS Client/Server Interaction in the Presence of Server Caching
Option #1: the client writes the data to the server; the server sends an acknowledgement after writing the data to its local file cache, not to disk
Advantage: fast response to the client
Disadvantage: if the server crashes before the data reaches disk, the data is lost
Option #2: the client writes the data to the server; the server syncs the data to disk, then responds to the client
Advantage: the data is not lost if the server crashes
Disadvantage: each write operation takes longer to complete, which can limit the server's scalability

48 NFS Client/Server Interaction in the Presence of Server Caching (cont.)
Older versions of NFS used write-through (option #2)
This was recognized as a performance problem
NFSv3 introduced a commit operation: the client can ask the server to flush the data to disk by sending a commit (sketched below)
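A sketch of the client side of this interaction: writes are sent as unstable (the server may keep them only in its cache), and a commit forces them to disk. The server interface shown is hypothetical, and the real NFSv3 protocol also uses write verifiers to detect server reboots.

```python
class NFSv3Client:
    """Sketch of asynchronous writes followed by an explicit commit,
    assuming a server object exposing write() and commit() RPCs."""

    def __init__(self, server):
        self.server = server
        self.unstable = []                 # (handle, offset, data) not yet on server disk

    def write(self, handle, offset, data):
        # Unstable write: the server may keep the data only in its cache.
        self.server.write(handle, offset, data, stable=False)
        self.unstable.append((handle, offset, data))

    def close(self, handle):
        # Commit: ask the server to flush everything for this file to disk.
        self.server.commit(handle)
        self.unstable = [w for w in self.unstable if w[0] != handle]
```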

49 NFS Statelessness
The NFS server is stateless: it forgets its state when it crashes
The local file system recovers the local state if it was made inconsistent by the crash
The client keeps retrying operations until the server comes back (a sketch of this retry loop follows below)
All file operations must be idempotent
Reading/writing file blocks is idempotent
Creating/deleting files is idempotent – the server's local FS will not allow the same file to be created/deleted twice
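A sketch of the retry loop: because the operation may execute more than once (for example, once before the crash and again after the reboot), it must be idempotent. The exception type naturally depends on the RPC library used.

```python
import time

def retry_until_server_responds(operation, pause=1.0):
    """Sketch of the client-side retry loop a stateless server relies on.
    `operation` must be idempotent, since it may execute more than once."""
    while True:
        try:
            return operation()             # e.g., an RPC that writes block N of a file
        except ConnectionError:
            time.sleep(pause)              # server is down or rebooting; just retry
```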

50 NFS vs. AFS
Access granularity: AFS – whole file; NFS – block
Server statefulness: AFS – stateful; NFS – stateless
Client caching: AFS – whole files, on the local disk; NFS – blocks, in memory only

51 NFS vs. AFS (cont.)
Cache validation: AFS – server validation with callbacks; NFS – client validation using polling
Update propagation: AFS – on file close; NFS – on file close, on sync, or by the flush daemon
Consistency semantics: AFS – weak consistency; NFS – weak consistency

52 Comparing Scalability: NFS vs. AFS

53 Explanation of Results
The benchmark performed operations such as:
Scan the directory – read the attributes of every file in a recursive directory traversal
Read many files from the server (every byte of every file)
Make – compile a large number of files residing on the remote server
AFS scaled better: its performance degraded at a smaller rate as the load increased
NFS performance suffered during directory scans, reading many files, and compilation – these actions involve many "file open" operations
NFS needs to check with the server on each "file open", while AFS does not

54 Which Is Better: NFS or AFS?
AFS showed greater scalability, but NFS is in widespread use. Why?
There are good things about NFS too:
NFS is stateless, so the server design is simpler
AFS requires caching on the local disk, which takes space on the client's disk
Networks have become much faster. If we re-evaluated the NFS/AFS comparison now, would AFS still scale better?

55 Weak Consistency Semantics in a DFS?
Both AFS and NFS provide weak consistency semantics for cached files
This makes fine-grained file sharing impossible
Isn't this a bad design decision? What do you think?
What guided this design decision:
Most applications do not perform fine-grained sharing
Supporting strong consistency is hard
It is a bad idea to optimize the system for the uncommon case, especially if the optimization is so hard to implement
For fine-grained sharing, users should use database systems

56 Summary
Considerations in DFS design:
Access granularity (whole-file vs. block)
Client caching (disk or memory, whole-file or block)
How to provide access and location transparency
Update propagation (immediate vs. delayed)
Validation (client polling vs. server callbacks)

57 Summary (cont.)
Usage patterns that drive design choices:
Most files are small
Most files are accessed in their entirety
Most accesses are sequential; random access is rare
File references exhibit strong temporal locality
Most files are read and written by one user
Fine-grained file sharing is rare
Users are comfortable with weak consistency semantics

58 Other DFS Architectures (preview)
There are many other DFS architectures:
Some DFSs allow replication in the face of concurrent updates: multiple clients write data at the same time, and the data is kept consistent on multiple servers
Some DFSs allow automatic failover: when one server fails, another automatically starts serving files to clients
Some DFSs allow disconnected operation
Some DFSs are designed to operate under low network bandwidth
There are file systems with transactional semantics (and you will design one!)
There are serverless (peer-to-peer) file systems
We will look at some of these DFSs in future lectures

