Serverless Network File Systems
Network File Systems Allow sharing among independent file systems in a transparent manner. In NFS, a remote directory is mounted locally and accessed through remote procedure calls. Traditional network file systems, like NFS, use a central server to provide file system services. This work presents an alternative, serverless network file system called xFS.
Limitations of Central Server Systems All read misses and disk writes go to the server – a performance bottleneck. Not scalable – too many clients hurt performance, and it is expensive to upgrade server hardware or add servers. Requires server replication for high availability, which increases cost and complexity and adds latency to duplicate the data.
Serverless Network File Systems Increased performance – distributes control processing and data storage among cooperating workstations. Scales easily, which simplifies system management. Fault tolerance through distributed RAID and a log-structured file system. Migrates the responsibilities of failed components to other workstations.
Background RAID – stripes data across disks, writing a portion of each stripe to each disk. High performance (parallel accesses) and availability. xFS uses RAID-style striping for files across a stripe group. Small writes hurt performance – each one requires a parity update. Log-structured File System (LFS) – an append-only file system. Leaves holes as data is overwritten – needs cleanup. Batches small writes into large, delayed writes, which helps RAID. Helps recovery (checkpoints on disk).
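The RAID parity and small-write penalty mentioned above can be sketched as follows. This is a minimal illustration, not xFS code; the function names are hypothetical.

```python
def compute_parity(blocks):
    """Parity block = byte-wise XOR of all data blocks in a stripe.
    Any one lost block can be rebuilt by XOR-ing the survivors."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def small_write_parity(old_block, new_block, old_parity):
    """A small (single-block) overwrite must read the old block and the
    old parity before writing the new block and the new parity -- four
    disk I/Os instead of one: the RAID small-write penalty that LFS's
    large batched writes avoid."""
    return bytes(p ^ o ^ n
                 for p, o, n in zip(old_parity, old_block, new_block))
```

Because LFS buffers small writes into one large segment write, xFS can compute parity once over a full stripe instead of paying this read-modify-write cost per block.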
Background (con’t) Multiprocessor cache consistency – statically divide physical memory evenly among processors; each processor manages the cache consistency state for its own physical memory. xFS applies the same idea to files: a file’s manager keeps up with its consistency state. Unlike the multiprocessor scheme, the assignment in xFS is dynamic – files can be managed by different nodes over time.
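The static multiprocessor scheme described above amounts to a fixed address-to-processor mapping; a one-line sketch (assuming equal contiguous regions, hypothetical names):

```python
def static_home(phys_addr, mem_size, n_procs):
    """Processor responsible for the consistency state of phys_addr,
    when memory is statically divided into equal contiguous regions."""
    region = mem_size // n_procs
    return phys_addr // region
```

xFS replaces this fixed mapping with a manager map that can be updated at runtime, which is why files can migrate between managers.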
Goals of xFS Provide a scalable way to subset storage servers into groups to provide efficient storage. Scalable, distributed metadata and cache consistency management. Flexibility to dynamically reconfigure responsibilities after failures.
System entities Clients – want to access data in the system. Storage servers – store the system’s files. Metadata managers – hold cache consistency state and disk-location metadata. Cleaners – clean up the LFS after writes. These entities may reside on the same machine or on different machines.
Serverless File Service “Anything, Anywhere” – all data and metadata can be located on, and can move to, any node in the system. File accesses are fast because data and processing are distributed across multiple workstations. How does the system locate the data? Through key maps: the manager map, the imap, file directories, and stripe group maps.
Manager Map A table indicating which machines manage which file indices. File index numbers are listed in the parent directory file. Globally replicated. Updated dynamically on machine failure or reconfiguration of file managers. Could also serve as a load-balancing mechanism – not yet implemented, but a possibility.
Imap Each imap is held by a file’s manager. Maps a file’s index number to the disk address of the index node (inode). The index node gives the file offsets and pointers to each data block, similar to a standard OS inode implementation.
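The lookup chain across these maps – manager map, then the manager’s imap, then the inode – can be sketched as below. This is a deliberately simplified model (real xFS hashes index numbers into manager map entries; the dict layouts and names here are illustrative only).

```python
# Manager map: which node manages which file index numbers (here a
# trivial index % 2 partition stands in for the real hashed map).
manager_map = {0: "nodeA", 1: "nodeB"}

# Per-manager imap: file index number -> disk address of the inode.
imaps = {"nodeA": {2: 100}, "nodeB": {7: 200}}

# On-disk inodes: inode disk address -> data-block disk addresses.
inodes = {100: [5000, 5001], 200: [6000]}

def locate_block(file_index, block_offset):
    manager = manager_map[file_index % 2]    # 1. consult the manager map
    inode_addr = imaps[manager][file_index]  # 2. manager's imap -> inode
    return inodes[inode_addr][block_offset]  # 3. inode -> data block
```

A client that already caches the (globally replicated) manager map needs at most one network hop to the manager before going to storage.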
RAID Stripe Groups Better to stripe files over a group of storage servers than over all servers in the system. Improves availability – each group stores its own parity, allowing recovery from multiple failures (one per group). Stripe group map – lists which nodes are members of each group. Must be referenced before reading or writing data in the file system.
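Writing a segment to a stripe group, then, means consulting the stripe group map and sending one block per member plus parity. A minimal sketch, assuming the map is a dict of group id to member servers and that the last member holds parity (all names hypothetical):

```python
stripe_group_map = {0: ["s1", "s2", "s3"]}   # group id -> member servers

def write_segment(group_id, blocks, storage):
    """Send one data block to each member and XOR parity to the last.
    `storage` models each server's disk as a list of appended blocks."""
    members = stripe_group_map[group_id]      # consult the map first
    assert len(blocks) == len(members) - 1    # one member holds parity
    parity = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            parity[i] ^= b
    for server, blk in zip(members, blocks + [bytes(parity)]):
        storage.setdefault(server, []).append(blk)
    return bytes(parity)
```

Keeping parity inside each group is what lets the system survive one failure per group rather than one failure overall.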
Cache Consistency Token-based scheme Client must request and acquire write ownership from the file’s manager. Manager invalidates other cached copies
Cache Consistency (con’t) A client keeps write ownership until another client requests it; it must then flush its changes to disk. xFS guarantees that the node requesting the data receives the up-to-date copy – traditional network file systems do not always guarantee this.
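The token scheme from the two slides above can be sketched as a small manager object. This is an illustrative model, not xFS’s actual interface; in the real system the previous owner would also flush or forward its dirty data when ownership transfers.

```python
class FileManager:
    """Per-file consistency state: cached copies and write ownership."""

    def __init__(self):
        self.owner = None        # client holding write ownership, if any
        self.cached_by = set()   # clients holding cached copies

    def acquire_read(self, client):
        self.cached_by.add(client)

    def acquire_write(self, client):
        """Grant write ownership; return the set of clients whose
        cached copies the manager must invalidate."""
        invalidated = self.cached_by - {client}
        self.cached_by = {client}
        self.owner = client
        return invalidated
```

Because every write goes through `acquire_write`, a later reader is always directed to the current owner’s up-to-date copy.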
Management Distribution Policies xFS tries to assign files used by a client to a manager co-located on that client’s machine. When a client creates a file, xFS assigns the file to the manager on that machine. This improves locality and reduces the network hops needed to satisfy requests by about 40%.
Reconfiguration Not yet implemented in this version… When the system detects a configuration change, a global consensus algorithm is invoked. A leader is chosen to run the algorithm over a list of the active nodes; it generates a new manager map and distributes it across the nodes.
Security in xFS Only appropriate in a restricted environment – machines cooperating over a fast network that must trust one another’s kernels to enforce security.