
Slide 1: NFS Industry Conference, October 22-23, 2002
Spinnaker Networks, Inc.
www.spinnakernet.com
301 Alpha Drive, Pittsburgh, PA 15238
(412) 968-SPIN

Slide 2: Storage Admin's Problem
"Everything you know is wrong"
– ... at least eventually
– space requirements change
– "class of service" changes
– desired location changes

Slide 3: Solution
System scaling
– add resources easily
– without client-visible changes
Online reconfiguration
– no file name or mount changes
– no disruption to concurrent accesses
System performance

Slide 4: Spinnaker Design
Cluster servers for scaling
– using IP (Gigabit Ethernet) for cluster links
– separate physical from virtual resources:
  – directory trees from their disk allocation
  – IP addresses from their network cards
– we can add resources without changing the client's view of the system

Slide 5: Spinnaker Design
Within each server: storage pools
– aggregate all storage with a single service class
– e.g. all RAID 1, all RAID 5, or extra-fast storage
– think "virtual partition" or "logical volume"

Slide 6: Spinnaker Architecture
Create virtual file systems (VFSes)
– a VFS is a tree with a root directory and subdirectories
– many VFSes can share a storage pool
– VFS allocation changes dynamically with usage, without administrative intervention
– limits can be managed via quotas
– similar in concept to an AFS volume or DFS fileset

Slide 7: Spinnaker Architecture
[Diagram: several VFSes sharing one storage pool. The name space is rooted at Spin, with Depts (Eng: net, disk) and Users (A: adam, ant; B: Bach, Bobs).]
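A minimal sketch of the VFS/storage-pool relationship on slides 5 and 6, assuming nothing about Spinnaker's internals: several VFSes draw space from one shared pool as they grow, with optional per-VFS quotas. All class and field names are invented for illustration.

    # Illustrative sketch only (not Spinnaker code): VFSes sharing a storage pool,
    # with space allocated dynamically and quotas enforced at allocation time.
    class StoragePool:
        def __init__(self, name, capacity_gb):
            self.name = name
            self.capacity_gb = capacity_gb
            self.used_gb = 0

    class VFS:
        def __init__(self, name, pool, quota_gb=None):
            self.name = name
            self.pool = pool
            self.quota_gb = quota_gb       # optional administrative limit
            self.used_gb = 0

        def allocate(self, gb):
            # space is taken from the shared pool as files grow; no fixed
            # per-VFS partition has to be carved out in advance
            if self.quota_gb is not None and self.used_gb + gb > self.quota_gb:
                raise RuntimeError(f"{self.name}: quota exceeded")
            if self.pool.used_gb + gb > self.pool.capacity_gb:
                raise RuntimeError(f"{self.pool.name}: pool full")
            self.used_gb += gb
            self.pool.used_gb += gb

    pool = StoragePool("raid5-pool", capacity_gb=1000)
    users_a = VFS("users-A", pool, quota_gb=200)
    eng = VFS("eng", pool)           # no quota: grows with usage
    users_a.allocate(50)
    eng.allocate(300)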

Slide 8: Spinnaker Architecture
Create a global "export" name space
– choose a root VFS
– mount other VFSes beneath it, forming a tree, by creating mount-point files within VFSes
– the export tree spans multiple servers in the cluster
– VFSes can be located anywhere in the cluster
– the export tree can be accessed from any server
– different parts of the tree can have different classes of service

Slide 9: Global Naming and VFSes
[Diagram: the global name space rooted at Spin, with the Depts (Eng) and Users (A, B) VFSes mounted into the tree; the individual VFSes (A: adam, ant; B: Bach, Bobs; Eng: net, disk) reside in storage pools on different servers.]
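To make the mount-point idea on slide 8 concrete, here is a small path-resolution sketch over the tree in the diagram above. The table layout and the ("mount", ...) convention are assumptions for illustration, not the SpinFS on-disk format.

    # Illustrative sketch only: resolving a path in a global export tree in which
    # VFSes are stitched together by mount-point entries.
    VFS_TABLE = {
        # vfs name -> directory entries; a ("mount", vfs) entry is a mount point
        "root":    {"Depts": ("mount", "depts"), "Users": ("mount", "users")},
        "depts":   {"Eng": ("mount", "eng")},
        "users":   {"A": ("mount", "users-A"), "B": ("mount", "users-B")},
        "eng":     {"net": ("dir", {}), "disk": ("dir", {})},
        "users-A": {"adam": ("dir", {}), "ant": ("dir", {})},
        "users-B": {"Bach": ("dir", {}), "Bobs": ("dir", {})},
    }

    def resolve(path):
        """Walk a path, crossing into a new VFS whenever a mount point is hit."""
        vfs, entries = "root", VFS_TABLE["root"]
        for name in path.strip("/").split("/"):
            kind, target = entries[name]
            if kind == "mount":
                vfs = target                  # crossing a VFS boundary
                entries = VFS_TABLE[vfs]
            else:
                entries = target
        return vfs

    print(resolve("/Users/A/adam"))    # -> users-A
    print(resolve("/Depts/Eng/net"))   # -> eng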

Slide 10: Clustered Operation
Each client connects to any server
– requests are "switched" over the cluster net from the incoming server to the server with the desired data
– the switching decision is based on the desired data, and on proximity to the data (for mirrored data)

Slide 11: Cluster Organization (diagram)

Slide 12: Server/Network Implementation
[Diagram: each server runs a network process (TCP termination, VLDB lookup, NFS served over SpinFS) and a disk process (caching, locking). Clients attach over Gigabit Ethernet and disks over Fibre Channel; network and disk processes on different servers communicate through a Gigabit Ethernet switch using the SpinFS protocol.]
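A rough sketch of the request switching described on slides 10 and 12, assuming a simple volume-location database (VLDB) that maps each VFS to the servers holding it. The table contents, function names, and the preference for a local mirror copy are illustrative assumptions.

    # Illustrative sketch only: the network process "switches" an incoming request
    # to the server that owns the data, based on a VLDB lookup.
    VLDB = {
        # vfs name -> servers holding it (first entry is the read/write copy,
        # later entries are mirrors)
        "users-A": ["server2"],
        "eng":     ["server3", "server1"],   # mirrored VFS
    }

    def pick_server(vfs, incoming_server):
        """Choose the server that should service a request for this VFS."""
        locations = VLDB[vfs]
        # prefer a local copy when the data is mirrored on the incoming server
        if incoming_server in locations:
            return incoming_server
        return locations[0]

    def handle_request(incoming_server, vfs, op):
        target = pick_server(vfs, incoming_server)
        if target == incoming_server:
            return f"{incoming_server}: serve {op} on {vfs} locally"
        # otherwise forward over the cluster network using the SpinFS protocol
        return f"{incoming_server}: forward {op} on {vfs} to {target} over the cluster net"

    print(handle_request("server1", "users-A", "READ"))   # forwarded to server2
    print(handle_request("server1", "eng", "READ"))       # served from the local mirror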

Slide 13: Security
At enterprise scale, security is critical
– there is no departmental implicit "trust"
Kerberos V5 support
– for NFS clients
– groups from NIS
– for CIFS, using Active Directory

Slide 14: Virtual Servers
A virtual server consists of
– a global export name space (VFSes)
– a set of IP addresses that can access it
Benefits
– an additional security firewall: a user guessing file IDs is limited to that virtual server
– rebalance users among NICs
– move virtual IP addresses around dynamically
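A minimal sketch of the virtual-server idea above: an export name space paired with the virtual IP addresses that may reach it, with requests arriving on other addresses rejected. Class and field names are assumptions, not product interfaces.

    # Illustrative sketch only: a virtual server = export name space + the virtual
    # IP addresses clients may use to reach it.
    import ipaddress

    class VirtualServer:
        def __init__(self, name, root_vfs, virtual_ips):
            self.name = name
            self.root_vfs = root_vfs             # root of this server's export tree
            self.virtual_ips = {ipaddress.ip_address(ip) for ip in virtual_ips}

        def accepts(self, dest_ip):
            # a request is only served if it arrived on one of this virtual
            # server's addresses; file IDs in other virtual servers stay invisible
            return ipaddress.ip_address(dest_ip) in self.virtual_ips

    vs_eng = VirtualServer("eng", root_vfs="eng-root",
                           virtual_ips=["10.1.0.10", "10.1.0.11"])
    print(vs_eng.accepts("10.1.0.10"))   # True
    print(vs_eng.accepts("10.2.0.99"))   # False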

Slide 15: Performance – Single Stream
– 94 MB/sec read (single-stream read, 9K MTU)
– 99 MB/sec write (single-stream write, 9K MTU)
– all files much larger than the cache, so real I/O scheduling was occurring

Slide 16: Benefits
Scale a single export tree to high capacity
– both in terms of gigabytes and ops/second
Keep server utilization high
– create VFSes wherever space exists, independent of where the data is located in the name space
Use expensive classes of storage
– only when needed, anywhere in the global name space

Slide 17: Benefits
Use third-party or SAN storage
– Spinnaker sells storage, but will also support LSI storage and others
Kerberos and virtual servers
– independent security mechanisms: cryptographic authentication as well as IP-address-based security

Slide 18: Near-Term Roadmap
Free data from its physical constraints: data can move anywhere desired within a cluster
– VFS move: move data between servers online
– VFS mirroring: mirror snapshots between servers
– high-availability configuration: multiple heads supporting shared disks

Slide 19: VFS Movement
VFSes move between servers
– balances server cycles or disk-space usage
– allows servers to be decommissioned easily
The move is performed online
– NFS and CIFS lock/open state is preserved
– clients see no changes at all
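The talk does not spell out the move mechanism, so the following is only a plausible sketch of an online VFS move: bulk copy, a catch-up pass, then a brief cutover that hands over open/lock state and repoints the VLDB entry. All names, structures, and steps are assumptions for illustration.

    # Illustrative sketch only (not Spinnaker's design) of an online VFS move.
    class Server:
        def __init__(self, name):
            self.name = name
            self.data = {}             # vfs -> list of blocks
            self.open_lock_state = {}  # vfs -> NFS/CIFS open and lock records

    VLDB = {}                          # vfs -> name of owning server

    def move_vfs(vfs, source, destination):
        # 1. bulk copy while the source keeps serving the VFS
        copied = list(source.data[vfs])
        destination.data[vfs] = copied
        # 2. catch-up pass: copy anything written since the bulk copy started
        delta = source.data[vfs][len(copied):]
        destination.data[vfs].extend(delta)
        # 3. brief cutover: transfer NFS/CIFS open and lock state, then repoint
        #    the VLDB so new requests are switched to the destination
        destination.open_lock_state[vfs] = source.open_lock_state.pop(vfs, {})
        VLDB[vfs] = destination.name
        del source.data[vfs]

    s1, s2 = Server("server1"), Server("server2")
    s1.data["users-A"] = ["block0", "block1"]
    s1.open_lock_state["users-A"] = {"/adam/file": "locked by client 10.1.0.5"}
    VLDB["users-A"] = s1.name
    move_vfs("users-A", s1, s2)
    print(VLDB["users-A"], s2.open_lock_state["users-A"])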

Slide 20: VFS Move (diagram)

Slide 21: VFS Mirror
Multiple identical copies of a VFS
– version-number based, which provides an efficient update after a mirror is broken
– thousands of snapshots are possible
– similar to AFS replication or NetApp's SnapMirror
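A small sketch of version-number-based mirror update, assuming each block records the version at which it last changed; only blocks newer than the mirror's last synchronized version need to be shipped. The data layout is invented for illustration.

    # Illustrative sketch only: re-synchronizing a mirror by version number.
    def update_mirror(primary_blocks, primary_version, mirror_blocks, mirror_version):
        """primary_blocks: {block_id: (version, data)}; returns blocks shipped."""
        sent = 0
        for block_id, (version, data) in primary_blocks.items():
            # only ship blocks changed since the mirror last caught up
            if version > mirror_version:
                mirror_blocks[block_id] = (version, data)
                sent += 1
        return sent, primary_version

    primary = {0: (3, "aaa"), 1: (7, "bbb"), 2: (9, "ccc")}
    mirror  = {0: (3, "aaa"), 1: (5, "old"), 2: (5, "old")}
    sent, new_version = update_mirror(primary, primary_version=9,
                                      mirror_blocks=mirror, mirror_version=5)
    print(sent, "blocks shipped; mirror now at version", new_version)   # 2 blocks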

Slide 22: Failover Pools
Failover is based upon storage pools
– upon server failure, a peer takes over the pool
– each pool can fail over to a different server
– so 100% extra capacity is not needed for failover

Slide 23: Failover Configuration
[Diagram: pools P1-P4 spread across SpinServers 1, 2, and 3, each pool with its own takeover server.]
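A sketch of the per-pool failover plan suggested by slides 22 and 23. The specific pool-to-server assignments below are made up, since the original diagram does not survive in this transcript.

    # Illustrative sketch only: each storage pool has its own takeover server, so a
    # failed server's pools spread across surviving peers rather than landing on
    # one dedicated standby.
    FAILOVER_PLAN = {
        # pool -> (primary server, takeover server); assignments are invented
        "P1": ("spinserver1", "spinserver2"),
        "P2": ("spinserver2", "spinserver3"),
        "P3": ("spinserver2", "spinserver1"),
        "P4": ("spinserver3", "spinserver1"),
    }

    def owners_after_failure(failed_server):
        """Return which server serves each pool once failed_server is down."""
        return {
            pool: (takeover if primary == failed_server else primary)
            for pool, (primary, takeover) in FAILOVER_PLAN.items()
        }

    # if spinserver2 fails, its pools go to two different peers:
    # P2 moves to spinserver3 and P3 moves to spinserver1; P1 and P4 are unaffected
    print(owners_after_failure("spinserver2"))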

Slide 24: Additional Benefits
– higher system utilization, by moving data to under-utilized servers
– decommission old systems by moving storage and IP addresses away, without impacting users
– change storage classes dynamically: move data to cheaper storage pools when possible
– inexpensive redundant systems: don't need 100% spare capacity

Slide 25: Extended Roadmap
Caching
– helps in MAN/WAN environments
– provides high read bandwidth to a single file
Fibre Channel as an access protocol
– simple, well-understood client protocol stack
NFS v4

Slide 26: Summary
Spinnaker's view of NAS storage:
– a network of storage servers, accessible from any point
– with data flowing throughout the system
– with mirrors and caches as desired
– optimizing various changing constraints, transparently to users

Slide 27: Thank You (Mike Kazar, CTO)

Slide 28: Design Rationale
Why integrate move with the server?
– a VFS move must move open/lock state
– the move must integrate with snapshots
– the final transition requires careful locking at the source and destination servers

Slide 29: Design Rationale
Why not stripe VFSes across servers?
– distributed locking is very complex, and very hard to make fast
– enterprise loads have poor server locality, as opposed to supercomputer large-file access patterns
– failure isolation: it limits the impact of serious crashes, and partial restores are difficult when a stripe is lost

Slide 30: Design Rationale
VFSes vs. many small partitions
– VFSes let disk utilization be overbooked
– e.g. if 5% of users need 2X their storage within 24 hours, you can either double everyone's storage, or pool 100 users in a storage pool with just 5% free space
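The overbooking arithmetic on slide 30, worked through under an assumed 10 GB per-user allocation (the 10 GB figure and the variable names are made up for illustration):

    # Illustrative arithmetic only: 100 users, 5% of whom need double their space.
    users, per_user_gb = 100, 10
    growers = users // 20                        # 5% of users need 2x their space
    baseline_gb = users * per_user_gb            # 1000 GB currently allocated
    growth_gb = growers * per_user_gb            # 50 GB of new demand
    pooled_gb = baseline_gb * 1.05               # shared pool with 5% headroom: 1050 GB
    partitioned_gb = users * 2 * per_user_gb     # per-user partitions sized for 2x: 2000 GB
    print(growth_gb, pooled_gb - baseline_gb >= growth_gb, partitioned_gb)
    # the 50 GB of growth fits in the 5% headroom; fixed partitions need twice the capacity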

