1
Storage Foundation Cluster File System
2
How do you want to manage your servers?
[Diagram: COMPLEX vs. SIMPLE; storage network with Cluster File System and Cluster Volume Manager]
2005 Symantec Corporation, All Rights Reserved
3
Comprehensive Cluster Management
File System Management
- Concurrent read/write access
- Dynamic file system resize
- Large file system support
- Journaling file system
- POSIX compliant
Integrated Cluster Management
- Cluster File System
- Cluster Volume Manager
- Veritas Cluster Server
Heterogeneous: Linux, Solaris, HP-UX, AIX
Storage Foundation Cluster File System: Cluster Volume Manager + Cluster File System; Storage Foundation Cluster File System HA adds VCS
Notes: To start off, what is a cluster file system? The simple answer: a cluster file system allows direct data transfers between computers (clients) and the storage device. It enables clustered servers to mount and use a file system simultaneously with single-server semantics, as if all applications using the file system were running on the same server. VERITAS Cluster File System, however, provides much more than this. SF Cluster File System/HA is a storage management solution that includes VERITAS Cluster File System (VxCFS), VERITAS Cluster Volume Manager (VxCVM), and VERITAS Cluster Server.
4
Cluster File System Architecture
Extends local file system functionality:
- Asymmetric topology: the primary server manages metadata updates; all servers access data directly
- Automated primary server failover
- Fast application failover
Notes: If the primary server fails, a secondary server automatically takes over the metadata management responsibility. Advantages: less metadata-locking traffic and lower complexity. [Diagram: primary and secondary servers on a storage network; separate metadata and data paths]
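The failover behavior described above can be sketched in a few lines. This is a hypothetical simulation, not the product's implementation: the metadata-primary role simply passes to the next surviving node in membership order, while data I/O stays direct on every node.

```python
# Minimal sketch (hypothetical names): automated failover of the
# metadata-primary role in an asymmetric cluster file system.

class Cluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)        # nodes[0] starts as the primary
        self.alive = set(nodes)

    @property
    def primary(self):
        # The primary is the first surviving node in membership order.
        for n in self.nodes:
            if n in self.alive:
                return n
        raise RuntimeError("no surviving nodes")

    def fail(self, node):
        # A node crash implicitly promotes the next secondary.
        self.alive.discard(node)

cluster = Cluster(["node1", "node2", "node3"])
assert cluster.primary == "node1"
cluster.fail("node1")                   # primary crashes
assert cluster.primary == "node2"       # a secondary takes over metadata duty
```

Note that only the metadata role moves; in the real product, applications on surviving nodes keep reading and writing data throughout the transition.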
5
Technical Overview: CVM, CFS, GAB, LLT
Cluster File System (CFS): Global Lock Manager (GLM), cache coherency, distributed lock management
Cluster Volume Manager (CVM): provides shared access to volumes from multiple nodes
Global Atomic Broadcast (GAB): cluster membership and messaging
Low Latency Transport (LLT): low-latency inter-node communication
Notes: Two-node overview of the SFCFS communication protocols. GLM is a distributed lock manager that maintains UNIX single-host file system semantics in clusters. Cache coherency means all updates are atomic across the cluster: every node sees every update, even before the update has reached the physical disk. [Diagram: per-node stack of CFS and CVM (kmsg, vxconfig, GLM, Qlog) over GAB and LLT]
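The core decision a lock manager like GLM makes is whether a requested lock mode is compatible with the modes already held. The sketch below is a simplified, hypothetical version of that compatibility check, not the GLM protocol itself: shared (read) locks coexist, while an exclusive (write) lock conflicts with everything.

```python
# Sketch of the lock-mode compatibility check a distributed lock
# manager performs (simplified; hypothetical structure).

SHARED, EXCL = "shared", "exclusive"

def compatible(held_modes, requested):
    # Shared locks coexist with other shared locks;
    # an exclusive lock requires no other holders at all.
    if requested == SHARED:
        return EXCL not in held_modes
    return not held_modes

assert compatible([SHARED, SHARED], SHARED) is True   # readers coexist
assert compatible([SHARED], EXCL) is False            # writer must wait
assert compatible([], EXCL) is True                   # uncontended write
```

In the real system this check runs across nodes over GAB/LLT messaging, which is what makes updates appear atomic cluster-wide.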
6
Why Veritas Cluster File System?
- Prevent split brain with I/O Fencing
- Avoid exclusive locks with Range Locking
- Utilize multiple HBAs with Cluster DMP
- Manage application failover with Cluster Server
Notes: CFS gives you the ability to create a single-namespace file system spanning multiple machines, making the file system easier to manage.
7
Understanding Split Brain
- Split brain can cause data corruption
- One node must survive; the other must be shut down
- VERITAS I/O Fencing handles all scenarios
Notes: Without split-brain protection, you can corrupt your data, and recovering takes a lot of effort: you need to go back to a previous copy, which for most customers means restoring from tape. Painful! A network failure and a system failure look the same to the other server: the heartbeats stop coming back. Likewise, if one server hangs but has not failed, there may be no heartbeat either, so a hang also looks like a system failure to the first server; split brain can happen here. Network failure or system failure? System failure or system hang? The way to protect against split brain is to force one node (server) to shut down, so no more writes can come from the failed node. VERITAS offers the strongest protection available against logical corruption from split brain.
8
I/O Fencing: How Does it Work
Notes: This slide explains how VERITAS I/O fencing works.
1. When the interconnect fails, the two nodes race to grab control of the three coordinator disks; only one gains control of the majority.
2. The winning node, now in control of the coordinator disks, ejects the departed node from them.
3. The winning node then ejects the losing node from the data disks.
4. The ejected node can no longer write to the data disks; its I/O fails, which eventually shuts the node down.
With this scheme, split brain is avoided.
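The race in step 1 comes down to majority arithmetic over the three coordinator disks. The following is a hypothetical simulation of that idea only, not the SCSI-3 reservation mechanism the product actually uses: the first node to register on each disk keeps it, and the node holding a majority wins.

```python
# Sketch (hypothetical simulation): whichever node registers first on a
# majority of the 3 coordinator disks wins the race and may then eject
# the loser from the data disks.

def race(events):
    """events: list of (disk_index, node) registrations, in arrival order."""
    owner = {}
    for disk, node in events:
        owner.setdefault(disk, node)        # first registrant keeps the disk
    wins = {}
    for node in owner.values():
        wins[node] = wins.get(node, 0) + 1
    # Majority of 3 coordinator disks decides the winner.
    return max(wins, key=wins.get)

events = [(0, "A"), (1, "B"), (1, "A"), (2, "A"), (0, "B"), (2, "B")]
assert race(events) == "A"   # A reached disks 0 and 2 first: majority
```

An odd number of coordinator disks guarantees there is always a strict majority, so exactly one node survives every partition.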
9
Exclusive Locking
- Node 1: writer holding an exclusive lock on file foo.bar
- Node 2: writer waiting for the lock to be released
- Appending writes, e.g. a log file
Notes: Without range locking, when a node holds a lock on a file it is an exclusive lock: no other node can access (read or write) that file until the lock is released. For example, while node 1 is writing to foo.bar, node 2 cannot access foo.bar at all. With each appending write, the file held by node 1 grows, and the locked region grows with it. The benefit of range locking is improved performance when multiple nodes read and write the same file; applications include HPC, scientific computing, Apache web serving, and media serving.
10
Range Locking
- Node 1: writer; Node 2: writer
- Appending writes, e.g. data ingest, to file foo.bar
Notes: In this example, the writer is only appending to the file and does not need to lock the whole file. With the range locking feature, only the affected byte range is locked: the range held by node 1 is inaccessible to other nodes, while the rest of the file remains available for read or write by any node. Different nodes within the cluster can therefore access the same file in parallel, improving performance dramatically.
11
Cluster Dynamic Multi-Pathing
- Dynamic Multi-Pathing (DMP) provides multiple paths from server to storage
- Path load balancing improves I/O bandwidth by spreading I/O across connections
- Path failover increases application availability
- Alerts the cluster of a failed path
[Diagram: servers with multiple paths across the storage network]
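The two DMP behaviors above, load balancing and failover, can be illustrated together with a round-robin path selector that skips failed paths. This is a hypothetical sketch in the spirit of DMP, not its actual policy engine.

```python
# Sketch (hypothetical): round-robin path selection with failover,
# illustrating DMP-style load balancing across HBAs.
import itertools

class MultiPath:
    def __init__(self, paths):
        self.paths = paths
        self.failed = set()
        self._rr = itertools.cycle(paths)

    def next_path(self):
        # Spread I/O across healthy paths; skip any marked failed.
        for _ in range(len(self.paths)):
            p = next(self._rr)
            if p not in self.failed:
                return p
        raise IOError("all paths to the LUN have failed")

mp = MultiPath(["hba0", "hba1"])
assert mp.next_path() == "hba0"        # I/O alternates across both HBAs
assert mp.next_path() == "hba1"
mp.failed.add("hba0")                  # path failure detected
assert mp.next_path() == "hba1"        # I/O continues on the surviving path
```

In the clustered variant, the "alerts cluster of failed path" bullet means this failure marking would also be broadcast to the other nodes.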
12
Application Failover
- VERITAS Cluster Server used for application failover
- Full range of Enterprise Agents available
- Faster failover
[Diagram: Oracle, Oracle, and Apache instances, each under VCS, layered on the Cluster File System and Cluster Volume Manager over the storage network]
13
Advanced File System Features
- Portable Data Containers (PDC) allow data migration between different operating systems. Benefit: easily migrate data between operating systems without backup/restore.
- Quality of Storage Services (QoSS) assigns files to a specific storage tier based on data classification. Benefit: reduced storage hardware costs.
- FlashSnap provides manageable point-in-time copies. Benefit: easy, consistent off-host processing for batch runs or backup.
- Intelligent Storage Provisioning (ISP). Benefit: automated, policy-based configuration of the storage environment.
14
Use Cases
- Oracle RAC
- Scale-out NFS, FTP
- Middleware / message passing / workflow applications
- Web servers (single content image)
- Databases: single instance, faster failover
- Data capture / data warehouse: billing, finance, telemetry, government
Notes: Scale-out NFS / parallel NFS: a Cluster File System is mounted by multiple hosts in the cluster, and each host can additionally act as an NFS server by exporting the file system to NFS clients. In the event of a server failure, the server's virtual IP can fail over to any surviving host in the cluster, and the NFS clients remount the file system. Application clients can also be load balanced and routed to the appropriate NFS server via the Domain Name System (DNS). Middleware/message passing: Tibco EMS, BEA WebLogic, home-grown messaging applications.
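The virtual IP failover in the scale-out NFS notes can be sketched as a placement function. The names and the least-loaded policy here are assumptions for illustration, not the VCS algorithm: any VIP whose host has left the cluster is reassigned to a surviving host.

```python
# Sketch (hypothetical names and policy): floating a virtual IP from a
# failed NFS server to a surviving cluster host, as described above.

def place_vips(hosts, assignment):
    """Reassign any VIP whose host is no longer in the cluster."""
    for vip, host in list(assignment.items()):
        if host not in hosts:
            # Move the VIP to the least-loaded surviving host.
            load = {h: list(assignment.values()).count(h) for h in hosts}
            assignment[vip] = min(hosts, key=lambda h: load[h])
    return assignment

assign = {"vip1": "hostA", "vip2": "hostB"}
assign = place_vips(["hostB", "hostC"], assign)   # hostA has failed
assert assign["vip1"] == "hostC"   # vip1 failed over to the idle host
assert assign["vip2"] == "hostB"   # unaffected VIP stays put
```

Because every host already mounts the same cluster file system, the NFS clients that remount through the moved VIP see the same data immediately.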
15
Summary
- Better Scalability: concurrent file access; provides a data virtualization layer
- Quick and Non-Disruptive Failover: very fast application failover; automated primary server failover
- Ease of Management: storage for all servers administered as a single entity; single file system namespace
- Flexible Deployment of Applications: manage all instances of the application simultaneously
16
Thank You!