Windows 2008 Failover Clustering Witness/Quorum Models


Slide 1: Windows 2008 Failover Clustering Witness/Quorum Models

Slide 2: Node Majority*
- Can sustain failures of half the nodes (rounding up) minus one
  – Ex: a seven-node cluster can sustain three node failures (see the sketch below)
- No concurrently accessed disk required
  – Disks/LUNs in Resource Groups are not concurrently accessed
- Cluster stops if a majority of nodes fail
  – Requires at least 3 nodes in the cluster
  – Inappropriate for automatic failover in geographically distributed clusters
- Still deployed in environments that want humans to decide service location
- Recommended for clusters with an odd number of nodes
* Formerly Majority Node Set
[Cluster Status table]
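
A minimal sketch of the arithmetic in PowerShell: the tolerated failure count for Node Majority is ceil(n/2) - 1, where n is the node count.

    # Node Majority: tolerated node failures = ceil(n/2) - 1
    $n = 7
    [math]::Ceiling($n / 2) - 1   # returns 3 for a seven-node cluster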

Slide 3: Node Majority with Witness Disk*
- Can sustain failures of half the nodes (rounding up) if the disk witness remains online
  – Ex: a six-node cluster in which the disk witness is online could sustain three node failures (see the sketch below)
- Can sustain failures of half the nodes (rounding up) minus one if the disk witness fails
  – Ex: a six-node cluster with a failed disk witness could sustain two (3 - 1 = 2) node failures
- Witness disk is concurrently accessed by all nodes
  – Acts as a tiebreaker
- Witness disk can fail without affecting cluster operations
  – Usually used in 2-node clusters, or some geographically dispersed clusters
- Can work with SRDF/CE
  – 64 clusters/VMAX pair limit
- Can work with VPLEX
- Does not work with:
  – RecoverPoint
  – MirrorView
- Recommended for clusters with an even number of nodes
* Formerly quorum disk
[Cluster Status table]
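
The same arithmetic with the witness vote, as a PowerShell sketch:

    # Disk witness online: ceil(n/2) node failures tolerated
    $n = 6
    [math]::Ceiling($n / 2)       # returns 3
    # Disk witness failed: back to ceil(n/2) - 1
    [math]::Ceiling($n / 2) - 1   # returns 2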

Slide 4: Witness Disk
- Can sustain failures of all nodes except one
  – Loss of the witness disk stops the cluster
- Legacy cluster model for Windows until the introduction of Majority Node Set
- Witness disk is the only voter in the cluster
  – Failure of the witness leads to failure of the cluster
- Not generally recommended
[Cluster Status table]

Slide 5: Node Majority with File Share Witness (FSW)
- Can sustain failures of half the nodes (rounding up) if the FSW remains online
  – Ex: a six-node cluster in which the FSW is online could sustain three node failures
- Can sustain failures of half the nodes (rounding up) minus one if the FSW fails
  – Ex: a six-node cluster with a failed FSW could sustain two (3 - 1 = 2) node failures
- Any CIFS share will work
- FSW is not a member of the cluster
- One host can serve multiple clusters as a witness
- FSW placement is important
  – Third failure domain
  – Or the FSW itself can be made to automatically fail over
- Timing issues can be a challenge
- Works with no node limitations on:
  – SRDF/CE
  – MV/CE
  – RP/CE
  – VPLEX
- Recommended for most geographically distributed clusters
[Cluster Status table]

Slide 6: Why is Geographically Distributed Clustering a special case?
- A two-site configuration will always include a failure scenario that results in majority loss
- This is often desired behavior
  – Sometimes it is desirable to have humans control the failover between sites
  – Failover is automated, but not automatic
  – Simple to restart the services on surviving nodes (force quorum; see the sketch below):
      net start clussvc /fq
- If automatic failover between sites is required, deploy a FSW in a separate failure domain (third site)
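
A minimal sketch of forcing quorum on a surviving node, assuming the cluster service is stopped there (run from an elevated prompt):

    rem Force-start the cluster service without quorum on this node
    net start clussvc /fq
    rem Equivalent long form of the same switch
    net start clussvc /forcequorum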

Slide 7: Things to note
- Successful failover requires that all disks in the resource group (RG) be available to the production node, which requires:
  – Replication between sites
  – A method to surface the replicated copies to the nodes in the DR site (Cluster Enabler), OR
  – A virtualization technique whereby the replicated LUNs are always available to the nodes in the DR site (VPLEX)

Slide 8: Multi-site configurations

Slide 9: Quorum Recommendations
http://technet.microsoft.com/en-us/library/cc770830(WS.10).aspx#BKMK_multi_site

Slide 10: FSW failure scenarios – 3-site configuration, odd # of voters
[Cluster Status table]

Slide 11: FSW failure scenarios – even # of voters
[Cluster Status table]

Slide 12: FSW failure scenarios – 2-site configuration, even # of voters
[Cluster Status table]

Slide 13: FSW failure scenarios – 2-site configuration, odd # of voters
[Cluster Status table]

Slide 14: Node Weights
- Nodes can be altered to have no vote (see the PowerShell sketch below):
      cluster . node /prop NodeWeight=0
- Useful when you have an unbalanced configuration
- Hotfix required:
  http://support.microsoft.com/kb/2494036
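
A PowerShell sketch of the same change, assuming Windows Server 2008 R2 with the KB2494036 hotfix applied; the node name "Node3" is a placeholder:

    Import-Module FailoverClusters
    # Remove the vote from one node (hypothetical node name)
    (Get-ClusterNode -Name "Node3").NodeWeight = 0
    # Verify the weights across the cluster
    Get-ClusterNode | Format-Table Name, NodeWeight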

Slide 15: Dealing with loss of quorum
- It's an outage that requires manual intervention to recover:
      net start clussvc /fq
  – Cluster returns to unforced operation when a majority of nodes come back online
- Does not alter the RPO of user data
- Rolling failures may result in reversion of the cluster configuration
  – FSW does not store the cluster database (same behavior as disk witness)
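
On 2008 R2 and later, the FailoverClusters PowerShell module offers an equivalent to the forced start, shown here as a sketch:

    Import-Module FailoverClusters
    # Force-start the local node without quorum
    Start-ClusterNode -FixQuorum
    # Check the quorum configuration once the cluster is up
    Get-ClusterQuorum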

Slide 16: FSW Considerations
- FSW can be SMB 1.0
  – No need to be the same OS as the member nodes
  – Same domain, same forest
  – Cannot be a member of the cluster
  – Can be hosted on NAS (VNX)
- 5 MB of free space
- If Windows, the server should be dedicated to the FSW
  – 1 server can be witness to multiple clusters
  – Beware of dependencies
- Administrator must have Full Control on both the share and the NTFS permissions (a setup sketch follows this list)
- No DFS
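
A setup sketch for the witness share on a Windows host; the folder, share, domain, and account names are placeholders:

    rem Create the witness folder and share it with Full Control
    mkdir C:\FSW_Cluster1
    net share FSW_Cluster1=C:\FSW_Cluster1 /grant:CONTOSO\Administrator,FULL
    rem Grant matching NTFS permissions on the folder
    icacls C:\FSW_Cluster1 /grant CONTOSO\Administrator:(OI)(CI)F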

Slide 17: Changing the quorum configuration
Process for FSW (a PowerShell alternative is sketched below):
- No downtime requirement
  – As long as a majority of nodes are available
- Account requirements
  – Administrators group on each member node
  – Domain user account
- Create the FSW
- Start Failover Cluster Manager
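
As a sketch, the same change can be scripted with the FailoverClusters PowerShell module (2008 R2 and later); the witness share path is a placeholder:

    Import-Module FailoverClusters
    # Switch the cluster to Node and File Share Majority
    Set-ClusterQuorum -NodeAndFileShareMajority "\\witness01\FSW_Cluster1"
    # Confirm the new quorum model
    Get-ClusterQuorum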

