Module 12: Designing High Availability in Windows Server® 2008


Module Overview
- Overview of High Availability
- Designing Network Load Balancing for High Availability
- Designing Failover Clustering for High Availability
- Geographically Dispersed Failover Clusters

Service Level Agreements
An SLA includes:
- Requirements for availability
- Recovery times and processes
- Penalties for non-compliance
- Escalation procedures
SLAs can be formal or informal.

High Availability Options in Windows Server 2008
Option: Description
- Network Load Balancing: distributes application requests among multiple nodes
- Failover clustering: migrates services from one server to another after a server failure
- Virtual machine migration: moves a virtual machine to a new host without shutting it down (quick migration requires the virtual machine to be paused)

Lesson 2: Designing Network Load Balancing for High Availability
- Overview of Network Load Balancing
- Considerations for Storing Application Data for NLB
- Host Priority and Affinity
- Selecting a Network Communication Method for NLB

Overview of Network Load Balancing
NLB is a fully distributed, software-based load-balancing solution that does not require any specialized hardware.
Characteristic: Description
- NLB scalability: scale an NLB cluster by adding servers
- NLB availability: server failure is detected by the other servers in the cluster; application failure is not detected; load is automatically redistributed among the remaining servers
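The availability behavior above (a failed server drops out of the cluster and its traffic is automatically spread over the survivors) can be sketched with a toy model; the class and method names are illustrative, not part of NLB itself:

```python
import zlib


class ToyNlbCluster:
    """Toy model of NLB availability: every node can serve any
    request; when one node fails, the survivors converge and the
    failed node's traffic is redistributed among them."""

    def __init__(self, nodes):
        self.nodes = sorted(nodes)

    def route(self, client_ip):
        # Deterministic mapping of a client to one of the live nodes.
        return self.nodes[zlib.crc32(client_ip.encode()) % len(self.nodes)]

    def fail(self, node):
        # NLB detects a failed server (not a failed application) via
        # missed heartbeats; here we simply drop it from the rotation.
        self.nodes.remove(node)


cluster = ToyNlbCluster(["web1", "web2", "web3"])
cluster.fail("web2")
survivor = cluster.route("10.0.0.7")  # always one of the remaining nodes
```

Note that, matching the slide, nothing here inspects the application: a node whose web service has crashed but whose heartbeat still works would keep receiving traffic.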

Considerations for Storing Application Data for NLB All servers must have the same data Data can be stored in a central location Data can be synchronized between servers The suitability of an application for NLB depends on how data is stored

Host Priority and Affinity
Feature: Description
- Affinity: determines which server receives subsequent incoming requests from a specific host; useful for applications that maintain user state information
- Host Priority: used for failover rather than load balancing; useful for applications that share a data store across servers
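Affinity can be illustrated with a small hashing sketch: with source-IP affinity, only the client address chooses the node, so repeat requests from the same host land on the same server and server-side session state keeps working. The helper below is hypothetical, not NLB's actual algorithm:

```python
import zlib


def pick_node(nodes, client_ip, client_port, affinity="single"):
    """Hypothetical sketch of NLB affinity. With "single" (source-IP)
    affinity, only the client address is hashed, so every connection
    from one host reaches the same node. With affinity "none", the
    client port is hashed too, so separate connections from the same
    host may spread across nodes."""
    key = client_ip if affinity == "single" else f"{client_ip}:{client_port}"
    return nodes[zlib.crc32(key.encode()) % len(nodes)]


nodes = ["web1", "web2", "web3"]
# Two different connections from the same client host:
first = pick_node(nodes, "10.0.0.7", 50001)
second = pick_node(nodes, "10.0.0.7", 50002)  # same node as `first`
```

With `affinity="none"` the two calls could return different nodes, which is only safe when the application keeps no per-user state on individual servers.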

Selecting a Network Communication Method for NLB
Unicast:
- One NIC is dedicated to NLB communication
- Requires two NICs
- Allows segmentation of NLB communication onto a separate network
Multicast:
- Multicast is used for NLB communication
- Requires only one NIC
- All communication happens on a single network

Lesson 3: Designing Failover Clustering for High Availability
- Overview of Failover Clustering
- Failover Clustering Scenarios
- Shared Storage for Failover Clustering
- Guidelines for Designing Hardware for Failover Clustering
- Guidelines for Failover Clustering Capacity Planning
- Quorum Configuration for Failover Clustering

Overview of Failover Clustering
Failover clustering:
- Runs services on a virtual server
- Clients connect to services on the virtual server
- The virtual server can fail over from one cluster node to another
- Clients are reconnected to services on the new node
- Clients experience a short disruption of service
- Requires shared storage
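The virtual-server idea can be sketched in a few lines: clients target a stable name and IP address rather than a physical node, and on failure that identity moves to a surviving node. This is an illustrative model, not the cluster service's API:

```python
class ToyVirtualServer:
    """Clients connect to the virtual server's name, never to a
    physical node, so a node failure causes only a short disruption
    while the name moves to a surviving node."""

    def __init__(self, name, nodes):
        self.name = name
        self.nodes = list(nodes)   # possible owners, in failover order
        self.active = self.nodes[0]

    def fail_over(self):
        # The active node has failed: remove it and bring the
        # services online on the next surviving node.
        failed = self.nodes.pop(0)
        if not self.nodes:
            raise RuntimeError("no surviving nodes: service is down")
        self.active = self.nodes[0]
        return failed, self.active


vs = ToyVirtualServer("files-vs", ["node1", "node2"])
failed, now_active = vs.fail_over()  # node1 fails, node2 takes over
```

The shared-storage requirement is what the model glosses over: the new owner can only serve consistent data because both nodes see the same disks.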

Failover Clustering Scenarios
Use failover clustering when:
- High availability is required
- Scalability is not required
- The application is stateful
- The client automatically reconnects
- The application uses IP-based protocols

Shared Storage for Failover Clustering Shared serial attached SCSI (SAS) iSCSI Fibre Channel Failover clusters require shared storage to provide consistent data to a virtual server after failover

Guidelines for Designing Hardware for Failover Clustering
Some guidelines for failover clustering hardware are:
- Use a 64-bit operating system and hardware to increase memory scalability
- Use multicore processors to increase scalability
- Use the Validation tool to verify correct configuration and to ensure support from Microsoft
- Use GPT disk partitioning to increase partition sizes up to 160 TB

Guidelines for Failover Clustering Capacity Planning
- Plan failover to spread load evenly to the remaining nodes
- Ensure that nodes have sufficient capacity to support virtual servers that have failed over
- Use hardware with similar capacity for all nodes in a cluster
- Use standby servers to simplify capacity planning
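The "sufficient capacity on the survivors" guideline can be checked mechanically. The sketch below (illustrative names, and it assumes the even-spread plan from the first guideline) verifies that the failure of any single node leaves no survivor over capacity:

```python
def survives_single_failure(node_loads, capacity):
    """Return True if, for the failure of ANY one node, its load can
    be spread evenly over the remaining nodes without pushing any of
    them past `capacity` (loads and capacity in the same units, for
    example percent CPU or number of virtual servers)."""
    for failed, load in node_loads.items():
        survivors = [n for n in node_loads if n != failed]
        extra = load / len(survivors)
        if any(node_loads[n] + extra > capacity for n in survivors):
            return False
    return True


loads = {"node1": 50, "node2": 50, "node3": 50}
survives_single_failure(loads, capacity=80)  # 50 + 25 = 75, fits
survives_single_failure(loads, capacity=70)  # 75 exceeds 70, does not fit
```

Using similar hardware for all nodes, as the slide suggests, is what makes a single `capacity` figure for the whole cluster a reasonable simplification.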

Quorum Configuration for Failover Clustering
Quorum configuration: When to use
- Node Majority: there is an odd number of nodes
- Node and Disk Majority: there is an even number of nodes
- Node and File Share Majority: there is an even number of nodes and a shared disk is not required
- No Majority: Disk Only: not recommended, because the disk is a single point of failure
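All of these modes reduce to the same vote arithmetic: each node gets one vote, a disk or file-share witness (if configured) adds one more, and the cluster stays online only while a strict majority of all votes is reachable. A minimal sketch, with an illustrative function name:

```python
def has_quorum(nodes_up, nodes_total, witness_present=False, witness_up=False):
    """One vote per configured node, plus one for a disk or
    file-share witness if the quorum mode includes one; the cluster
    keeps running only with a strict majority of all votes."""
    total_votes = nodes_total + (1 if witness_present else 0)
    votes_up = nodes_up + (1 if witness_present and witness_up else 0)
    return votes_up > total_votes // 2


# Node Majority with 5 nodes: survives 2 failures, not 3.
has_quorum(3, 5)   # True
has_quorum(2, 5)   # False
# Node and Disk Majority with 4 nodes: the witness breaks the tie.
has_quorum(2, 4, witness_present=True, witness_up=True)   # True
```

This also shows why "No Majority: Disk Only" is discouraged: when the disk alone decides, losing that one resource takes the whole cluster down regardless of how many nodes are healthy.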

Lesson 4: Geographically Dispersed Failover Clusters
- Overview of Geographically Dispersed Clusters
- Data Replication for Geographically Dispersed Clusters
- Quorum Configuration for Geographically Dispersed Clusters

Overview of Geographically Dispersed Clusters
Geographically dispersed clusters:
- Are typically used as a disaster recovery hot site
- Raise specific concerns about data synchronization between locations
- Allow clustering across two subnets by using "Or" network dependencies
- Require hardware that is specifically supported for this configuration
- Require careful consideration of the quorum configuration

Data Replication for Geographically Dispersed Clusters
Asynchronous replication:
- A file change completes in the first location and is then replicated to the second location
- Faster performance
- If disk operation order is preserved, then data is preserved
Synchronous replication:
- A file change is not complete until it has been replicated to both locations
- Ensures consistent data in both locations
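The trade-off can be sketched in a few lines: synchronous writes acknowledge only after both sites hold the change, while asynchronous writes acknowledge locally and ship the change later. The lists below are illustrative stand-ins for the two storage sites:

```python
def write_sync(primary, secondary, change):
    """Synchronous: the write does not complete until BOTH locations
    have it, so the sites can never diverge (at a latency cost)."""
    primary.append(change)
    secondary.append(change)
    return "ack"


def write_async(primary, pending, change):
    """Asynchronous: acknowledge as soon as the local write lands;
    the change is queued for the second site, so a disaster in this
    window loses the tail of the change stream."""
    primary.append(change)
    pending.append(change)
    return "ack"


def replicate(pending, secondary):
    # Background shipping; order must be preserved so the secondary
    # is always a consistent, if slightly stale, point in time.
    while pending:
        secondary.append(pending.pop(0))


site_a, site_b, queue = [], [], []
write_async(site_a, queue, "change-1")   # acked before site_b has it
replicate(queue, site_b)                 # now both sites match
```

The comment in `replicate` is the slide's ordering caveat: an out-of-order apply could leave the secondary in a state the primary never passed through.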

Quorum Configuration for Geographically Dispersed Clusters
When designing automatic failover for geographically dispersed clusters:
- Use the Node Majority or Node and File Share Majority quorum configuration
- Three locations must be used to allow automatic failover of a single virtual server
- All three locations must be linked directly to each other
- One location can host only a file-share witness