1 Planning for High Availability Lesson 7

2 Backing Up Data The simplest and most common type of data availability mechanism is the disk backup, that is, a copy of a disk’s data stored on another medium. The traditional medium for network backups is magnetic tape, although other options are now becoming more prevalent, including online backups.

3 Shadow Copies Backups are primarily designed to protect against major losses, such as drive failures, computer thefts, and natural disasters. The loss of individual data files is a fairly common occurrence on most networks, typically due to accidental deletion or user mishandling. For backup administrators, the need to locate, mount, and search backup media just to restore a single file can be a regular annoyance. Windows Server 2008 includes a feature called Shadow Copies that can make recent versions of data files highly available to end-users.

4 Shadow Copies Shadow Copies is a mechanism that automatically retains copies of files on a server volume in multiple versions from specific points in time. When users accidentally overwrite or delete files, they can access the shadow copies to restore earlier versions. This feature is specifically designed to prevent administrators from having to load backup media to restore individual files for users. Shadow Copies is a file-based fault tolerance mechanism that does not provide protection against disk failures, but it does protect against the minor disasters that inconvenience users and administrators on a regular basis.
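A minimal sketch of the idea (not the actual Volume Shadow Copy Service implementation): point-in-time copies of a volume's files are retained, and a user can pull back an earlier version of a file without touching backup media. The file names, snapshot times, and function names here are hypothetical.

```python
from datetime import datetime

snapshots = {}   # snapshot timestamp -> {filename: contents}
live = {"report.txt": "v3 (accidentally overwritten)"}

def take_shadow_copy(volume, when):
    """Record a point-in-time copy of every file on the volume."""
    snapshots[when] = dict(volume)

def previous_versions(filename):
    """List earlier versions of a file, newest first."""
    return [(when, files[filename])
            for when, files in sorted(snapshots.items(), reverse=True)
            if filename in files]

take_shadow_copy({"report.txt": "v1"}, datetime(2024, 1, 1, 7, 0))
take_shadow_copy({"report.txt": "v2"}, datetime(2024, 1, 1, 12, 0))

# The user restores the most recent shadow copy instead of asking an
# administrator to locate, mount, and search backup media.
when, contents = previous_versions("report.txt")[0]
live["report.txt"] = contents
print(live["report.txt"])   # v2
```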

5 The Shadow Copies Dialog Box

6 Clients for Shadow Copy Once the server begins creating shadow copies, users can open previous versions of files on the selected volumes, either to restore those that they have accidentally deleted or overwritten, or to compare multiple versions of files as they work. To access the shadow copies stored on a server, a computer must be running the Previous Versions Client. This client is included with Windows Vista, Windows XP Service Pack 2, Windows Server 2008, and Windows Server 2003. For pre-SP2 Windows XP and Windows 2000 computers, you can download the client from the Microsoft Download Center at www.microsoft.com/downloads.

7 The Previous Versions Tab of a File’s Properties Sheet

8 Offline Files Offline Files works by copying server-based folders that users select for offline use to a workstation’s local drive. The users then work with the copies, which remain accessible whether the workstation can access the server or not. No matter what the cause, be it a drive malfunction, a server failure, or a network outage, the users can continue to access the offline files without interruption.

9 Offline Files When the workstation is able to reconnect to the server drive, a synchronization procedure replicates the files between server and workstation in whichever direction is necessary. If the user on the workstation has modified the file, the system overwrites the server copy with the workstation copy. If another user has modified the copy of the file on the server, the workstation updates its local copy. If there is a version conflict, such as when users have modified both copies of a file, the system prompts the user to specify which copy to retain.
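The three synchronization rules above can be sketched as a small decision function. This is an illustrative model only, not the actual Offline Files algorithm; the `sync` function and version labels are hypothetical.

```python
CONFLICT = "conflict"

def sync(server_version, local_version, base_version):
    """Decide which copy wins when the workstation reconnects.
    base_version is the copy both sides started from."""
    local_changed = local_version != base_version
    server_changed = server_version != base_version
    if local_changed and server_changed:
        return CONFLICT            # prompt the user to choose a copy
    if local_changed:
        return local_version       # workstation copy overwrites the server
    if server_changed:
        return server_version      # server copy refreshes the workstation
    return base_version            # nothing changed; nothing to do

print(sync("A", "B", "A"))   # B: only the offline user changed the file
print(sync("C", "A", "A"))   # C: another user changed the server copy
print(sync("C", "B", "A"))   # conflict: both copies were modified
```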

10 Offline Files To use Offline Files, the user of the client computer must first activate the feature, using one of the following procedures: – Windows XP and Windows Server 2003 — Open the Folder Options control panel, click the Offline Files tab, and select the Enable Offline Files checkbox. – Windows Vista and Windows Server 2008 — Open the Offline Files control panel and, on the General tab, click Enable Offline Files.

11 Disk Redundancy Disk redundancy is the most common type of high availability technology currently in use. Even organizations with small servers and modest budgets can benefit from redundant disks, installing two or more physical disk drives in a server and using the disk mirroring (RAID-1) and RAID-5 capabilities that are built into Windows Server 2008. For larger servers, external disk arrays and dedicated RAID hardware products can provide more scalability, better performance, and a greater degree of availability.
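A conceptual sketch of how RAID-5 survives a drive failure: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt by XORing the survivors with the parity. This models the principle only; real RAID-5 also stripes and rotates parity across the drives.

```python
from functools import reduce

def parity(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"\x01\x02", b"\x0f\xf0", b"\xaa\x55"]   # blocks on three data drives
p = parity(data)                                  # block on the parity drive

# Simulate losing the second drive and rebuilding its block
# from the surviving data blocks plus the parity block.
lost = data[1]
rebuilt = parity([data[0], data[2], p])
print(rebuilt == lost)   # True
```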

12 Data High Availability Solution How much data do you have to protect? How critical is the data to the operation of your enterprise? How long an outage can your organization comfortably endure? How much can you afford to spend?

13 Data High Availability Solution Remember that none of these high availability mechanisms are intended to be replacements for regular system backups. For document files that are less than critical, or files that see only occasional use, it might be more economical to keep some spare hard drives on hand and rely on your backups. If a failure occurs, you can replace the malfunctioning drive and restore it from your most recent backup, usually in a matter of hours. However, if access to server data is critical to your organization, the expense of a RAID solution might be seen as minimal, when compared to the lost revenue or even more serious consequences of a disk failure.

14 Planning for Application Availability High availability is not limited to data; applications, too, must be available for users to complete their work.

15 Application Resilience Application resilience refers to the ability of an application to maintain its own availability by detecting outdated, corrupted, or missing files and automatically correcting the problem.

16 Enhancing Application Availability Using Group Policy Administrators can use Group Policy to deploy application packages to computers and/or users on the network. When you assign a software package to a computer, the client installs the package automatically when the system boots. When you assign a package to a user, the client installs the application when the user logs on to the domain, or when the user invokes the software by double-clicking an associated document file.

17 Enhancing Application Availability Using Group Policy Both of these methods enforce a degree of application resilience, because even if the user manages to uninstall the application, the system will reinstall it during the next startup or domain logon. This is not a foolproof system, however. Group Policy will not recognize the absence of a single application file, as some other mechanisms do.

18 Windows Installer 4.0 The component in Windows Server 2008 that enables the system to install software packaged as files with a .msi extension. One of the advantages of deploying software in this manner is the built-in resiliency that Windows Installer provides to the applications.

19 Windows Installer 4.0 When you deploy a .msi package, either manually or using an automated solution, such as Group Policy or System Center Configuration Manager 2007, Windows Installer creates special shortcuts and file associations that function as entry points for the applications contained in the package. When a user invokes an application using one of these entry points, Windows Installer intercepts the call and verifies the application to make sure that its files are intact and all required updates are applied before executing it.
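The entry-point check described above can be modeled as verifying each installed file against a manifest and repairing from the install source before launch. This is an illustrative sketch only; Windows Installer's actual resiliency mechanism is internal, and the manifest, file names, and `launch` function here are hypothetical.

```python
import hashlib

manifest = {"app.exe": hashlib.sha256(b"binary v1").hexdigest()}
installed = {"app.exe": b"binary v1"}

def launch(name, source_files):
    """Verify the application's files against the manifest, repairing any
    missing or corrupted file from the install source, then run it."""
    for fname, expected in manifest.items():
        current = installed.get(fname)
        if current is None or hashlib.sha256(current).hexdigest() != expected:
            installed[fname] = source_files[fname]   # repair from .msi source
    return f"running {name}"

del installed["app.exe"]                 # a user deletes a required file
print(launch("app.exe", {"app.exe": b"binary v1"}))
print("app.exe" in installed)            # True: the file was reinstalled
```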

20 Server Clustering Server clustering can provide two forms of high availability on an enterprise network. In addition to providing fault tolerance in the event of a server failure, it can provide network load balancing for busy applications.

21 Failover Cluster Servers themselves can suffer failures that render them unavailable. Hard disks are not the only computer components that can fail, and one way of keeping servers available is to equip them with redundant components other than hard drives. The ultimate in fault tolerance, however, is to have entire servers that are redundant so that if anything goes wrong with one computer, another one can take its place almost immediately. In Windows Server 2008, this is known as a failover cluster.

22 Failover Cluster Requirements Duplicate servers. Shared storage. Redundant network connections.

23 Failover Cluster Requirements Duplicate operating system. Same applications. Same updates. Same Active Directory domain.

24 Failover Cluster Configuration The Failover Cluster Management console is included as a feature with Windows Server 2008. You must install it using the Add Features Wizard in Server Manager. Afterward, you can start to create a cluster by validating your hardware configuration. The Validate a Configuration Wizard performs an extensive battery of tests on the computers you select, enumerating their hardware and software resources and checking their configuration settings. – If any elements required for a cluster are incorrect or missing, the wizard lists them in a report.

25 The Failover Cluster Management Console

26 Creating a Failover Cluster After you validate your cluster configuration and correct any problems, you can create the cluster. A failover cluster is a logical entity that exists on the network, with its own name and IP address, just like a physical computer.

27 Network Load Balancing (NLB) Another type of clustering, useful when a Web server or other application becomes overwhelmed by a large volume of users, is network load balancing (NLB), in which you deploy multiple identical servers, also known as a server farm, and distribute the user traffic evenly among them.
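One simple way a load balancer can spread clients evenly across a server farm is to map each client address deterministically onto one member. This sketch illustrates the even-distribution idea only; NLB's real filtering algorithm differs in detail, and the farm names and addresses here are hypothetical.

```python
import hashlib

farm = ["web1", "web2", "web3"]   # identical servers in the farm

def pick_server(client_ip):
    """Map a client deterministically onto one of the farm's servers."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return farm[int(digest, 16) % len(farm)]

# Every client maps to exactly one server, and a large population of
# clients spreads roughly evenly across the whole farm.
counts = {}
for i in range(300):
    server = pick_server(f"10.0.0.{i}")
    counts[server] = counts.get(server, 0) + 1
print(sorted(counts))
```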

28 Creating an NLB Cluster To create and manage NLB clusters on a Windows Server 2008 computer, you must first install the Network Load Balancing feature using Server Manager. This feature also includes the Network Load Balancing Manager console. Once you create the NLB cluster itself, you can add servers to and remove them from the cluster as needed.

29 Creating an NLB Cluster The process of implementing an NLB cluster consists of the following tasks: – Creating the cluster. – Adding servers to the cluster. – Specifying a name and IP address for the cluster. – Creating port rules that specify which types of traffic the cluster should balance among the cluster servers.

30 Network Load Balancing Manager Console

31 The Network Load Balancing Manager Console with Active Cluster

32 Heartbeats The servers in an NLB cluster continually exchange status messages with each other, known as heartbeats. – The heartbeats enable the cluster to check the availability of each server.

33 Convergence When a server fails to generate five consecutive heartbeats, the cluster initiates a process called convergence, which stops it from sending clients to the missing server. When the offending server is operational again, the cluster detects the resumed heartbeats and again performs a convergence, this time to add the server back into the cluster. These convergence processes are entirely automatic, so administrators can take a server offline at any time, for maintenance or repair, without disrupting the functionality of the cluster.
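The convergence rule above can be simulated in a few lines: a server that misses five consecutive heartbeats is removed from the cluster, and when its heartbeats resume it is converged back in. This is a simplified model, not NLB's actual protocol; the class and server names are hypothetical.

```python
MISSED_LIMIT = 5   # consecutive missed heartbeats before convergence

class NLBCluster:
    def __init__(self, servers):
        self.active = set(servers)
        self.missed = {s: 0 for s in servers}

    def heartbeat_round(self, reporting):
        """Process one round of heartbeats from the reporting servers."""
        for server in self.missed:
            if server in reporting:
                self.missed[server] = 0
                self.active.add(server)           # convergence: rejoin
            else:
                self.missed[server] += 1
                if self.missed[server] >= MISSED_LIMIT:
                    self.active.discard(server)   # convergence: remove

cluster = NLBCluster(["srv1", "srv2"])
for _ in range(5):                  # srv2 goes silent for five rounds
    cluster.heartbeat_round({"srv1"})
print(sorted(cluster.active))       # ['srv1']
cluster.heartbeat_round({"srv1", "srv2"})
print(sorted(cluster.active))       # ['srv1', 'srv2']
```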

34 Load Balancing Terminal Servers Windows Server 2008 supports the use of network load balancing for terminal servers in a slightly different manner. For any organization with more than a few Terminal Services clients, multiple terminal servers are required. Network load balancing can ensure that the client sessions are distributed evenly among the servers.

35 TS Session Broker One problem inherent in the load balancing of terminal servers is that a client can disconnect from a session (without terminating it) and be assigned to a different terminal server when he or she attempts to reconnect later. To address this problem, the Terminal Services role includes the TS Session Broker role service, which maintains a database of client sessions and enables a disconnected client to reconnect to the same terminal server.
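The broker's role can be sketched as a lookup table of existing sessions consulted before any load balancing happens: a returning client is sent back to the server holding its disconnected session, while a new client is balanced onto the farm. This models the concept only; the real TS Session Broker database and its balancing policy are more elaborate, and the names here are hypothetical.

```python
sessions = {}   # username -> terminal server hosting the session

def connect(user, farm):
    """Route a client: reconnect to an existing session if one exists,
    otherwise balance the new session across the farm."""
    if user in sessions:
        return sessions[user]                      # back to the same server
    server = farm[len(sessions) % len(farm)]       # simple balancing for new sessions
    sessions[user] = server
    return server

farm = ["ts1", "ts2"]
print(connect("alice", farm))   # ts1
print(connect("bob", farm))     # ts2
print(connect("alice", farm))   # ts1: alice reconnects to her own session
```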

36 Load Balancing Terminal Servers The process of deploying Terminal Services with network load balancing consists of two parts: – Creating a terminal server farm. – Creating a network load balancing cluster.

37 The Terminal Services Configuration Console

38 Group Policy Settings for TS Session Broker

39 Using DNS Round Robin While TS Session Broker is an effective method for keeping the sessions balanced among the terminal servers, it does nothing to control which terminal server receives the initial connection requests from clients on the network. To balance the initial connection traffic among the terminal servers, you can use an NLB cluster, as described earlier in this lesson, or you can use another, simpler load balancing technique called DNS Round Robin.

40 The DNS Manager Console

41 Summary In computer networking, high availability refers to technologies that enable users to continue accessing a resource despite the occurrence of a disastrous hardware or software failure. Shadow Copies is a mechanism that automatically retains copies of files on a server volume in multiple versions from specific points in time. When users accidentally overwrite or delete files, they can access the shadow copies to restore earlier versions.

42 Summary Offline Files works by copying server-based folders that users select for offline use to a workstation’s local drive. The users then work with the copies, which remain accessible whether the workstation can access the server or not.

43 Summary When you plan for high availability, you must balance three factors: fault tolerance, performance, and expense. The more fault tolerance you require for your data, the more you must spend to achieve it, and the more likely you are to suffer degraded performance as a result of it.

44 Summary Disk mirroring is the simplest form of disk redundancy and typically does not have a negative effect on performance as long as you use a disk technology, such as SCSI (Small Computer System Interface) or serial ATA (SATA), that enables the computer to write to both disks at the same time.

45 Summary Parity-based RAID is the most commonly used high-availability solution for data storage, primarily because it is far more scalable than disk mirroring and enables you to realize more storage space from your hard disks. One way of protecting workstation applications and ensuring their continued availability is to run them using Terminal Services.

46 Summary Windows Installer 4.0 is the component in Windows Server 2008 that enables the system to install software packaged as files with a .msi extension. One of the advantages of deploying software in this manner is the built-in resiliency that Windows Installer provides to the applications.

47 Summary A failover cluster is a collection of two or more servers that perform the same role or run the same application and appear on the network as a single entity.

48 Summary The NLB cluster itself, like a failover cluster, is a logical entity with its own name and IP address. Clients connect to the cluster rather than to the individual computers, and the cluster distributes the incoming requests evenly among its component servers.

49 Summary The Terminal Services role includes the TS Session Broker role service, which maintains a database of client sessions and enables a disconnected client to reconnect to the same terminal server.

50 Summary In the DNS Round Robin technique, you create multiple resource records using the same name, with a different server IP address in each record. When clients attempt to resolve the name, the DNS server supplies them with each of the IP addresses in turn. As a result, the clients are evenly distributed among the servers.
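The rotation described above can be sketched with a queue of A records that the DNS server reorders after each response. This is a conceptual model only; the hostname and addresses are hypothetical, and real DNS servers implement round robin internally.

```python
from collections import deque

# Multiple resource records sharing one name, one per terminal server.
records = deque(["192.168.1.10", "192.168.1.11", "192.168.1.12"])

def resolve(name):
    """Return the full address list, then rotate it for the next query,
    so each client tends to pick a different server first."""
    answer = list(records)
    records.rotate(-1)
    return answer

print(resolve("ts.contoso.com")[0])   # 192.168.1.10
print(resolve("ts.contoso.com")[0])   # 192.168.1.11
print(resolve("ts.contoso.com")[0])   # 192.168.1.12
print(resolve("ts.contoso.com")[0])   # 192.168.1.10 again
```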

