Module 11: Windows 2000 Network Services Management
Overview Defining Management Strategies Identifying Management Processes Generating Information on the Status of the Services Analyzing the Collected Data Selecting Response Strategies
An essential component of a Microsoft® Windows® 2000 networking services infrastructure design is the management of the network services. An effective management plan for network services can ensure that the functionality, security, availability, and performance of the network services, and the network, continue to meet the specifications of your infrastructure design.
At the end of this module, you will be able to: Define strategies for managing the network services. Identify the processes used to execute a management plan. Select the appropriate methods to generate information about the status of the services. Select the appropriate methods to analyze collected data. Select appropriate response strategies.
Defining Management Strategies: Responding to Service Variations; Verifying Compliance with Design; Anticipating Changes to a Design (diagram: management strategies, such as Monitor All DNS Activity, feed the management plan for DNS, which defines policies, procedures, and processes)
A management plan for network services is generated from your strategies and permits detection of, and response to, changes in the network services. Your management plan defines policies, procedures, and processes that permit you to respond to, verify, and anticipate variations in the service. The highest priority in your management plan must be to detect and respond to critical events, such as service or network failures. At a lower priority, you must monitor design compliance and anticipate the need for design changes. The strategies defining your management plan can specify reactive, proactive, manual, or automatic responses to service variations.
The management plan for network services is only one component of a larger network management plan. The larger plan manages the network and applications that are supported in the organization. You must give consideration to how the services management plan integrates with any larger network management plan.
In this lesson you will learn that a services management plan includes strategies for: Responding to service variations as they occur. Verifying that current operations are compliant with the design specifications. Anticipating the need for changes to the network services design. Management strategies must include processes and procedures used to continuously acquire the current status, analyze the collected data, and specify appropriate responses.
Responding to Service Variations Services and Servers Unavailable Client Requests Not Resolved Threshold Values Exceeded Calculated Values Outside Specification
You must detect network service variations, such as the failure of a service, and respond appropriately to restore operation. Your strategies must define processes to respond to the service variations automatically, or provide notification to operations staff for manual responses.
Typically, immediate detection, notification, and responses are required when: Services or servers are unavailable. Client requests for services fail. Threshold values are exceeded. Calculated values are outside the specifications.
Wherever possible, an effective management plan defines processes to detect and respond to service variations before failure occurs. Immediate notifications of service variations are required when operations staff must make the response. If your strategies include processes to automate responses to service variations, these processes must also include notifications to operations staff of automated system responses that have occurred.
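As a sketch of this pattern, the following Python routine (all names and counter values are illustrative; nothing here is a Windows 2000 API) checks collected counter values against configured thresholds and produces notification lines that could drive either manual paging or an automated response process:

```python
# Illustrative sketch only: threshold checks over collected status data.
# Counter names and threshold values below are hypothetical examples.

def check_thresholds(samples, thresholds):
    """Return a notification line for each counter that exceeds its
    configured threshold; an automated response process could act on
    the same list, while still notifying the operations staff."""
    alerts = []
    for counter, value in samples:
        limit = thresholds.get(counter)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {counter} = {value} exceeds threshold {limit}")
    return alerts
```

Whether the resulting alerts page the operations staff directly or trigger an automated restart, the plan should record both the variation and the response taken.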
Verifying Compliance with the Design Manual Testing Scheduled Audits, Availability, and Redundancy Tests Monitoring Service Uptime Service Performance Service-to-Service Interaction
The design specifications for an existing network services infrastructure provide the baseline against which to test compliance. If the specifications are exceeded in operation, the services might no longer provide the required functionality, security, availability, or performance. The specifications in a conservative design are selected with sufficient tolerance so that exceeding them does not cause immediate failure. You can design your management plan to verify compliance with the design specifications either manually or automatically. To verify that services are operating within the required specifications, it may be necessary to analyze both threshold values and accumulated data.
Manual Testing You cannot verify some aspects of a design, such as the testing of server redundancy, by using automatically collected data. These design aspects require the definition of the appropriate manual processes and procedures to ensure that the services are compliant.
Scheduled Audits, Availability, and Redundancy Tests The security and access permissions for a service are often modified over time. You can conduct regular audits to ascertain compliance with security and access design specifications. If the service infrastructure consists of multiple servers providing redundancy and load balancing, tests will confirm compliance with the design specifications. Your compliance testing procedures can specify that servers or services be stopped to test the response of either automated or manual reconfiguration procedures.
Monitoring Include monitoring processes in your management plan to measure service uptime, service performance, and service-to-service interaction. The operations staff can use these measurements to verify compliance with the design specifications.
Service uptime To assess the availability of a service, design your strategy to measure the uptime of both individual servers and services.
Service performance The performance of a server providing a service begins to degrade as the client query rate increases. Monitoring client-to-service interaction, and processor performance, gives an indication of when the specifications are being exceeded.
Service-to-service interaction You must monitor interaction between services, such as replication between multiple WINS servers, or DNS to WINS query traffic, to ensure compliance with the specifications. Your management plan must include analysis of replication schedules, replication traffic, and service interaction traffic.
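Measured uptime reduces to simple arithmetic. The sketch below (a hypothetical helper, not a Windows 2000 tool) converts recorded downtime into the availability percentage that is compared against the design specification:

```python
# Illustrative sketch: availability derived from measured downtime.

def availability_percent(period_hours, downtime_hours):
    """Percentage of the measurement period the service was available."""
    return 100.0 * (period_hours - downtime_hours) / period_hours
```

For example, 7.2 hours of downtime in a 720-hour (30-day) month corresponds to 99 percent availability.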
Anticipating Service Infrastructure Design Changes (diagram: collected data feeds analysis, which informs the decision to change the design)
The resources required to support the services infrastructure, and the requirements of the infrastructure, can change over time. For example, a WINS database requires more disk resources as the database grows. In addition, day-to-day operations management, and automatic changes, can alter the services infrastructure enough to require design changes. Although most of these changes are minor, over time, the cumulative effect can be significant. Include in your management strategies processes to measure the change in resource needs for your services. The measurement processes require the accumulation of information about the consumption of resources over time. Operations staff uses this information to anticipate the need for changes to the network design.
For example, when you release a new application to users, it might increase the load on the DNS service. Monitoring the response of the DNS service will show a decrease in performance as the client load increases. Although the DNS service might currently comply with the design specifications, monitored data shows a trend indicating that redesign is necessary to support this new application. Note: If you must design your management plan to predict the need for future changes to the design specifications, you must include processes for trend analysis by using the monitored data.
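As an assumed illustration of such trend analysis (the function names are invented for this sketch), a least-squares fit over weekly load samples can estimate when a monitored counter will cross its design limit:

```python
# Illustrative sketch: least-squares trend over monitored samples,
# used to predict when a counter will exceed its design specification.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def weeks_until(limit, week_numbers, samples):
    """Weeks until the fitted trend reaches the limit; None if no
    upward trend exists."""
    slope, intercept = linear_fit(week_numbers, samples)
    if slope <= 0:
        return None
    return (limit - intercept) / slope
```

With weekly query-rate samples of 100, 110, 120, and 130, and a design limit of 200 queries per second, the fitted trend predicts the limit will be reached at week 10, which is the signal to begin redesign before compliance is lost.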
Identifying Management Processes: Status of the Services; Analysis of Relevant Data; Response to Service Variations (diagram: Status → Analysis → Response)
You must design processes in your management plan to provide feedback and control of the network services infrastructure. For example, you need immediate warning when a required service stops so that the appropriate action can be taken to restore operation.
Design the processes for your management plan to: Obtain the current status of the services or the services infrastructure. Analyze collected data to verify service operation and compliance with the design, and use trend analysis to predict when compliance will be compromised. Respond to service variations to bring services back into compliance. Note: Any plan to monitor and respond to service variations is often part of a larger management system, such as Microsoft Systems Management Server, or third-party management solutions.
The acquisition of status information, along with the analysis of data, can be automated or manual.
Generating Information on the Status of the Services Data Collection Strategies Tools and Utilities Performance Logs and Alerts Discussion: Acquiring Data with Logs and Alerts SNMP Event Logs Scripting and Programming Solutions Windows Management Instrumentation
Obtaining the operational status of a service requires information about individual service providers and network conditions, and verification that client requests receive appropriate responses. When the data available from a single source is not extensive enough to give a complete picture of service operation, you can use a combination of several tools and sources to derive the status.
To obtain the necessary information for assessing a service, you can use the following sources: data collection strategies; tools and utilities; Performance Logs and Alerts; Simple Network Management Protocol (SNMP); Event logs; scripting and programming solutions; and Windows Management Instrumentation (WMI).
Data Collection Strategies Data Collection Centralized Distributed Generated Events Performance Logs and Alerts Service monitors SNMP
The collection of status information for analysis is critical to the process of monitoring the network services. Monitoring individual services, and the overall network, is an intensive process that can generate large amounts of accumulated status information. Generating an event to signal a change in status can significantly reduce the amount of status information that is required. The process might still need to accumulate unprocessed information from which to derive the status of the services. This unprocessed information comes from logs, tools and utilities, or events generated by automated monitoring processes.
Data Collection In a distributed collection strategy, the data is accumulated at multiple points within the infrastructure. You can also distribute the analysis and responses, but it is likely that you will centralize the collected data to allow a single point of management, such as a help desk. If you must channel the status data to a central management point, you can accomplish this in one of two ways: in-band data collection or out-of-band data collection. Select in-band data collection when the network infrastructure is failure-tolerant or has redundant paths. Use an out-of-band strategy when the network infrastructure is not fault tolerant and network failures would prevent data collection.
Centralized Data Collection In a centralized monitoring strategy, the status data is accumulated and analyzed at a central location. This central location can be a management station or a central node within a larger management system; it is typically a host running a set of management tools and programs. Centralized data collection increases network usage, which can degrade network performance. In the case of a network failure, no status data will be available. If the centralized data collection strategy is designed to operate even when network and node failures exist, then you must plan to use out-of-band data collection. This means providing different paths for data collection. For example, you can use a series of dial-up modems, or Integrated Services Digital Network (ISDN) connections, that are not part of the normal data network for data collection.
Distributed Data Collection A distributed monitoring strategy accumulates the data on many nodes within the services infrastructure. This accumulation allows the data to be processed before being sent to a management node, thereby significantly reducing the amount of data that is processed at the management node. Collecting the status at distributed locations allows localized responses to failures. This can be important when the strategy must allow for the independent operation of locations when network failures occur.
Generated Events Event notification requires that the monitored service provide information about its current status, or that some external software is used to monitor the service for status changes. Active service monitoring can generate events, send event notification, and in some cases, automatically restart a service.
Performance Logs and Alerts events Performance Logs and Alerts can generate an event when a set threshold is exceeded, either by running an application or by writing status information to a log file. The resulting event can be sent directly to the operations staff or to an intermediary monitoring application.
Service monitor events The service monitors available for use depend on which products are installed. Service recovery and monitoring are built into the Windows 2000 operating system and are provided in products such as Microsoft Exchange Server. On detecting a failure, service monitors restart the failed services, restart the server, or run a program to send notification of failure events.
SNMP events Adding SNMP to Transmission Control Protocol/Internet Protocol (TCP/IP) allows use of SNMP Management Information Base (MIB) definitions to assess the current service operation. When SNMP is installed on a Windows 2000-based computer, SNMP traps may be generated based on the events written to the Event logs and defined in the MIB for that particular service.
Tools and Utilities (diagram: status information gathered across a WAN link and router)
You can use command-line network tools and utilities to test the status of both the services and the network infrastructure. You can use the information collected by these tools and utilities to analyze service and network operation and variation. You can use the tools and utilities interactively or store their output in files for later analysis.
You can use the following tools interactively to provide status information:
Network Monitor. A tool used to monitor the network data stream for all of the information (called frames or packets) that is transferred over a network. The Network Monitor supplied with Windows 2000 captures data sent to and from the computer on which it is running. The version of Network Monitor available with Microsoft Systems Management Server can capture all network data.
Netdiag. A utility that performs a series of tests to isolate networking and connectivity problems; it is also used to determine the functional state of your network client. Netdiag does extensive testing of the computer on which it is run, including checking the availability of WINS and DNS. Netdiag is installed with the support tools, which are available in the \Support\Tools directory of your Windows 2000 CD.
Ping. A utility used to troubleshoot IP-level connectivity. Ping allows you to specify the size of packets to use (the default is 32 bytes), how many to send, whether to record the route used, what Time to Live (TTL) value to use, and whether to set the "don't fragment" flag. Ping reports the minimum, average, and maximum round-trip time (RTT), which is useful for analyzing where routing delays occur.
Tracert. A route-tracing utility that displays a list of nearside router interfaces from the routers along the path between a source host and a destination. Tracert uses the IP TTL field in Internet Control Message Protocol (ICMP) Echo Requests and ICMP Time Exceeded messages to determine the path from a source to a destination through an IP internetwork.
Pathping. A route-tracing tool that combines the features of Ping and Tracert with additional unique information. Over a period of time, Pathping sends packets to each router on the path to a final destination, and then computes results based on the packets returned from each hop. Pathping shows the degree of packet loss at any given router or link, so you can pinpoint which routers or links might be causing network problems.
Nslookup. A utility used for troubleshooting DNS problems, such as host name resolution failure. Nslookup displays a command prompt and shows the host name and IP address of the local DNS server. You can then perform interactive queries to test DNS name resolution.
Netstat. A utility used to display protocol statistics and current TCP/IP connections. You can display the connection status and throughput statistics for TCP/IP interfaces in the computer.
Nbtstat. A utility that displays protocol statistics and current TCP/IP connections that use NetBIOS over TCP/IP (NetBT). When a network is functioning normally, NetBT resolves NetBIOS names to IP addresses.
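When tool output is stored in files for later analysis, a script can post-process it. The sketch below extracts RTT values from captured Ping output; the reply format shown is an assumed example of typical Windows Ping replies, and the parser is illustrative only:

```python
# Illustrative sketch: extracting round-trip times (in ms) from
# captured ping output for later analysis. The sample text below is
# an assumed example of the Windows ping reply format.
import re

def parse_rtt(ping_output):
    """Return RTT values in ms; a reply of 'time<1ms' is simplified
    to 1 ms here."""
    return [int(m) for m in re.findall(r"time[=<](\d+)ms", ping_output)]

sample = """Reply from 10.0.0.1: bytes=32 time=12ms TTL=128
Reply from 10.0.0.1: bytes=32 time<1ms TTL=128
Reply from 10.0.0.1: bytes=32 time=9ms TTL=128"""
```

From the parsed list, minimum, average, and maximum RTT can be computed and logged alongside other status data.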
Performance Logs and Alerts Server Performance Network Performance Infrastructure Performance Centralized Collection Distributed Collection
System Monitor, which is found within the Performance Console, allows you to obtain real-time data and collect logs. The Performance Console also includes Performance Logs and Alerts to provide logging and notification of changes in a service. It does this by setting triggers on appropriate counters. You can automate the collection process by specifying a schedule.
System Monitor and Performance Logs and Alerts support a large number of objects, and can access counters covering many aspects of an object's operation. You can select the objects and counters to suit your particular infrastructure. Objects exist for DHCP, WINS, DNS, and RAS to supply status information on these services.
System Monitor log files can be generated on individual servers, or the data can be obtained from multiple servers by a single instance of System Monitor, and written to a centralized log. To ensure the smallest file sizes, always log data by using the binary format.
You can select a strategy for data collection: Centralized, if the number of counters is low, the collection interval is long, or both. Distributed, if the number of counters is high, the collection interval is short, or both. Note: To assess the impact of distributed data collection on the network, log data to a file on one server, and calculate the number of bytes/second for the accumulated data. This value represents the number of bytes/second that traverse the network to a central instance of System Monitor.
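The note's calculation can be sketched as follows (the log size, interval, and server count are hypothetical figures):

```python
# Illustrative sketch of the note's arithmetic: estimate the network
# load imposed by centralized collection from one server's sample log.

def bytes_per_second(log_size_bytes, collection_seconds):
    """Per-server data rate observed during the sample logging run."""
    return log_size_bytes / collection_seconds

def central_load(log_size_bytes, collection_seconds, server_count):
    """Approximate aggregate bytes/second arriving at a central
    System Monitor instance from all monitored servers."""
    return bytes_per_second(log_size_bytes, collection_seconds) * server_count
```

For example, a 3.6-MB log accumulated over one hour on one server implies roughly 1,000 bytes/second per server, or 50,000 bytes/second traversing the network for 50 servers reporting centrally.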
Server Performance To derive overall usage and performance levels for servers, you must include strategies for collection of status data from all servers. Be aware that you must acquire and analyze many counters to assess in detail the operation of a single server running multiple services. The default System Overview log defines data collection on processor usage, current memory page activity, and disk queue length. This data provides a quick view of a server's resource usage level.
Network Performance If network traffic exceeds the local area network (LAN) capacity, performance will be degraded for all users and services on the network. You can use the Network Segment Broadcast counter found in Network Monitor to calculate the bandwidth used by broadcast traffic. Because each computer processes every broadcast, high broadcast levels can mean lower performance. Note: To monitor TCP/IP statistics on computers running Windows 2000, install the SNMP service. Performance Logs and Alerts access these TCP/IP statistics.
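The broadcast-bandwidth calculation described above amounts to the following (a hedged sketch; the broadcast rate, frame size, and 10-Mbps LAN capacity are example values):

```python
# Illustrative sketch: fraction of LAN capacity consumed by broadcasts.

def broadcast_share(broadcasts_per_sec, avg_frame_bytes, lan_bits_per_sec):
    """Broadcast traffic as a fraction of total LAN capacity."""
    return broadcasts_per_sec * avg_frame_bytes * 8 / lan_bits_per_sec
```

For example, 100 broadcasts per second at an average of 125 bytes per frame consumes about 1 percent of a 10-Mbps segment; because every computer must process every broadcast, the processing cost can matter before the bandwidth cost does.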
Services Infrastructure Performance You can include a range of System Monitor and Network Monitor counters collected from various points within the system. You can view the data as it is obtained, or log it to a file for future analysis. You can use the acquired data to derive information about the performance of the overall services infrastructure.
Discussion: Acquiring Data with Performance Logs and Alerts
SNMP (diagram: SNMP Agents on hosts, each with a MIB, answer queries from SNMP managers and generate traps, which accumulate at the SNMP NMS)
You can use SNMP to derive the status of and control hosts in a TCP/IP network. In a Windows 2000 network, SNMP is an optional component.
How to use SNMP If your infrastructure has devices such as routers, switches, and hubs that are already managed and configured by using SNMP, you may need to include SNMP support for services. You would do this to allow management by an existing SNMP-based network management system (NMS).
Use SNMP to: Configure remote devices and services. You can send configuration information to a host from an SNMP NMS. Monitor network and service performance. You can track network throughput and collect information about the success or failure of data transmissions. Detect faults. You can configure alarms on network devices and services when certain events occur. When an alarm is triggered, the device or service forwards an event message to the NMS.
Software components and services that support SNMP are referred to as SNMP Agents, and have a defined MIB. Reading the MIB provides status information, and writing to the MIB reconfigures elements of a component or service. Status information can be collected as interactive data from an SNMP manager, or generated by the SNMP Agent as an SNMP trap. SNMP traps are collected at an SNMP trap manager, and you can use the information to make management decisions.
SNMP supports communities, which allows you to logically group SNMP Agents and use a community name that provides a limited form of security. The SNMP service in Windows 2000 supports the Internet MIB II, LAN Manager MIB II, Host Resources MIB, and Microsoft proprietary MIBs such as those for WINS and DHCP.
The SNMP MIB
A MIB describes a set of managed objects. An SNMP management console application can manipulate the objects on a specific computer if the SNMP service has an extension agent dynamic-link library (DLL) that supports the MIB. Each managed object in a MIB has a unique identifier. The identifier includes the object's type (such as a counter) and the object's access level (such as read or read/write), which is based on the community name, size restrictions, and range information.
Event Logs
The Event Log service, which starts automatically when you start Windows 2000, maintains separate application, system, and security logs. Event logs contain post-mortem information about the operation of applications, components, and services. Event logs are useful to calculate uptime based on service stops and starts, and to analyze errors and status changes. Event Viewer allows the examination of logs for information about hardware, software, system problems, and security events. Event Viewer can export the logs as files in various formats for analysis with tools such as Microsoft Excel.
The following table shows the types of events that are entered in the event log, and the status provided by each event.

Event          Status provided
Error          Highlights problems that may cause loss of functionality, such as the failure of services.
Warning        Identifies an event that may indicate a possible future problem. For example, when disk space is low.
Information    Describes a successful operation of an application, driver, or service. For example, when you reconfigure a service.
Success Audit  Describes an audited security access attempt that succeeds. For example, when an authorized user reconfigures a service.
Failure Audit  Describes an audited security access attempt that fails. For example, when a nonauthorized user attempts reconfiguration of a service.
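Calculating downtime from exported service stop and start entries can be sketched like this (timestamps are simplified to seconds, and the event pairs are hypothetical):

```python
# Illustrative sketch: summing downtime from paired service-stop and
# service-start entries exported from an event log.

def downtime_seconds(events):
    """events: chronological (timestamp_seconds, 'stop'|'start') pairs.
    Returns total seconds between each stop and the following start."""
    total, stopped_at = 0, None
    for ts, kind in events:
        if kind == "stop":
            stopped_at = ts
        elif kind == "start" and stopped_at is not None:
            total += ts - stopped_at
            stopped_at = None
    return total
```

The resulting downtime total feeds directly into the availability figures compared against the design specification.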
Acquiring Data with Scripting and Programming Solutions (diagram: script code runs via the Cscript command-line host, the Wscript window-based host, IIS-based ASP script, or Web browser script; application code runs as an MMC snap-in DLL, a standalone application, or an ActiveX component)
Acquiring the necessary status information can require the automated accumulation of logs, or the running of command-line utilities and programs. To provide scheduled automation, a common technique is to run scripts and programs by using the AT command. Windows 2000 supports the original batch and command-line-based commands, but also provides Windows Script Host (WSH) to allow you to automate scripted tasks.
Windows Script Host Windows Script Host is suitable for non-interactive scripting needs, such as logon scripting, administrative scripting, and automation tasks. Scripts are commonly written in Microsoft Visual Basic® Scripting Edition (VBScript) or Microsoft JScript®; other languages, such as Perl, are also supported. You can run Windows Script Host from either the Windows-based host (Wscript.exe) or the command prompt-based host (Cscript.exe). Automation of scripts is typically done with AT and the command prompt-based host, because the scripts must run without a user logged on and, in this environment, will not have access to the visible desktop. You can implement script code in Web browser-based applications as client-side script, or as server-side Active Server Pages (ASP) script code by using Internet Information Services (IIS).
The scripting languages support Component Object Model (COM) automation objects, which allow the script to create instances of both product and operating system service objects, and pass data between them. The Windows Script Host Reference in the Microsoft Platform SDK, documents all elements, errors, methods, objects, and properties with which you can accomplish tasks such as: Instantiation of COM components. Printing of messages to the screen. Mapping network drives. Connecting to and controlling printers. Retrieval and modification of environment variables. Reading and modification of registry keys.
Programmed Applications You can write management applications in any suitable programming language, such as Microsoft Visual Basic or Microsoft Visual C++®, to implement structured management plans. The application can implement complete or partial solutions. You can implement applications as stand-alone executables, as COM components suitable for use as an MMC snap-in, or as ActiveX® DLLs suitable for inclusion in larger management frameworks. For example, an application may be designed to provide Simple Mail Transfer Protocol (SMTP) or Messaging Application Programming Interface (MAPI)-based mail notification for critical service events by directly accessing service status data, or by creating instances of objects that provide status data.
Windows Management Instrumentation (diagram: database applications via ODBC, script applications via WSH, and VB or C/C++ applications access the CIM Repository and CIM Object Manager over COM/DCOM; WMI providers supply registry data, registry events, Event Log, System Monitor, SNMP, Win32, and Windows Driver Model data)
You can use Windows Management Instrumentation (WMI) to acquire data on the status of services. You can acquire the service status from both local and remote Windows 2000-based computers with scripted, programmed, or database applications that access the WMI repository and providers. WMI provides a single point of integration through which you can access status information from many sources within a computer. The WMI service is started by default on Windows 2000-based computers, but must be started either manually or automatically on Microsoft Windows 95 and Microsoft Windows 98-based computers.
WMI, sometimes called Common Information Model (CIM) for Windows, is the Microsoft implementation of Web-Based Enterprise Management (WBEM) as defined by the Distributed Management Task Force (DMTF) initiative. WMI extends existing management protocols and instrumentation, such as SNMP, Desktop Management Interface (DMI), and the Common Management Information Protocol (CMIP). WMI extends the DMTF CIM to represent objects that exist in Windows management environments. Use WMI when you require scripted or programmed access to service performance counters and events, and you do not want to acquire the status information by direct intervention with the services.
WMI-based Management Applications You can develop WMI applications as standard executable files (.exe) or Microsoft Windows-based services that use information supplied by a provider or by WMI. WMI applications make requests for information and services through the WMI application programming interface (API), by instantiating COM components, or by using the WMI Open Database Connectivity (ODBC) driver access methods. For example, you can write a Web browser-based monitoring application by using the Microsoft ActiveX controls that are supplied with the WMI software development kit (SDK) to display information about one or more managed objects. An inventory application can use the WMI ODBC driver to access a database that contains management data for a local network.
You can write management applications and scripts that derive information and events from WMI to: Process events for Event logs and SNMP trap events. Access performance counters and logs. Access computer resource statistics, such as those for system memory and available hard disk space. Access application-related information, such as an inventory of current application installations on a computer. Remotely start, stop, or administer applications and services. Tip: You can obtain the programming information required to develop applications and scripts to access WMI by installing the Microsoft Windows SDK or the Microsoft SMS WMI SDK.
WMI Providers WMI providers are standard COM and Distributed COM (DCOM) servers that function as intermediaries between managed objects, and the CIM Object Manager. The providers supply data and event notifications for their specific managed objects.
The WMI providers in Windows 2000 include: System Monitor. Provides access to Windows 2000 System Monitor counter data. Registry Data. Provides access to system registry data. Registry Events. Generates events when changes occur to registry keys, values, or trees. SNMP. Provides access to events and data from SNMP devices. Windows NT Event Log. Provides access to data and event notifications from the Event Log. Win32®. Provides access to data from the Win32 subsystem. Windows Driver Model (WDM). Provides access to data and events from WDM-compliant device drivers.
To monitor servers, you must acquire status information for analysis, or set alerts to give instant notification on the monitored services. You can view data from Performance Logs and Alerts in real-time, or you can save it to disk files for later analysis.
Analyzing the Collected Data: Data Analysis; Manual Analysis; Automated Analysis (diagram: Status → Analysis → Response)
You can generate the status of the network services from real-time data, accumulated logs, and calculated result sets. Your analysis processes must use the collected status information to create the final result set derived from accumulated data. Responses to service variations will be based on this result set.
Data Analysis
There are a variety of techniques and tools that you can use to analyze the status information, such as:
Manual inspection of status. Manual processes must specify the source of data and the responses expected from manual interpretation.
Excel. This process can define an Excel spreadsheet, which interprets the status information and provides an indication of appropriate responses.
Microsoft Access or Microsoft SQL Server™. Various applications can analyze status data imported into an Access or SQL Server database.
Programmed solutions. Applications written in any suitable language can analyze status information directly.
Third-party solutions. Status information can be analyzed as part of a larger management plan provided by a third-party solution.
You can use point-in-time analysis to notify operations staff of current conditions, such as a service failure, or of conditions that are outside design specifications. You can use trend analysis to predict future needs and to notify operations staff when redesign or reconfiguration is required.
Manual Analysis Manual analysis is a point-in-time investigation of the status of a service or a network. You can use the analysis to direct a manual response by operations staff. Manual analysis is typically acceptable for capacity planning and prediction of required redesign or reconfiguration tasks. For example, the disk space used for the databases associated with WINS, DHCP, and DNS may be assessed on a weekly basis by reviewing collected logs. You can use the variations in database size over several weeks to predict when more disk capacity will be required.
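The weekly database-size review described above amounts to a simple linear trend extrapolation. The sketch below, with illustrative sample values, shows how accumulated weekly sizes can be turned into an estimate of when additional disk capacity will be required.

```python
# Sketch of trend analysis for database disk usage (sample values only):
# fit a linear growth rate to weekly size samples and project forward.
def weeks_until_full(weekly_sizes_mb, capacity_mb):
    """Estimate weeks until capacity is reached, assuming linear growth."""
    growth_per_week = (weekly_sizes_mb[-1] - weekly_sizes_mb[0]) / (len(weekly_sizes_mb) - 1)
    if growth_per_week <= 0:
        return None  # no growth observed: no exhaustion predicted
    remaining_mb = capacity_mb - weekly_sizes_mb[-1]
    return remaining_mb / growth_per_week

samples = [120, 135, 150, 165]  # four weekly WINS database sizes, in MB
print(weeks_until_full(samples, capacity_mb=300))  # → 9.0
```

A result of 9.0 weeks gives operations staff ample notice to schedule a disk upgrade, which is exactly the kind of prediction manual analysis is suited to.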
Automated Analysis Automated analysis is required as part of an automated response system that notifies operations staff or reconfigures a system by restarting services or rerouting network paths. Specify an automated data analysis in your management strategy if: Response to failure and variations must be immediate and accomplished without staff intervention. The increased development effort to automate processes is acceptable.
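An automated-response rule of the kind described above can be sketched as follows. The restart and notification actions are stand-in callables, not real Windows 2000 service-control calls; a production system would wire these to actual service-restart and paging mechanisms.

```python
# Hypothetical automated-response rule: on a failure event, attempt a
# restart without staff intervention; escalate to notification only if
# the restart action fails.
def automated_response(event, restart_action, notify_action):
    """Apply restart-then-escalate handling to a service status event."""
    if event["severity"] != "failure":
        return "ignored"
    if restart_action(event["service"]):
        return "restarted"
    notify_action(event["service"])
    return "escalated"

result = automated_response(
    {"service": "DNS", "severity": "failure"},
    restart_action=lambda svc: True,   # stand-in for a real service restart
    notify_action=lambda svc: None,    # stand-in for paging operations staff
)
print(result)  # → restarted
```

This illustrates the trade-off stated above: the rule responds immediately without staff intervention, at the cost of developing and testing the automation.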
The time taken to return a service to full operation is a function of the time taken to detect and respond to the failure, plus the time to repair. Minimizing the time taken to detect and respond to service variations, or providing automated responses to service variations, can reduce the impact of failures and variations. The services status information that is collected can contain data on service variations ranging from critically important failure events to capacity planning information with non-critical response requirements.
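The relationship above can be made concrete with sample figures (all values illustrative): if automation cuts detection and response from minutes to near-instant, the total outage shrinks even though the repair time is unchanged.

```python
# Illustrative arithmetic: total outage = detection + response + repair.
# Sample values only; automation shortens detection and response,
# not the repair itself.
def time_to_restore(detect_min, respond_min, repair_min):
    return detect_min + respond_min + repair_min

manual = time_to_restore(detect_min=30, respond_min=15, repair_min=45)
automated = time_to_restore(detect_min=1, respond_min=1, repair_min=45)
print(manual, automated)  # → 90 47
```

With these sample values, automated detection and response roughly halve the outage, which is why the management plan gives critical events the highest priority.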
Services may respond automatically to failure. For example, if a service is installed on a Windows Cluster, changeover to a second server in the cluster is automatic. In this case, there is still the need to notify the operations staff, but with a relaxed response time adequate to repair the failed server.
In this lesson you will learn about the following topics: Reactive responses Proactive responses
Responding to Analyzed Data Reactively The availability of services is lower when using a reactive management strategy because a failure or service variation must occur to trigger the response. For example, when services fail, or when end-users report poor performance (slow response time), the operations staff are notified, and they must respond to determine the cause and complete a repair.
The following event notifications would be responded to reactively because they are generated post-event: Events generated from status logs. E-mail notifications. Help desk calls. Configuration management/monitoring systems.
Use a reactive response strategy if: Some downtime can be tolerated. Redundancy is built into the service.
Responding to Analyzed Data Proactively Proactive management strategies define processes that are designed to respond to future resource usage limits or failure of resources. These processes require that the status information be acquired over time, or that active probing occur to acquire the required data before failure occurs. This proactive derivation of status information allows responses to occur before a service fails.
The following status information would be suitable for proactive responses: Performance analysis (load/monitor) Service analysis (load/monitor) Network traffic management Server and router congestion status Capacity planning trends Service workload simulation Service workload generation
Use a proactive response strategy if: Prior warning of capacity limitations and performance failures is essential. Downtime must be minimized.
Review Defining Management Strategies Identifying Management Processes Generating Information on the Status of the Services Analyzing the Collected Data Selecting Response Strategies