1
High Availability Architectures with HP Serviceguard
Guy Patrick, HP (HP Partner Technology Access Center). July 2006.
2
Agenda
High Availability Overview
High Availability Products and Solutions
Serviceguard Clustered File System
Cluster Management
Serviceguard and Workload Manager (WLM)
Serviceguard Application Integration
Serviceguard Extension for SAP (SGeSAP)
Serviceguard and Oracle 10g
Serviceguard for Linux (SG/LX)
Summary
3
Agenda: High Availability Overview
4
What are the total business consequences of an outage?
Tarnished company reputation and customer loyalty
Lost opportunities and revenue
Idle or unproductive labor
Cost of restoration
Penalties
Litigation
Loss of stock valuation
Loss of critical data
5
Causes of Service Outages Today
Technology failures: servers, disks, network (20%)
Software failures: OS, middleware, application (40%)
Human errors: operational and administrative errors, errors under duress
Ten years ago, technology (hardware) failure was the cause of 80% of service outages. This has changed significantly as hardware has become much more reliable over the years.
6
Achieving Availability
A combination of three major pillars of availability:
Technology: reliable architectural components (servers, disks, network, etc.); clustering software; middleware; data replication
People and processes: training, documentation, change control
Support services: rapid response time, rapid diagnosis and repair, availability of parts
7
High Availability versus Disaster Tolerance
High Availability: providing redundancy within a data center to maintain the service (with or without a short outage) in the face of hardware failures, software failures, and human error.
Disaster Recovery: usually a remote site with similar equipment that is shared among multiple organizations (shared equipment model); personnel fly to the site with tapes and restore the service in days or weeks.
Disaster Tolerance: providing redundancy between data centers to restore the service quickly (tens of minutes) after certain disasters (power loss; fire, flood, earthquakes; sabotage, terrorism), using dedicated equipment.
8
Agenda: High Availability Products and Solutions
9
Serviceguard for HP-UX
Single cluster of up to 16 nodes; PA-RISC (HP 9000) and Integrity servers; for use when all nodes are in a single data center
Automatic failover of up to 150 application packages (up to 900 services total); supports up to 200 relocatable package IP addresses per cluster
Cluster File System (CFS) support in SG versions >= (heartbeat must be over Ethernet)
SCSI or Fibre Channel for disks
Networking:
Single IP subnet for each heartbeat network (IPv4); multiple heartbeat networks required (2 or more)
Ethernet, InfiniBand; FDDI and Token Ring for legacy environments
Local LAN failover and Auto-Port Aggregation (APA)
IPv6 support for data links only
File systems and volume managers:
Journaled File System (JFS) and Online JFS; HFS is supported but not recommended for mission-critical applications
LVM, VxVM, and CVM; volume-manager-based mirroring is optional
Quorum device:
Required for 2-node clusters, optional for larger clusters
Cluster lock disk for up to 4 nodes only
Quorum Server for clusters of up to 16 nodes
10
Serviceguard for Linux (SG/LX)
Single cluster of up to 16 nodes: 2 to 4 nodes with ProLiant servers using SCSI; up to 16 nodes with Integrity or ProLiant servers using Fibre Channel; for use when all nodes are in a single data center
Automatic failover of up to 150 application packages (up to 900 services total); supports up to 200 relocatable package IP addresses per cluster
SCSI or Fibre Channel for disks
Networking:
Single IP subnet for each heartbeat network (IPv4); multiple heartbeat networks required (at least 2)
Ethernet, supporting up to 7 heartbeat subnets
Network bonding for automatic network failover
File systems and volume managers:
reiserfs, XFS, and ext3 (journaled file systems)
Logical Volume Manager (LVM and LVM2) as included in the Linux distribution
Red Hat Global File System (GFS)
Dynamically loadable modules for Serviceguard installation
Quorum device:
Required for 2-node clusters, optional for larger clusters
Quorum Service for clusters of up to 16 nodes
Cluster lock LUN for up to 4 nodes only
11
Multi-pathing Solutions
Multi-pathing solutions for HP-UX:
LVM/SLVM: PVLinks (active/standby)
VxVM/CVM: DMP (Dynamic Multi-Pathing, active/active)
StorageWorks disk arrays (XP and EVA): SecurePath (active/active)
EMC disk arrays (Symmetrix and DMX): PowerPath (active/active)
Multi-pathing solutions for Linux:
Fibre Channel: the HBA Fibre Channel driver; SecurePath for StorageWorks disk arrays; PowerPath for EMC disk arrays
SCSI: the Linux md driver
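The two path policies above differ in how I/O is routed: active/standby (e.g., PVLinks) sends all I/O down one primary link and switches to the alternate only on failure, while active/active (e.g., DMP, SecurePath, PowerPath) spreads I/O across every surviving path. A toy Python model of that routing decision (a sketch for illustration only, not driver code; the device-file names are made up):

```python
from itertools import cycle

class MultipathDevice:
    """Toy model of active/standby vs. active/active path selection
    (hypothetical simulation; not PVLinks/DMP driver code)."""

    def __init__(self, paths, policy="active/standby"):
        self.paths = list(paths)        # e.g. ["c4t0d1", "c6t0d1"]
        self.policy = policy
        self.failed = set()
        self._rr = cycle(self.paths)    # round-robin iterator for active/active

    def fail_path(self, path):
        self.failed.add(path)

    def next_path(self):
        live = [p for p in self.paths if p not in self.failed]
        if not live:
            raise IOError("all paths to the LUN have failed")
        if self.policy == "active/standby":
            return live[0]              # primary link until it fails, then the alternate
        # active/active: spread I/O across every surviving path
        while True:
            p = next(self._rr)
            if p not in self.failed:
                return p
```

With an active/standby device built from `["c4t0d1", "c6t0d1"]`, every `next_path()` call returns the primary until `fail_path("c4t0d1")` is recorded, after which I/O moves to the alternate; an active/active device alternates between the live paths on every call.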
12
OS differences impacting SG functionality
Extended/Campus Cluster is not supported on Linux: the md implementation currently does not meet robustness requirements.
Data-integrity protections are not as robust with Sistina LVM (Linux): manual activation of a volume used by an SG/LX package from another server can corrupt data. HP-UX volume managers support exclusive activation mode to protect against inadvertent activation from another server inside or outside the cluster.
NFS failover does not include file locks on Linux: correcting this requires Linux kernel changes. NFS v4 plans to support lock failover, and distribution vendors will need to support it. NOTE: consider CFS or GFS instead of HA NFS.
13
SG cluster (local cluster – shared connectivity)
[Diagram: four nodes (A-D) running App A-App D, all connected to shared disks and a cluster lock]
All systems are physically connected to each disk
Maximum cluster size is 16 nodes
Each application runs on only one host at a time; hosts can run multiple applications
Failover is possible to any node that is physically connected to the data
For a 2-node cluster: a Quorum Service, a highly available quorum device that is not a member of the cluster whose quorum is being satisfied
14
Failover Models
Active/Standby: one or more nodes are reserved for failover use; upon failover, the applications maintain performance due to spare capacity.
Active/Active: all nodes run (different) applications; upon failover, choose among reduced capacity when multiple applications run on the same node, shutting down less critical applications, or optionally using VSE technologies to guarantee resource entitlements.
Rotating Standby: upon failover, the standby system becomes the new production system and the repaired system becomes the new standby system.
Active/Active (distributed application): all nodes run an instance of the same application (e.g., RAC); depends on shared read/write access to the data; there is no failover of the application, and upon failure of a node (or instance) the users are sent to the remaining nodes.
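The difference between classic active/standby and rotating standby is only what happens after the failed node is repaired. A toy sketch of both behaviors (hypothetical node names; not Serviceguard code):

```python
class FailoverCluster:
    """Toy illustration of the active/standby and rotating-standby
    failover models (simplified; not Serviceguard itself)."""

    def __init__(self, production, standby):
        self.production = production    # node currently running the package
        self.standby = standby          # idle node reserved for failover

    def fail_over(self):
        # The package moves to the standby node; the old production node is down.
        self.production, self.standby = self.standby, None
        return self.production

    def node_repaired(self, node, rotating=True):
        if rotating:
            # Rotating standby: the repaired node becomes the new standby,
            # avoiding a second service interruption.
            self.standby = node
        else:
            # Classic active/standby: fail the package back to the repaired node.
            self.production, self.standby = node, self.production
```

Rotating standby avoids the second outage that a failback would cause, at the cost of the package staying on the former standby node.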
15
SG cluster – preventing split brain
[Diagram: a network partition splits the four-node cluster into two 2-node sub-clusters]
Each "sub-cluster" tries to form a cluster and run all of the applications
Two instances of the same application write to the same disks, resulting in data corruption
16
SG cluster – preventing split brain (continued)
[Diagram: the two sub-clusters race to acquire the cluster lock]
Each "sub-cluster" tries to acquire the cluster lock on the cluster lock disk/lock LUN
The algorithm guarantees that only one sub-cluster will get it
The losing sub-cluster is forced to crash to prevent data corruption
For a 2-node cluster: a Quorum Service, a highly available quorum device that is not a member of the cluster whose quorum is being satisfied
17
Quorum Service (QS) (HP-UX and Linux)
An alternative quorum arbitration method
Supports up to 50 clusters and a maximum of 100 nodes
Requires a TCP/IP network connection (the Quorum Service is not required to be in the same subnet, although that is recommended to minimize network delays)
Runs as a real-time process on stand-alone HP-UX or Linux server(s) outside of the Serviceguard cluster whose quorum is being satisfied
The Quorum Service (QS A.02.00) can be configured in a package in a cluster, but it cannot reside in the same cluster that uses it; do not configure two clusters that use the same Quorum Service package
Bonding (Linux) or APA (HP-UX) can be used to increase network availability to the Quorum Service
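Putting the last three slides together, the arbitration rule is: a sub-cluster holding a clear majority of nodes re-forms the cluster, an exact 50/50 split is settled by the race for the cluster lock (or the Quorum Service's vote), and a minority must halt. A simplified sketch of that decision (the real Serviceguard algorithm and quorum-server protocol handle more cases):

```python
def arbitrate(subcluster_size, total_nodes, lock_won):
    """Simplified quorum arbitration after a cluster partition.

    Returns 'reform' if this sub-cluster may re-form the cluster,
    or 'halt' if it must crash (TOC) to prevent split brain.
    lock_won models whether this sub-cluster acquired the cluster
    lock / quorum vote first; only consulted on an exact 50/50 split.
    """
    if 2 * subcluster_size > total_nodes:      # clear majority wins outright
        return "reform"
    if 2 * subcluster_size == total_nodes:     # 50/50 split: race for the lock
        return "reform" if lock_won else "halt"
    return "halt"                              # minority always halts

# A 4-node cluster split 2/2: only the sub-cluster that wins the
# cluster lock (or the Quorum Service's vote) survives.
winner = arbitrate(2, 4, lock_won=True)
loser = arbitrate(2, 4, lock_won=False)
assert (winner, loser) == ("reform", "halt")
```

This is why a quorum device is mandatory for 2-node clusters: any single-node failure there looks identical to a 50/50 partition, so the tie-breaker is always needed.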
18
Agenda: Serviceguard Clustered File System
19
What is the SG Cluster File System?
A true multi-reader/multi-writer cluster file system: applications on different nodes within the cluster can access the same files simultaneously. This is similar to distributed raw volumes in SG today, except with file systems. Applications are responsible for ensuring that simultaneous access does not result in application-level (logical) data corruption. CFS does provide locking functions with POSIX file-system locking semantics (lockf, similar to multiple users accessing the same file on a single system); POSIX locks are advisory only, and locks can refer to regions of a file.
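The advisory region-locking semantics described above are ordinary POSIX lockf behavior, which can be demonstrated on any single Unix system (shown here in Python with a throwaway temp file; on a CFS the same calls coordinate across cluster nodes):

```python
import fcntl
import os
import tempfile

# Scratch file standing in for a file on the cluster file system.
tf = tempfile.NamedTemporaryFile(delete=False)
tf.write(b"0" * 100)
tf.flush()

a = open(tf.name, "r+b")   # writer A
b = open(tf.name, "r+b")   # writer B

# A takes an exclusive POSIX lock on bytes 0-9 only (a region lock).
fcntl.lockf(a, fcntl.LOCK_EX, 10, 0, os.SEEK_SET)

# The lock is advisory: B can still write without calling lockf at all.
# Cooperating applications must agree to use the locks.
b.seek(50)
b.write(b"X")
b.flush()

with open(tf.name, "rb") as chk:
    data = chk.read()      # B's write landed despite A's lock

fcntl.lockf(a, fcntl.LOCK_UN, 10, 0, os.SEEK_SET)
a.close(); b.close(); os.unlink(tf.name)
```

A conflicting non-blocking lock attempt on bytes 0-9 from another process would fail while A holds the lock; within one process, POSIX locks do not conflict, which is itself a well-known sharp edge of this API.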
20
Storage Options in a Serviceguard Cluster
[Diagram: Cluster Nodes A and B sharing CFS storage (/project, /proj1, /proj2), failover VxFS file systems (/mnt1, /mnt2), raw volumes, and per-node local root disks]
The Serviceguard integration with Veritas CVM/CFS adds an option for storing and sharing application data within a Serviceguard cluster, alongside the options Serviceguard offers today. Besides exclusively shared file systems (/mnt1 and /mnt2) that can only be accessed by one node at a time, Serviceguard also provides concurrent shared access to application data stored on raw volumes. Raw volumes are commonly used with Oracle OPS/RAC implementations, in which Serviceguard with SLVM or CVM provides concurrent access to the volumes and Oracle coordinates the simultaneous access using a distributed lock manager. With the introduction of a CFS, the advantages of near-raw-volume performance are combined with the ease of management of a file system. In addition to CVM/VxVM and CFS, other VERITAS products that can enhance the performance or manageability of application data in a Serviceguard cluster will be offered; VERITAS offers these today as Storage Foundation bundles. Note that the CFS is only available for application data, not for the root file system: each cluster node retains its dedicated boot/root disk.
A raw volume can be exclusively activated and failed over, or concurrently accessed.
A cluster file system is concurrently accessed by all cluster nodes.
A failover file system is exclusively activated by one node and can transition between nodes.
21
Management GUI: VERITAS Enterprise Administrator (VEA)
VEA manages CFS mount points and VxVM and CVM volumes. In HP-UX 11i v3 (11.31), we expect that fsweb will launch VEA.
22
Adoption of Serviceguard CFS
Overall message: no forced transition; the transition can occur in the customer's timeframe.
LVM/SLVM will continue to be supported and enhanced.
We don't expect all customers or ISVs to need the CFS functionality; the benefits of a CFS vary with the application and must be evaluated case by case. There are targeted applications where we expect to see SG/CFS adoption.
ISVs will not need to re-certify with the new version of Serviceguard. ISVs may want to take advantage of the CFS, and that may require development effort for installation/configuration processes and scripts, and for application-level locking when sharing data and files.
23
HP SG/LX & Red Hat GFS Overview
HP and Red Hat announce support for highly available, manageable, and scalable Linux clusters that combine leading technologies: HP Serviceguard for Linux for high availability, and Red Hat Global File System (GFS) for a single cluster-wide file system.
Product details:
HP Serviceguard Linux & Red Hat GFS SW (T2798AA)
SGLX + GFS SW update subscription LTUs, by number of years (e.g., #003: 3-year subscription)
24
Serviceguard Long-term Strategy
Serviceguard: SG is HP's strategic clustering product for HP-UX and Linux.
Veritas Cluster Server (VCS): VCS is NOT part of any bundle offered by HP.
Volume managers: LVM will continue to be the default volume manager; LVM and SLVM will continue to be supported and enhanced; CVM is required when using the CFS; multiple volume managers can be used in the same cluster.
25
Compatibility with HP-UX Versions
Serviceguard CFS will be supported with:
The HP-UX 11i v2 September 2004 release and later
Serviceguard (Q3/2005)
VERITAS Storage Foundation 4.1 delivered by HP
Both HP 9000 and HP Integrity servers
At this time, SG/CFS will NOT be supported with:
HP-UX 11i v1
Earlier versions of HP-UX 11i v2
HP-UX 11.0 or earlier
26
Agenda: Cluster Management
27
Serviceguard Manager Features Overview
View and manage up to 50 HP-UX and Linux Serviceguard clusters and up to 100 nodes
Graphical user interface (Java-based)
Multiple subnet support
Status badges and tool tips
Property sheets
Auto-refresh (polling)
Large-scale cluster display
Cluster, node, and package administration: run and halt clusters, nodes, and packages; change package and node switching parameters; package drag and drop
Alerts panel and event browser
Extensive online help
28
Serviceguard Manager Features Overview (continued)
Configuration: cluster create/modify/delete; package create/modify/delete
Operation log (progress messages) for configuration and administration operations
Role-based access: management of cluster and package access policies; administration for non-root users
Integrated with OpenView Operations 8.0 and HP SIM 4.1
Serviceguard Manager can run as a client on HP-UX, Linux, and Windows platforms
Free!
29
Agenda: Serviceguard and Workload Manager (WLM)
30
Serviceguard and Workload Manager
Workload Manager (WLM) allows the specification of SLOs for SG packages that may not be active on the system; each SLO is conditional on which server the package is active on. When a failover or package movement occurs, WLM detects it and enforces the SLO: the package gets the priority and the resources specified. WLM automatically activates/deactivates TiCAP processors to reduce the performance impact of an application failover in active-active single- or multi-site disaster-tolerant configurations. Using WLM on a Pay-Per-Use (PPU) basis reduces the cost of active-standby single- or multi-site disaster-tolerant configurations. WLM, with its ability to automatically re-allocate system resources, lets you get the most from your systems both before and after Serviceguard moves applications.
31
Resource management of your adaptive infrastructure
[Diagram: two nPars, each divided into vPars, each vPar containing resource partitions]
When an application in a resource partition fails, SG restarts it in another resource partition (e.g., RP 1.1.2). Then:
1. WLM reallocates resources across resource partitions inside the vPar.
2. If that is not enough, WLM pulls CPUs from vPar 1.2 and/or from nPar 2.
3. If that isn't enough, WLM activates Temporary Capacity (TiCAP) processors to meet the demand.
32
Integration of Workload Manager and Serviceguard
WLM SLO condition statement: allows SLOs to be turned on/off based on time, day, date, or some event on the system.
WLM Serviceguard Toolkit (shipped with the WLM product): notifies WLM when a named Serviceguard package is activated on the system. The sgpkgactive command outputs the activation status of the named package at each WLM interval; this information is passed to WLM using the wlmrcvdc scripting toolkit, which is also shipped with the product.
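The effect of a condition statement fed by sgpkgactive can be illustrated with a small stand-in (Python for illustration only; real WLM policies are written in WLM's own configuration syntax, with wlmrcvdc delivering the package-active metric, and the share numbers below are invented):

```python
def slo_enabled(package_active, hour):
    """Hypothetical stand-in for a WLM SLO condition: enable the SLO
    only while the named SG package is active on this node and during
    business hours. Real WLM evaluates such conditions each interval."""
    return bool(package_active) and 8 <= hour < 18

def cpu_entitlement(package_active, hour, goal_shares=12, idle_shares=1):
    # When the SLO is off (package not here, or off-hours), the workload
    # group falls back to a minimal entitlement, freeing resources for
    # whatever else is running on the node.
    return goal_shares if slo_enabled(package_active, hour) else idle_shares
```

This is the mechanism behind "the package gets the priority and the resources specified" on the previous slide: the condition flips when Serviceguard moves the package, and WLM re-allocates accordingly.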
33
Agenda: Serviceguard Application Integration
34
HP Serviceguard Developer’s Toolbox
A framework to facilitate quick and painless integration of your application with HP Serviceguard (Linux and HP-UX). The toolbox (delivered as a .zip file) includes:
A standardized integration template written in POSIX shell; the template may be customized for applications managed by HP Serviceguard in either HP-UX or Linux environments
Validation guidelines and a test tool
Documentation covering template design, customization tips, best-practice guidelines for integration, and examples
To meet the growing demand for toolkits, HP has enabled ISVs to quickly develop Serviceguard integration scripts, with a validation loop back to the HP development team.
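The actual template is POSIX shell, but its shape is easy to sketch: the package control script exposes customizable start and stop actions, plus a monitor that Serviceguard runs as a package service, so that the monitor exiting triggers a restart or failover. A hypothetical Python outline of that structure (the /opt/myapp commands are placeholders, not part of any real toolkit):

```python
import subprocess
import sys
import time

class PackageTemplate:
    """Sketch of the control-script structure an integration template
    provides (the real toolbox template is POSIX shell; the command
    paths here are placeholders you would customize per application)."""

    START_CMD = ["/opt/myapp/bin/start"]    # hypothetical application commands
    STOP_CMD = ["/opt/myapp/bin/stop"]
    CHECK_CMD = ["/opt/myapp/bin/status"]

    def start(self):
        # Called when the package starts on a node; must be idempotent.
        return subprocess.call(self.START_CMD) == 0

    def stop(self):
        # Called on package halt or before failover to another node.
        return subprocess.call(self.STOP_CMD) == 0

    def monitor(self, interval=30):
        # Serviceguard restarts or fails over the package when this
        # service process exits, so exit as soon as the check fails.
        while subprocess.call(self.CHECK_CMD) == 0:
            time.sleep(interval)
        sys.exit(1)
```

The template's value is exactly this separation: the ISV fills in three application-specific actions, and Serviceguard supplies the cluster-wide placement, restart, and failover logic around them.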
35
Serviceguard (HP-UX) Toolkits
Enterprise Cluster Master Toolkit (ECMT): a fully tested and supported collection of integration templates for certain popular third-party applications; supports Oracle 10g as of December 2004
HA NFS Toolkit: pre-tested and supported templates to make NFS servers highly available
36
Serviceguard (Linux) Toolkits
Fully tested and supported integration templates: Oracle9i DBMS, Oracle10g DBMS, NFS
Contributed toolkits (pre-tested templates for popular 3rd-party applications):
File system: Samba
Database: PostgreSQL, MySQL
Other: Sendmail, Apache, Tomcat
37
Agenda: Serviceguard Extension for SAP (SGeSAP)
38
Serviceguard Extensions for SAP (SGeSAP and SGeSAP/LX)
Integrate SAP R/3 with:
Serviceguard: a toolkit template for easily configuring SAP with Serviceguard (HP-UX and Linux), with options for how to configure the Central Instance (CI) and Database (DB) servers
Metrocluster (HP-UX only): an optional template to create a disaster-tolerant architecture for SAP (a configuration example is shown in the Metrocluster section of this presentation)
39
Agenda: Serviceguard and Oracle 10g
40
Oracle 10g Support (single instance)
HP supports Oracle 10g (single-instance) configurations that use:
Serviceguard cluster membership for high-availability configurations
HP-UX supported volume managers (LVM, VxVM, and CVM/CFS) to store Oracle data
The SG package manager to provide HA for applications running on the same cluster as the Oracle 10g single instance
[Diagram: Oracle 10g with HP cluster membership and storage-management integration: the SG package manager runs Oracle 10g and SG-managed applications on top of HP file systems/volume managers, HP-UX, and the SG clustering components]
We recommend the solution shown on this slide as the best one for providing high availability for both the database and the applications. For cluster membership services for the RAC database server, Oracle allows the customer the choice of using the built-in cluster membership capability or utilizing the platform clusterware (such as SG/SGeRAC) with the Enterprise Edition; in that case, SGeRAC provides superior features such as better network monitoring, and the HP file system and volume manager are used rather than ASM. This also provides the best configuration for customers interested in a disaster-tolerant solution with Oracle 10g: with SGeRAC installed and using the HP file systems and volume managers, you could create an Extended Distance cluster for a dual-site data-center solution.
41
Oracle 10g RAC
Oracle Real Application Clusters (RAC) 10g is an option for Oracle 10g Enterprise Edition. Differences from the previous Oracle9i RAC product include:
Clusterware is built in (formerly known as Cluster Ready Services, or CRS)
Automatic Storage Management (ASM) software can be used to manage storage for the RAC database
Performance improvements
"Zero downtime" (rolling upgrades) for certain Oracle patches
NOTE: Oracle allows customers the choice of using the built-in Oracle cluster membership capability or utilizing platform-specific clusterware such as SGeRAC
42
SGeRAC with Oracle10g RAC
SGeRAC:
Increases availability for non-RAC processes and applications running on nodes within the RAC cluster; Oracle Clusterware focuses only on Oracle-specific processes and resources, and 10g R2 Clusterware is expected to provide a package manager and API for protecting non-Oracle processes
Is required when using a volume manager: greater flexibility when working with LUNs; features such as OS mirroring, striping, etc.; SLVM or CVM provides concurrent access to the same storage by multiple nodes
Provides reliable node membership information through tight kernel integration and real-time priority execution
Improves network reliability through monitoring and failover management of user LAN failover, cluster interconnect failover, and IPv6 networks
Oracle Clusterware is focused only on Oracle-specific processes and resources; any 3rd-party application running on one of the nodes in a RAC cluster and requiring high availability must be configured with SGeRAC. SGeRAC is also required for customers who choose to use the HP-UX Shared Logical Volume Manager (SLVM) or the Veritas Cluster Volume Manager (CVM); SGeRAC provides the services these volume managers need to give multiple nodes concurrent access to the same storage. Although CRS can provide its own membership services where there is no other clusterware on the system (most common on commodity OSes such as Linux), it looks for SGeRAC on HP-UX systems; if it finds SGeRAC installed, CRS links in the appropriate libraries and uses SGeRAC for node membership information. SGeRAC is tightly integrated with the kernel and runs at real-time priority to provide reliable node membership information even on very busy systems; CRS alone cannot do this, because CRS is entirely a user-space application and there is no guarantee it will get enough CPU time to keep adequate heartbeat information. SGeRAC can also improve the reliability of the network under CRS.
If a backup NIC is configured, SGeRAC can fail over the LAN and all of its associated IP addresses (including the VIP) so that client connections won't know the difference. SGeRAC can also monitor and provide increased availability for other networks on the system, including the cluster interconnect and IPv6 networks; since CRS cannot detect network failures, having SGeRAC do the monitoring and the local LAN failover eliminates any single point of failure in the network fabric. SGeRAC also integrates with a variety of other HP software products. For example, SGeRAC works closely with the HA monitors provided by EMS (Event Monitoring Service): EMS monitors resources and notifies SGeRAC when a resource reaches a critical user-defined level, and SGeRAC can then fail over a package to another node with a more appropriate level of resources. WLM works for any application and manages all of the resources available on the system; it is a general-purpose resource control system that optimizes a workload by bringing it the resources needed to meet one or more specific SLOs. Customers optimize system utilization not only for the Oracle deployment but for the entire system. The Oracle workload management provided with CRS and Services is limited to Oracle processes and to the resources Oracle is given; WLM can divide the processing power on the system between Oracle and other applications (including multiple Oracle instances), and Oracle can then determine how to distribute its allotment of processing power among its various client requests.
43
Agenda: Serviceguard for Linux (SG/LX)
44
HP Serviceguard for Linux
Offers a completely integrated high-availability clustering solution, available on a range of storage and servers, that provides efficient, continuous access to mission-critical applications, information, and services. Designed, developed, delivered, and supported by HP, bringing enterprise-strength high availability to the Linux platform on x86 and Integrity servers.
Companies are looking to Linux to help meet shrinking budgets, but don't want to sacrifice stability. HP Serviceguard for Linux leverages best-in-class HP-UX mission-critical technology to offer high availability on Linux platforms, with solid HP support behind the entire solution.
45
HP Serviceguard for Linux Configurations
Enterprise Linux distributions: whether you manage an IT environment at the department level or an entire data center, Serviceguard for Linux helps ensure that your data is highly available. From Smart Array Cluster Storage to MSA, VA, EVA, and XP, HP has a solution that meets your storage requirements; HP also supports leading 3rd-party servers and storage.
Scaling up in availability and in scalability/performance:
Infrastructure clusters: simple and affordable, powered by ProLiant servers and the Modular Smart Array 500 G2 (ProLiant only)
Departmental clusters: flexible and scalable entry-level Fibre Channel clusters powered by ProLiant and Integrity servers and Virtual Arrays (MSA1000, MSA1500cs, VA7xx0, EMC CLARiiON)
Enterprise clusters: plugged into the data-center fabric to maximize scalability and availability, with an extended-distance clustering option for disaster tolerance (EVA 3000, EVA 5000, XP48, XP128, XP512, XP1024, EMC Symmetrix)
46
Winner: Best Clustering Solution!
HP's Virtual Server Environment for Linux was named Best Clustering Solution in the LinuxWorld Product Excellence Awards program (LinuxWorld, February 2005). The awards recognize important innovations in Linux and open-source technologies. HP released and demonstrated the first version of VSE for Linux on HP Integrity Superdome; HP gWLM provides the policy engine to allocate virtual-server resources in a Linux operating system, and HP Serviceguard for Linux provides the high-availability clustering component of the solution.
Not only are we seeing acceptance from customers, we also won this award from the Linux community at LinuxWorld last year for the first version of the Virtual Server Environment, demonstrated on a Superdome server. The gWLM component of the solution required scheduling functionality that was not available until the 2.6 kernel release; once that was there, we were able to make the solution available and demonstrate it to much interest.
47
HP Serviceguard for Linux and cluster extension
Disaster recovery to protect against the risk of downtime, whether planned or unplanned
Automatic failover/failback to reduce the complexity involved in a disaster-recovery situation
Ensures the highest standards of data integrity by leveraging the inherent advantages of HP StorageWorks XP or EVA disk-array remote mirroring
World's first DR solution on Linux!
[Diagram: data centers A and B joined by IP networks and a storage network over MAN/WAN, with data mirrored between them at metropolitan-wide distances or farther]
Cluster Extension XP and EVA leverage the robust remote-mirroring capability of Continuous Access to confirm that data is available at a remote location at up to metropolitan-wide distances, ensuring business continuity without downtime or performance impact. The key word is confirm: without this confirmation, cluster failover occurs independently of the Continuous Access remote-mirroring capability, requiring a labor-intensive manual process to monitor both the cluster failover and the remote-mirroring process, and requiring the administrator to know what to do in every disaster scenario. Cluster Extension XP and EVA are the integral links that bring the capabilities of the cluster software and Continuous Access remote mirroring together into a true disaster-recovery solution.
Cluster Extension integrates seamlessly into SG for Linux and utilizes the fast-failback functionality of Continuous Access to provide automatic, fast, and efficient failover and reliable recovery. In a failover scenario, failover no longer requires splitting the mirror and is therefore done more quickly and simply: the process involves "swapping the personalities" of the primary and secondary arrays, so the secondary site becomes the new primary and is given read/write access. Upon failback, a full copy is no longer required; the personalities are simply swapped once again. All in all, fast failover/failback offers a quicker, more efficient failover and failback process.
Cluster Extension utilizes the full extent of Continuous Access remote mirroring to ensure the highest standards of data integrity. Choices include synchronous copy mode for lock-step, real-time mirroring, or asynchronous copy mode for the highest performance in remote mirroring; HP's asynchronous copy mode includes a unique sequence-time-stamping feature that preserves the ordering of I/Os, which ultimately ensures data integrity at the remote site. Continuous Access can be deployed over a wide range of network connectivity, from direct ESCON or Fibre Channel connections to wider distances through Dense Wave Division Multiplexing (DWDM) or converters.
Cluster Extension for Serviceguard on Linux was the world's first disaster-recovery solution for Linux clusters.
48
Agenda High Availability Overview
High Availability Products and Solutions Serviceguard Clustered File System Cluster Management Serviceguard and Workload Manager (WLM) Serviceguard Application Integration Serviceguard Extension for SAP (SGeSAP) Serviceguard and Oracle 10g Serviceguard for Linux (SG/LX) Summary July 2006
49
HP’s advantage HP uses its vast experience with complex systems to provide the most effective availability solutions in the industry. In summary, HP’s advantage in Linux HA over our competitors is the result of worldwide availability, the knowledge base created by working with customers’ mission-critical environments for over 10 years, solutions that span multiple operating systems, a service and support portfolio that can meet any availability requirement, and selective, strategic partnering. Global HA supplier Multi-OS solutions End-to-end service and support portfolio Strategic partnerships July 2006
50
For more information on HP’s High Availability offerings …
Reference Links: HP Serviceguard Developer's Toolbox July 2006
51
DSPP Tools & Resources for Itanium® 2 Architecture Set You Up for Success
Community: Itanium® architecture forums, source code repository, document sharing, and mailing lists Equipment rentals and purchase discounts Partner Resources News & Events Software development environments, compilers, operating systems, installation/configuration tools, performance tools and more Technical documentation: white papers, tutorials, reference documents and manuals, FAQs, known problems, sample code, etc. Training and Education: online and classroom training July 2006
52
Where to go … Software Developer Resource Kit for the Intel® Itanium® 2 microarchitecture: Development and business resources from HP & Intel for HP Integrity-based solutions: Contact points for additional information, general support, equipment, localization resources and more: Americas telephone Europe telephone Asia-Pac or go to for local country contacts July 2006
53
Complete Survey to Win HP & Intel are giving away an HP laptop
to 1(one) lucky winner!! Promotion Period ends August 20, 2006 Attend a webcast AND complete the post-event survey. Full promotion details can be found on DSPP at: July 2006
54
VTune webcast replay is now available at:
More Events Tuesday, August 22 – MPI Libraries Tuesday, September 19 – Caliper Update Tuesday, October 24 – OpenMP Sign up for the DSPP newsletter to get the latest webcast information sent to you directly. Webcast replays may also be found at: July 2006
55
Intel® Early Access Program - Technology
The Early Access Program (EAP) gives you access to Intel® technology to support your current development cycle, as well as early access to tools and information on new technologies. Your membership includes: Early access to pre-release software development platforms Access to Intel and third-party software and testing tools Training through Intel® Software College and Web events Technical content and how-to articles Protected remote access to easily and securely evaluate and develop software on platforms over the Internet July 2006
56
Intel® Early Access Program -Marketing Opportunities and Support
Extensive marketing and business development opportunities: Inclusion in online and print versions of the Intel® Developer Solutions Catalog Intel quotes to support your PR Case studies Access to Intel’s event marketing asset kit Participation in selected industry events and trade shows Support in your development efforts provided through: Access to an Intel account representative who will act as your primary contact Intel® Premier Support for confidential technical support 24/7 online support via July 2006
57
Related Intel® Resources
Intel® Early Access Program Intel® Software Network Intel® Software College Intel® Software Development Tools Experience Intel® Itanium® 2 Architecture July 2006
58
Q&A Session: To ask a question over the phone, press *1 on your touch-tone telephone. July 2006
59
Student Notes:
July 2006
60
Reference Slides
July 2006
61
Serviceguard feature differences by OS (1 of 2)
Feature | HP-UX | Linux
Cluster lock | Disk | LUN
RS-232 heartbeat | Yes | No
Exclusive activation (volume manager) | Yes | No
IPv6 support | Yes | No
Status via SNMP | Yes | No
NFS lock failover | Yes | No
Cluster lock – no dual lock on Linux (see the white paper on cluster arbitration at docs.hp.com for details). RS-232 was originally supported for low-cost two-node clusters; current cost differences are much smaller. HP-UX LVM provides exclusive activation; on Linux, processes must be defined to prevent concurrent activation until alternative solutions are available. The Serviceguard Manager GUI configuration will be available for both with the release of SG/LX A.11.16. IPv6 support will be provided on Linux when market demand warrants and available resources permit. ATS is being phased out of HP-UX, so no demand is expected on Linux. Both HP-UX and Linux are moving to WBEM instead, bypassing SNMP for Linux. NFS lock failover is a function of the Linux NFS version: NFS v2 and NFS v3 have no lock failover support (NFS v4 will provide this feature). July 2006
62
Serviceguard feature differences by OS (2 of 2)
Feature | HP-UX | Linux
Multi-path | SecurePath/AutoPath | Qlogic; SecurePath (check distribution & errata)
Mirrored boot disk | MirrorDisk/UX | Smart Array HW RAID; md SW RAID (check docs)
Maximum LUN support | 11.0 – 4,000; 11.11 – 8,000; 11.23 – 16,000 | RH 2.1 – 119; RH 3 – 256; SUSE/UL – 128
NIC support | Ethernet, TR, FDDI, ATM, X.25 | Ethernet only
Redundant networking | Local LAN switch/APA | Linux bonding
Maximum disk support is based on OS differences. The Linux recommendation is Qlogic, but SecurePath is supported for some specific errata. Need to confirm the mirrored boot disk information. Only Ethernet NICs are supported for Linux on HP servers. Linux bonding provides LAN HA – one of the bonding modes (bonding for load balancing) utilizes all of the connections in the bond, but not necessarily as effectively as APA. With the 2.6 kernel there will be more bonding options that should be allowed by SG/LX once approved by ISS; these are closer to APA and are called balance-tlb and balance-alb. July 2006
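The Linux bonding mentioned above as the redundant-networking mechanism can be sketched as follows. This is a hedged example for a Red Hat-style distribution of that era; the interface names, IP address, and mode choice are illustrative assumptions, not taken from the slides:

```
# /etc/modprobe.conf – load the bonding driver in active-backup mode
# (mode=1); miimon=100 polls link state every 100 ms.
alias bond0 bonding
options bond0 mode=1 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 – the bonded interface
# carries the package-visible IP address.
DEVICE=bond0
IPADDR=192.168.1.10        # illustrative address
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 – each physical NIC is
# enslaved to bond0 (ifcfg-eth1 is analogous).
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Active-backup mode gives failover only; the load-balancing modes noted on the slide (and, with the 2.6 kernel, balance-tlb/balance-alb) use all links in the bond at once.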
63
Serviceguard offerings differences by OS
Offering | HP-UX | Linux
DB2, Informix, Sybase (ECMT) toolkits | Yes | No
PostgreSQL, Sendmail toolkits | No | Yes
Continentalclusters | Yes | No
Metrocluster (EVA, CA/XP, EMC SRDF) | Yes | No
Extended / Campus Cluster | Yes | No
Cluster Extension XP | No | Yes
Serviceguard Extension for RAC (SGeRAC) | Yes | No
Virtual Server Environment (VSE) | Yes | Partial (gWLM available for Integrity Linux)
VERITAS VxVM/CVM support | Yes | No
Cluster File System | Yes – VERITAS CFS | Red Hat GFS supported by HP
Integrity SCSI support | Yes | No
Serviceguard Extension for Faster Failover (SGeFF) | Yes | No
Linux demand for the Informix, Sybase, DB2, and Tomcat toolkits is being monitored. Sendmail and PostgreSQL toolkits are key free applications on Linux, and not as popular or available on HP-UX. Metrocluster with Continuous Access / SRDF: CLX provides equivalent functionality on Linux. The Linux version of Oracle 9i/10g RAC includes its own clusterware; HP supports Oracle RAC on Linux in a Serviceguard cluster with applications on separate servers. VSE: gWLM will provide similar functionality to VSE in the future. VERITAS VxVM/CVM support: Linux LVM provides the features required by Serviceguard at no additional cost. Integrity SCSI support: the current Linux MD driver implementation impacts the feasibility and stability of a JBOD configuration; SCSI is provided for IA-32 with the MSA500. SGeFF: monitoring Linux demand. July 2006
64
Serviceguard CFS Bundles Overview (HP-UX)
Bundles: T2771BA – SGSM; T2772BA – SGSMP; T2773BA – SGSMO; T2774BA – SGSMOP; T2775BA – SGCFS; T2776BA – SGCFSO; T2777BA – SGCFSRAC
Bundle components: SG 11.17 + ECMT; VSF 4.1; Enterprise Tools; DB Tools; CVM/CFS; SGeRAC
Enterprise Tools: QoSS, Snapshot, Checkpoint
DB Tools (part of all bundles that have Oracle in the name): Quick I/O, ODM, VxDBA
CVM/CFS (includes all Enterprise Tools): Cluster File System
RAC
Option to all bundles: VERITAS Volume Replicator (VVR)
July 2006
65
Serviceguard CFS bundle details
Feature (with the VSF version that introduced it), by bundle – T2771BA SGSM, T2772BA SGSMP, T2773BA SGSMO, T2774BA SGSMOP, T2775BA SGCFS, T2776BA SGCFSO, T2777BA SGCFSRAC:
Dynamic Multi-Pathing 3.5
Storage Provisioning Templates 4.0
Configuration Backup & Restore 4.0
Storage Expert 4.0
Online Administration 3.5
2 TB File System 3.5
History Log 4.0
Online Intent Log Resize 4.0
Named Data Streams 4.0
Online LUN Resize 4.0
64-Bit File Systems 4.0
Portable Data Containers 4.0
Hardware-Assisted E-Copy 4.0
Multi-Volume File System 4.0
Block-Level Incremental Backup Support 3.5
The next few slides are more of a quick reference to all the features you saw. They describe the function each feature delivers and its benefit; the last column indicates the SG bundle in which you will find the feature. July 2006
66
Serviceguard CFS bundle details (cont.)
Feature (with the VSF version that introduced it), by bundle – T2771BA SGSM, T2772BA SGSMP, T2773BA SGSMO, T2774BA SGSMOP, T2775BA SGCFS, T2776BA SGCFSO, T2777BA SGCFSRAC:
NBU Advanced Client Support 3.5
Quality of Storage Service (QoSS) 4.0
Quick I/O
Oracle Disk Manager (ODM) 3.5
VxDBA GUI 3.5
Storage Mapping 3.5
Storage Rollback 3.5
Database (FlashSnap 3.5)
Storage Checkpoint (FlashSnap 3.5)
Disk Group Split and Join (FlashSnap 3.5)
Instant Volume Snapshots (FlashSnap 4.0)
Fast Mirror Resync (FlashSnap 3.5)
Checkpoint Rollback 4.0
I/O Fencing 4.0
Cluster File System
Volume Replicator 4.0* (optional with all bundles)
* VVR – not included in the Sept ’05 SG-CFS release; available in the first half of calendar year 2006
July 2006
67
Serviceguard Configuration
The application package defines:
How to start up the application
How to shut down the application
How to monitor the application
What resources are used by the application:
Nodes the package can run on
Networks required by the package
Disk volume groups required by the package
Services to monitor
User-defined resources to monitor
July 2006
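As an illustration of the package definition described above, here is a hedged sketch in the style of a legacy Serviceguard ASCII package configuration. The package name, node names, script paths, subnet, and service name are all illustrative assumptions, not values from the original slides:

```
# pkg.conf – legacy-style Serviceguard package configuration (sketch)
PACKAGE_NAME            payroll_pkg       # illustrative name
PACKAGE_TYPE            FAILOVER

# Nodes the package can run on, in order of preference
NODE_NAME               node1
NODE_NAME               node2

# Scripts that start up and shut down the application
RUN_SCRIPT              /etc/cmcluster/payroll/payroll.cntl
HALT_SCRIPT             /etc/cmcluster/payroll/payroll.cntl

# Network required by the package
SUBNET                  192.168.1.0       # illustrative subnet

# Service to monitor – Serviceguard restarts the service or fails the
# package over to another node if the monitored process dies
SERVICE_NAME            payroll_monitor
SERVICE_FAIL_FAST_ENABLED   NO
SERVICE_HALT_TIMEOUT    300
```

The companion control script (payroll.cntl in this sketch) would activate the volume groups, mount the file systems, add the relocatable IP address, and start the application.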
68
Protecting against split brain and data corruption
Serviceguard uses a “tie-breaker,” or quorum device, to prevent “split-brain” of the cluster:
Cluster lock disk (HP-UX) or cluster lock LUN (SG/LX) – a single cluster lock disk (when all servers are in a single data center) or dual cluster lock disks (when the servers are distributed across two data centers)
Quorum Server – a (small) server that is outside of the cluster
Without a tie-breaker, split-brain can occur when a network failure splits the cluster into two equal halves, or when exactly half of the servers in the cluster fail all at once. Unless split-brain is prevented, data corruption will occur if the application runs concurrently on both “halves” of the cluster and modifies the same single copy of the data. July 2006
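A hedged sketch of how the two tie-breaker choices appear in a Serviceguard cluster configuration file. The cluster name, device paths, and Quorum Server hostname are illustrative assumptions; a real cluster configures either a lock disk or a Quorum Server, not both:

```
# cluster.conf excerpt – tie-breaker configuration (sketch)
CLUSTER_NAME            ha_cluster           # illustrative name

# Option 1: HP-UX cluster lock disk – a physical volume in a volume
# group shareable by all nodes; the surviving half of a split cluster
# races to mark the lock area "taken".
FIRST_CLUSTER_LOCK_VG   /dev/vglock
NODE_NAME               node1
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c4t0d0      # illustrative device path
NODE_NAME               node2
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c4t0d0

# Option 2: Quorum Server – a small server outside the cluster that
# arbitrates instead of a lock disk.
# QS_HOST               qs-host.example.com   # illustrative hostname
```

Whichever half of the cluster wins the tie-breaker re-forms the cluster and runs the packages; the losing half halts, preventing concurrent writes to the shared data.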
69
SGeSAP Feature Comparison
Products compared: SGeSAP/UX B on PA-RISC (B7885BA); SGeSAP/UX B.03.09 on IPF (T2357BA); SGeSAP/LX A on IA-32 (T1227AA); SGeSAP/LX on IPF (T2392AA)
Supported SAP kernels: PA-RISC – 3.0x, 3.1x, 4.0x, 4.5x, 4.6x, 6.00, 6.10, 6.20, 6.30, 6.40; HP-UX IPF and Linux IA-32 – 4.6x, 6.00, 6.10, 6.20; Linux IPF – 4.6x, 6.00, 6.10, 6.20, with 6.30/6.40 in FY04
Supported database technologies: Oracle, SAPDB, Informix, UDB (DB2) on PA-RISC; Oracle and SAPDB on the other platforms
Central System setup support (dbci packages) (Q2/04 on some platforms)
DEV/QA/AppServer system shutdown on failover nodes
Mutual failover support (db, ci packages)
Application Server instance packaging (app packages)
SAP system consolidation on cluster nodes
liveCache integration (lc packages) (FY04 on some platforms)
HP Somersault support
SAP Replicated Enqueue support (with SAP kernel 6.40)
July 2006
70
Cluster Lock Disk (HP-UX Only) Cluster Lock LUN (Linux Only)
A special area on an LVM disk, located in a volume group that is shareable by all nodes in the cluster. When a node obtains the cluster lock, this area is marked so that other nodes will recognize the lock as “taken.” A cluster lock disk for HP-UX can be employed as part of a normal volume group containing user data.
Lock requirements:
Usable for clusters of 2 to 4 nodes
More than four nodes – a lock disk is not allowed; a quorum service may be used instead
Cluster lock LUN:
An alternative to the Quorum Server
Similar to the Serviceguard lock disk (on HP-UX)
Requires exclusive use of a single LUN (a Linux partition of size 100 KB)
July 2006
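As a minimal sketch, the lock LUN described above is identified per node in the SG/LX cluster configuration. The cluster name, node names, and device path below are illustrative assumptions:

```
# cluster.conf excerpt – cluster lock LUN on Serviceguard for Linux (sketch)
CLUSTER_NAME        lx_cluster       # illustrative name
NODE_NAME           node1
  CLUSTER_LOCK_LUN  /dev/sda1        # illustrative partition, reserved
NODE_NAME           node2            #   exclusively for the cluster lock
  CLUSTER_LOCK_LUN  /dev/sda1
```

Each node names the path by which it sees the same shared partition; no file system is created on it, since Serviceguard uses the raw area only for lock arbitration.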