

1 F5 Users Group, September 15th, 2010
Agenda:
–Intro & welcome to the Phoenix F5 Users Group
–Overview of Enterprise Manager and APM
–Open discussion (as selected by the audience; possible suggestions below)
 F5 high-availability configurations: options & best practices
 Management, monitoring, and logging practices
 VMware vMotion / VMware plug-in
 iHealth beta demo
 What's new in v10.2?
–Next meeting topics, and discussion of board of director positions
–iPad raffle

2 Enterprise Manager: Unified & Centralized Management
Reporting:
–Predefined reports
–User-generated reports
–Exportable (PDF, CSV)
Views:
–Node/pool member views
–Easy access for enable/disable

3 Device Inventory: fast, accurate device inventory
Over 90% of customers who have not implemented Enterprise Manager use an Excel spreadsheet for device inventory.
"It takes about 3–4 minutes per device on average every time someone needs a serial number, version, or active/standby state. Multiply that by how many devices we have in production." – Network Engineer, Fortune 50 Co.

4 Custom Views: Drag and Drop

5

6 *Concept slides, not actual screenshots

7 Problem Isolation & Troubleshooting Made Easier
–Engaging F5 Technical Support: leverage Enterprise Manager to collect all necessary files to open a support case, with key information delivered
–Configuration Comparison: compare the current configuration to an archived version to ensure unauthorized changes have not been made (no config syncs required)
–Configuration Archive & Restore: schedule nightly archives of configuration files, with the ability to restore to that configuration if needed
–Set Threshold Alerts: set thresholds on crucial metrics such as CPU utilization, bandwidth, etc., and be alerted when a threshold is crossed

8 Simplify Software Upgrades: Enterprise Manager does the work for you (v9.4.8 → v10.1 → v10.2)
Take advantage of all of the new features in BIG-IP version 10.2, including:
–Long-distance vMotion optimization
–XML content-based routing / XML switching
–TACACS+ accounting
–Improved system performance & higher availability
"Enterprise Manager helped me reduce the time it takes me to upgrade devices by two-thirds. I no longer have to babysit the process. I create a task for the upgrade and move on to something else." – BIG-IP Administrator, Fortune 50 Co.

9 Arizona: 9.x End of Software Support begins 3/2011

10 ASM Policy Management: a one-stop policy management solution
–Enterprise Manager can manage the change and deployment of ASM security policies as well as update Live Signatures
–A single place for administrators to make security policy changes; no longer any need to log into each individual BIG-IP ASM to update the security policy
–Ensures consistency of security policy across the ASM deployment
–Live Signatures are updated by Enterprise Manager
(Diagram: a BIG-IP ASM administrator uses Enterprise Manager 2.0 to push BIG-IP ASM security policy and Live Signatures to LTM/ASM pairs serving Web, SharePoint, and Accounting applications in the Phoenix and New York data centers.)

11 Reduce workload by 50%
Staged changesets for single devices and templates for multiple devices, for provisioning and configuration management:
–Renew SSL certificates
–Create new deployments
–Disable/enable pool members
"By creating templates and allowing my operations group to create changesets based on them, I have reduced my workload by 50%." – BIG-IP LTM Administrator, Fortune 500 Financial Co.

12 Predefined Task Wizards
–Automate common management tasks
–Manage user accounts across your network

13 Node Enable/Disable: A Simplified Network
You now have the ability to use EM to enable, disable, and force offline pools and pool members (Pool A, Pool B, Pool C, Pool D).

14 Create tasks to correspond with your maintenance activity

15 Operators can then match against your change ID and EM job ID to perform the task and mark it as finished

16 Configuration Management and Performance Monitoring: Thresholds and Alerts
–Granular control to set alerts
–Multiple methods to deliver alerts: email, SNMP, and syslog
–Track file system capacity and usage
–Dynamically illustrate the system impact of the monitoring configuration

17 Granularity in the Collection UI: device and traffic statistics
"The Performance Monitoring Module gives me the data I need for not only the F5 LTM devices but also configuration objects such as virtual servers and nodes. The granularity of this visibility is what I value most." – BIG-IP LTM Administrator, Fortune 500 Financial Co.

18 Pricing and Packaging
–Enterprise Manager 4000: 8 devices managed, F5-EM-4000-R, list price $12,995 USD
–Add-on device packs: 20 devices, F5-ADD-EM-20, list price $8,995 USD
–Max device license: F5-ADD-EM-MAX, list price $36,000 USD

19 BIG-IP Access Policy Manager

20 Authentication Alternatives Today
1. Code in the app (users → web servers: App 1, App 2, … App n → directory)
Drawbacks: costly, difficult to change; not repeatable; decentralized; less secure.

21 Authentication Alternatives Today
2. Agents on servers (WAM = Web Access Management): users → web servers with WAM agents (App 1, App 2, … App n) → WAM Policy Manager → directory
Drawbacks: difficult to administer; interoperability issues; decentralized; less secure.

22 Authentication Alternatives Today
3. Specialized access proxies (WAM = Web Access Management): users → WAM proxy → web servers (App 1, App 2, … App n), with WAM Policy Manager and directory
Drawbacks: don't scale as well; often inferior reliability; big CAPEX & OPEX.

23 A Better Alternative: BIG-IP APM and WAM (LTM = Local Traffic Manager)
–Replace the proxy with BIG-IP Access Policy Manager (APM)
–Gain superior scalability and high availability
–Benefit from F5's unified application delivery services
(Diagram: users → redundant BIG-IP LTM + APM pair → web servers (App 1, App 2, … App n), with WAM Policy Manager and directory.)

24 Richer Application Delivery (LTM = Local Traffic Manager, ASM = Application Security Manager, WA = WebAccelerator)
Additional BIG-IP benefits:
–Endpoint security checks (ASM or WA + endpoint inspection)
–Virtualization: scaling and high availability for the application and the OAM directory (HA, LB for directories)
–Web application security
–Web application acceleration
–Enterprise-class architecture

25 Step by Step (client ↔ BIG-IP Local Traffic Manager + Access Policy Manager ↔ Oracle Access Manager ↔ Oracle app servers)
1. User request to the F5 virtual server
2. F5 queries WAM for the auth scheme
3. WAM responds with the auth scheme
4. F5 prompts for user credentials
5. User responds with credentials (user ID, password)
6. F5 proxies the credentials to WAM
7. WAM replies with authorization
8. F5 responds to the user with an SSO cookie

26 Access Policy Design

27 Advanced authentication and access control: web-based applications with dynamic ACL control
1. HTTP traffic for the public, with no access control (www.example.com: LTM for public HTTP traffic)
2. HTTP traffic for visitors/guests; the access profile manages access (news.example.com: LTM + APM for access control)
3. HTTPS traffic for subscribers; the access profile provides the login page and authentication

28 Customized User Interface: updated end-user interface with full customization
–Stylesheet (CSS)-based customization eliminates the need to customize each page individually
–Form location (left, center, right)
–Font styles/sizes
–Header and footer

29 Easy Access Policy Deployment Wizards
–Deployment-specific wizards for Web Access Management for LTM virtuals, Network Access, and Web Applications Access
–Step-by-step configuration, context-sensitive help, review and summary
–Creates a base set of objects and an access policy for common deployments
–Automatically branches to necessary configuration (e.g., DNS)

30 Reporting and Statistics
–Native BIG-IP TM stats and RRD integration
–Dashboard integration for real-time monitoring
–New Reports section covering active and expired user sessions
–Easy navigation/viewing of user session variables

31 Dashboard Executive Summary

32 Secure Environment: Authenticating ActiveSync Devices
–Reduce authentication infrastructure and sync with Exchange
–One location for the namespace URL
–Scale and support a growing mobile user base
–Secure environment
(Diagram: BIG-IP® LTM + APM as the auth gateway in the DMZ, in front of MS Exchange in the data center.)

33 Only ADC with Geolocation Access Rules: VPE geolocation rules
–iRules not required
–Custom session variables
–Custom notification messages
–Logging of client locations
–Reporting

34 Authentication Sources
–RADIUS server
–LDAP server
–Microsoft Active Directory
–HTTP authentication
–RSA SecurID over RADIUS
–RSA native SecurID
–Oracle Access Manager
–Tivoli Access Manager

35 Modules on LTM
BIG-IP APM: web access management
–Dynamic per-session L4–L7 ACLs at speeds up to 12 Gbps
–Up to 600 logins per second
–Supports up to 60,000 users
Platform availability (across the 1600, 3600, 3900, 6900, 6900 FIPS, 8900/8950, and 11050 platforms): LTM + APM on six of the seven platforms; LTM + APM + ASM on five; LTM + APM + WA on five; LTM + ACA on all seven.

36 F5 Management Plug-In ™ for VMware ® vSphere ™ Overview

37 Overview
–Free software plug-in for VMware vSphere
–Attaches to vCenter Server and modifies the vSphere Client GUI
–Operates with both physical and virtual LTM editions
–Streamlines the administrative steps of adding VM nodes to load-balancing pools
–Automates actions based on pre-defined policies
–Reduces risk of error and reduces manual effort
–Officially supported by F5 (in its unmodified state)

38 vSphere Client GUI

39 Plug-In Home Screen

40 Architecture (diagram: vSphere Client, BIG-IP management console, BIG-IP Local Traffic Manager, and the plug-in running on Linux alongside vCenter Server)

41 Synchronization between vSphere and BIG-IP
The plug-in synchronizes vCenter and BIG-IP LTM:
–BIG-IP LTM traffic management policies are applied to pools of VMs
–VMs are automatically associated with pools by the plug-in
–VM pools are defined using flexible filters, including regular expressions on VM names, IP netmask or range, and other custom VM attributes
–Changes from new VMs are reflected in LTM automatically (provisioned, enabled)
Benefits: fewer configuration steps; VM changes are automatically accounted for in LTM; reduced complexity; reduced risk of error.
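The flexible filters described above can be sketched in a few lines. This is an illustrative stand-in, not the plug-in's actual code: the filter fields, pool names, and the `match_pool` helper are hypothetical, showing only how a name regex and an IP netmask might jointly select a pool for a VM.

```python
import ipaddress
import re
from typing import Optional

# Hypothetical filter rules: a VM-name regex and an IP netmask,
# each mapped to a BIG-IP pool name.
POOL_FILTERS = [
    {"pool": "web_pool", "name_regex": r"^web-\d+$", "netmask": "10.0.1.0/24"},
    {"pool": "app_pool", "name_regex": r"^app-", "netmask": "10.0.2.0/24"},
]

def match_pool(vm_name: str, vm_ip: str) -> Optional[str]:
    """Return the first pool whose name regex AND netmask both match the VM."""
    for f in POOL_FILTERS:
        if re.search(f["name_regex"], vm_name) and \
           ipaddress.ip_address(vm_ip) in ipaddress.ip_network(f["netmask"]):
            return f["pool"]
    return None  # no filter matched: the VM stays unassigned
```

A VM named `web-01` at `10.0.1.5` would land in `web_pool`; one matching neither filter would be left for manual review, echoing the automated-vs-manual approval options later in the deck.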

42 VM Filter Matching Options

43 Streamlining Routine Admin Tasks
Right-click on a VM for additional options:
1. Gracefully shut down a VM
2. Put a VM into maintenance mode
This prevents unexpected disruption of users' active sessions. Steps:
1. Stop sending new connections to the specific VM.
2. When the number of existing connections drops below threshold X, or when the timeout threshold is reached, shut down or suspend the VM.
3. If/when the VM comes back online, reassign it to the correct traffic management pool.
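The drain-then-shutdown steps above can be sketched as a small polling loop. Everything here is hypothetical; the plug-in performs the equivalent through BIG-IP and vCenter APIs, and `get_connection_count` and `shutdown` stand in for those calls.

```python
import time

def graceful_shutdown(get_connection_count, shutdown,
                      threshold=5, timeout=300, poll=1.0):
    """Drain a VM: wait until its active connection count drops below
    `threshold`, or until `timeout` seconds elapse, then call `shutdown()`.
    New connections are assumed to have already been stopped (step 1)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_connection_count() < threshold:
            break  # drained below threshold: safe to stop the VM
        time.sleep(poll)
    shutdown()  # either drained or timed out: shut down / suspend
```

The timeout branch matters: a long-lived session could otherwise block the maintenance window indefinitely.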

44 Right-Click VM: Graceful VM Shutdown Option

45 Right-Click VM: Disable VM via BIG-IP

46 Automated vs. Manual Approval
–Option 1: Automatically apply changes in VM status to BIG-IP (e.g., if you see any VM with the term "IIS" in the name, apply it to the web server pool)
–Option 2: Queue changes for later manual review and approval

47 Automatic vs. Manual Approval

48 Approving Pending Changes

49 Plug-In Log Reports

50 Technical Details
–Runs on the VMware vMA (Virtual Management Assistant): Linux pre-loaded with the VMware APIs
–Code is open source, released on DevCentral
–Written in Perl; the framework is easy to extend and enhance
–Partners and customers are encouraged to collaborate and extend the plug-in
–F5 provides formal technical support for unmodified installations
–F5 Professional Services are available for custom enhancements

51 Plug-In Configuration Wizard

52 Assigning a Pool

53 Defining Shutdown Thresholds

54 Links to More Information
Deployment Guide: http://www.f5.com/pdf/deployment-guides/f5-management-plug-in-vsphere-dg.pdf
Solution Overview: http://www.f5.com/pdf/solution-center/f5-vmware-vsphere.pdf
Download: http://devcentral.f5.com/Community/GroupDetails/tabid/1082223/asg/2002/Default.aspx

55 F5 Management Pack for SCOM

56 iControl: Open SOAP/XML API
Opportunities to monitor and manage your F5 investment with iControl for BIG-IP:
–Management capabilities
–Full monitoring
–Community-driven support through DevCentral
–PowerShell enabled

57 System Center Integration
F5 PRO-enabled Management Pack for Virtual Machine Manager:
–Network monitoring
–Instruct BIG-IP
–PRO-enabled reports
–Host-level performance

58 System Center Integration (cont.)
PRO-enabled Management Pack for Operations Manager & VMM. Key features:
–Discovery of F5 devices (100% of objects)
–250+ health metrics available
–Thresholds and alerts, iRule-triggered events
–Node enable and disable
–Failover, configuration sync, maintenance mode
–Live Migration, Quick Migration
–Intelligent server-capacity-based load balancing
–Historical data, flexible reporting

59 F5 Management Pack on DevCentral: Core Pack, PRO Pack, Migration Pack, Application Designers (IIS, SharePoint)

60 Network diagram for BIG-IP: discovered F5 BIG-IP LTM and all network objects

61 Visualize network issues
–Two web servers go down
–Traffic continues to flow
–The connection count on the remaining web server increases sharply

62 Drill into infrastructure errors: view error details for the entire pool

63 View the impact of the issue

64 BIG-IP App Configuration
–Alert View: shows all network configuration changes that need to be made for a functioning app
–Task View: automatically configures the BIG-IP (create the virtual server, set the profile, add pool members)

65 F5-Generated Distributed App (IIS)

66 F5-authored mapping scripts discover the application node (IIS) and configure SCOM to properly identify, diagram, and connect to SCOM objects for monitoring and management tasks. Application VMs are auto-configured using BIG-IP application templates and support maintenance mode from within the SCOM console.

67 Dynamic Datacenter Toolkit (DDTK)
–On-demand provisioning of virtual machines & BIG-IP
–Web-based provisioning toolkit
–As customers or business units leverage DDTK to request virtual resources, the toolkit provisions and configures the BIG-IP as well
–PowerShell driven

68 68 Long Distance VMotion Detailed Review

69 Escaping Boundaries Between DCs: new use cases for well-established functionality
Use cases: migration, disaster avoidance, capacity expansion.
Key technical problems solved:
–Performance problems caused by latency or bandwidth
–Network retransmission of client traffic from site 1 to site 2
–Loss of app sessions when migrating to another location

70 How it works: the fundamental steps
1. Storage VMotion to Site 2
2. VMotion to Site 2
3. LTM routes incoming connections for existing sessions to the Site 2 VM
4. GTM routes new connections to Site 2
5. Register the host and VM in vCenter at Site 2 (optional)

71 Logical representation, not physical (diagram: vCenter Server at each site, connected across the Internet by an EtherIP tunnel)

72 Acceleration & Encryption: F5 testing results for common bandwidth/latency combinations
iSessions™ / WAN Optimization Module™, with SSL encryption. Acceleration: TCP optimization, deduplication, compression. Able to successfully VMotion in conditions where it previously failed.

Bandwidth (Mbps)  Link Latency (RTT ms)  Link Packet Loss (%)  Avg Time without WOM  Avg Time with WOM  Acceleration Factor
45 (T3)           100                    0                     13:43                 3:35               3.8X
100               25                     0                     6:10                  1:18               4.7X
155 (OC3)         100                    0                     13:25                 3:29               3.9X
622 (OC12)        40                     0                     5:57                  1:57               3.1X
1000 (Ethernet)   20                     0                     2:38                  0:38               3.5X
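The acceleration factor in the table is the unoptimized migration time divided by the optimized time. A minimal sketch for checking a row (the helper names are ours, not F5's; the printed factors match this calculation for most of the rows):

```python
def parse_mmss(t: str) -> int:
    """Convert an 'MM:SS' duration string to total seconds."""
    m, s = t.split(":")
    return int(m) * 60 + int(s)

def acceleration_factor(without_wom: str, with_wom: str) -> float:
    """Speed-up from WAN optimization: unoptimized time / optimized time."""
    return round(parse_mmss(without_wom) / parse_mmss(with_wom), 1)
```

For the T3 row, 13:43 divided by 3:35 gives roughly 3.8, matching the table.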

73 Initial Environment (diagram: BIG-IP Global Traffic Manager in front of BIG-IP Local Traffic Managers at the vCenter A and vCenter B sites)

74 Step 1: F5 BIG-IP Local Traffic Manager opens a WAN optimization tunnel (compressed, de-duplicated, encrypted) between the vCenter A and vCenter B sites

75 Step 2: Storage VMotion executed across the WAN-optimized tunnel. This step can be avoided if storage is already being synchronously replicated between sites.

76 Step 2 (pending app VMotion): transactions rely on the VM in Site A but storage in Site B; vCenter A is still managing the VM

77 Step 3: Application VMotion executed over the WAN-optimized tunnel

78 Step 4: GTM health checks register the move and cut over to Site B

79 F5 BIG-IP Global Traffic Manager routes all NEW application connections/sessions directly to Site B

80 F5 BIG-IP Local Traffic Manager in Site A retransmits incoming connections for EXISTING sessions to Site B until clients register the DNS change

81 Eventually, ALL connections go directly to Site B: successful application migration complete. The process can be reversed when necessary.

82 Option: have the original IP space (Site A) reclaimed and re-used for other applications

83 WAN Optimization Module™: acceleration of VMotion and Storage VMotion
F5 tested many different bandwidth/latency combinations. Base scenario:
–1 GB virtual machine, Windows & Linux servers
–Source host CPU 100% utilized
–10 individual test runs averaged for each scenario
–First pass only (deduplication)
–SSL encryption
–Acceleration: TCP optimization, byte-level deduplication, dynamic compression

84 Requirements
Duplicated in the primary & secondary sites:
–F5 BIG-IP Local Traffic Manager
–F5 BIG-IP Global Traffic Manager
–F5 BIG-IP WAN Optimization Module
–An LTM iRule that collects any TCP connections arriving at the primary site after the VM has been migrated and forwards them to the secondary site
–vSphere, VMotion, Storage VMotion
–Shared storage, mounted via iSCSI or NFS, that both ESX servers can mount
Additional requirements:
–TCP ports 8000 (VMotion) & 443 (LTM) must be open
–Guest IP & network config (e.g., port groups) on the hosts involved in the migration must be identical
–For VMotion, VMware officially supports 622 Mbps or higher WAN links (the type of WAN is irrelevant)

85 Online Follow-Up Resources: Long Distance VMotion Solution
Overall F5/VMware Solution Guide: http://www.f5.com/pdf/solution-center/f5-for-virtualized-it-environments.pdf
Online Demo: http://devcentral.f5.com/weblogs/nojan/archive/2010/02/02/introducing-long-distance-vmotion-with-vmware.aspx
Deployment Guide: http://www.f5.com/pdf/deployment-guides/vmware-vmotion-dg.pdf
Whitepaper: http://www.f5.com/pdf/white-papers/cloud-vmotion-f5-wp.pdf

86 Discussion Topic: how are you monitoring your F5 infrastructure?
–SNMP
–Syslog
–High-speed logging via iRules
Get more out of your monitoring by reviewing the alerts available from alertd in /etc/alertd, and by sending emails from alertd.
AskF5.com solution SOL3667: Configuring alerts to send email notifications

87 Syslog Error Messages

88 Grep for the error
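The screenshot this slide refers to shows filtering the LTM syslog for error-level entries; on the box itself that would be something like `grep -Ew 'err|crit' /var/log/ltm`. The same filter can be modeled in a few lines of Python; the sample log entries and message IDs below are made up for illustration, not real BIG-IP output.

```python
import re

# Hypothetical sample of /var/log/ltm entries (the standard LTM log file).
SAMPLE_LOG = """\
Sep 15 10:01:02 bigip1 notice mcpd[1234]: 01070727:5: Pool member 10.0.1.5:80 monitor status up.
Sep 15 10:02:03 bigip1 err bcm56xxd[5678]: 012c0010:3: Link down on interface 1.1.
Sep 15 10:03:04 bigip1 crit tmm[9012]: 01010020:2: MCP Connection aborted.
"""

def error_lines(log_text: str) -> list:
    """Return lines whose syslog severity field is err or crit,
    matching whole words only (the equivalent of grep -Ew 'err|crit')."""
    pattern = re.compile(r"\b(err|crit)\b")
    return [line for line in log_text.splitlines() if pattern.search(line)]
```

The word-boundary match matters: without it, unrelated words containing "err" would produce false positives.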

89 BIG-IP LTM HA Groups

90 What is the new HA-group feature?
–Allows creation of an 'HA-group' object shared between units; any pools and/or trunks can be associated with the HA-group
–(tmsh also includes 'clusters', which is for future use, tied to potential multi-cluster support in chassis systems)
–A score is assigned to the HA-group based on the health of its pools and trunks
–The active unit is the one with the highest HA-group score
–Only one HA-group is configurable in v10.1
–The HA-group cannot be used in active-active mode

91 What is the new HA-group feature? (cont.)
Phase 2 (a later release) will allow multiple HA-groups:
–Will allow services (virtuals) to also be associated with an HA-group; HA-groups become comparable to VRRP VRIDs
–Will provide a 'smart' active-active mode, failing over a group of virtuals depending on the score of its HA-group compared to the peer HA-group on the peer BIG-IP: why fail over the whole unit when only one pool or trunk, affecting just a few virtuals, has a problem?

92 HA-group relative health
The HA-group score depends on a weighting method to determine failover state: give a weight to each pool/trunk to influence its importance in the overall HA-group score.
Previously the mechanisms were binary: if anything is 'broken', fail over.
–The standby unit goes active if it loses sight of its peer (failover cable down; no network failover packets being received)
–The active unit voluntarily fails over if the HA table says to do so (gateway or VLAN failsafe fires; daemon heartbeat fails)
This system-level failover is required and remains unchanged; the HA-group is an additional entry in the HA table.

93 Setting up weighting
–The 'attribute' is auto-assigned to 'percent-up-members' (later releases may define other attributes)
–Along with 'weight', this defines the contribution of the pool/trunk to the HA-group score
Example: assign a pool a weight of 20 and a trunk a weight of 32.
–The pool has 4 members defined, and monitors have marked 2 members up and 2 down. Pool contribution: 50% (percent-up-members) of 20 (weight) = 10.
–The trunk has 8 members defined, of which 2 are up. Trunk contribution: 25% of 32 = 8.
–HA-group score for this unit: 10 + 8 = 18.
An optional 'threshold' parameter sets a floor below which the pool/trunk does not contribute: in the example, with a threshold of 3 or above on the pool, the HA-group score would be 8 (the pool contribution is ignored).

94 Setting up weighting (cont.)
A bonus parameter applies only to the active unit: 'active-bonus' is added to the pool/trunk contributions, but only while the unit is active. This stops flapping when small changes occur, since the loss of a pool member may be a regular occurrence in a large data center.
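The scoring rules on slides 93 and 94 (percent-up-members times weight, an optional threshold floor, and an active bonus) can be sketched as follows. The dictionary layout and function name are hypothetical, but the arithmetic reproduces the slide 93 example.

```python
def ha_group_score(objects, is_active=False, active_bonus=0):
    """Sum each pool/trunk contribution: (members_up / members_total) * weight.
    Objects whose up-member count falls below their threshold contribute
    nothing; the active unit additionally receives active_bonus, which
    damps flapping on small membership changes."""
    score = 0.0
    for obj in objects:
        if obj["members_up"] < obj.get("threshold", 0):
            continue  # below threshold: ignore this object's contribution
        score += (obj["members_up"] / obj["members_total"]) * obj["weight"]
    return score + (active_bonus if is_active else 0)

# Slide 93's example: a pool with weight 20 (2 of 4 members up)
# and a trunk with weight 32 (2 of 8 members up).
objects = [
    {"members_up": 2, "members_total": 4, "weight": 20},
    {"members_up": 2, "members_total": 8, "weight": 32},
]
```

With the slide's figures this yields 10 + 8 = 18; raising the pool's threshold to 3 drops its contribution, leaving 8, and an active bonus is simply added on top for the unit currently active.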

95 tmsh configuration
Example of creating the HA-group in tmsh:
create / sys ha-group myHAgroup pools add { mypool { attribute percent-up-members threshold 2 weight 20 }}
('attribute' can be omitted since it is assigned by default.)

96 TMUI configuration

97 TMSH status

98 HA-group details
–The HA-group is stored in the base configuration: it is not config-synced
–In 10.1, it is not necessary for the HA-group names to match between redundant peers; however, this is recommended, since matching names will probably be required when upgrading to a release supporting multiple HA-groups
–The HA-group has an associated entry in the HA table
–HA-group scores are exchanged over new fields in the network failover packets (port 1026), so HA-groups depend on network failover having been configured

99 Sub-second failover
Various changes were made to reduce the time to detect failure, optimized around the HA-group feature (for example, the bcm56xxd software link-status poll time was reduced to 200 ms).
–Expect sub-second failover only when using an HA-group with trunks configured, since pool member monitors first need to time out (an inband monitor is a possible exception)
–Detection of general failure of the active unit via network failover is still dependent on the BigDB key Failover.NetTimeoutSec (default = 3 seconds)
–Although a trunk score changes quickly, unless the unit is capable of sending network failover packets, the peer can only await the network failover timeout
–Non-chassis systems can use serial failover for faster device-failure detection (provided that network failover is not configured)

100 Best Practices
–Use the same HA-group name on both systems
–Typically add all trunks and pools; for large configurations, a limited sample of pools should be used
–Choose weights, percentages, and active bonus carefully, using 'what if' scenarios to predict the resulting state
–Consider creating trunks even for single-interface connections; this allows HA-group-triggered failover upon interface down
–If using VLAN failsafe, leave it enabled, since it monitors at the packet level and protects against connectivity issues beyond the local switch link

101 Other HA changes
Interface failsafe now tests not only whether TMM instances are alive, but also whether each has access to the switch fabric.

102 Summary
–The HA-group provides failover state based on the relative health of selected pools and trunks
–Sub-second failover is now possible, but only reliably when monitoring trunks
–The HA-group enhances BIG-IP's powerful HA feature set and will further evolve in future releases

