1 BCO2479 Understanding vSphere Stretched Clusters, Disaster Recovery, and Planned Workload Mobility
Name, Title, Company
2 A couple of things to set the stage…
- EMC and VMware are seeing lots of confusion out there re: Disaster Recovery (DR) and Disaster Avoidance (DA)
- Like the blog post series, we will break this session into multiple parts:
  - PART I – Understanding DR and DA
  - PART II – Understanding Stretched vSphere Clusters
  - PART III – What's New?
  - PART IV – Where we are working for the future
- We will work hard to cover a lot, but leave time for Q&A
3 PART I… Understanding DR and DA
4 "Disaster" Avoidance – Host Level
This is vMotion. Most important characteristics:
- By definition, avoidance, not recovery
- "Non-disruptive" is massively different from "almost non-disruptive"
"Hey… that host WILL need to go down for maintenance. Let's vMotion to avoid a disaster and outage."
5 "Disaster" Recovery – Host Level
This is VM HA. Most important characteristics:
- By definition, recovery (restart), not avoidance
- Simplicity, automation, sequencing
"Hey… that host WENT down due to an unplanned failure, causing an unplanned outage. Let's automate the RESTART of the affected VMs on another host."
6 Disaster Avoidance – Site Level
This is inter-site vMotion. Most important characteristics:
- By definition, avoidance, not recovery
- "Non-disruptive" is massively different from "almost non-disruptive"
"Hey… that site WILL need to go down for maintenance. Let's vMotion to avoid a disaster and outage."
7 Disaster Recovery – Site Level
This is Disaster Recovery. Most important characteristics:
- By definition, recovery (restart), not avoidance
- Simplicity, testing, split-brain behavior, automation, sequencing, IP address changes
"Hey… that site WENT down due to an unplanned failure, causing an unplanned outage. Let's automate the RESTART of the affected VMs at the recovery site."
8 Type 1: "Stretched Single vSphere Cluster"
[Diagram: a single vSphere Cluster A spanning two sites under one vCenter, with a distributed logical datastore holding information in Site A and information in Site B, separated by distance]
9 One little note re: "Intra-Cluster" vMotion
- Intra-cluster vMotions can be highly parallelized, and more so with each passing vSphere release
  - With vSphere 4.1 and vSphere 5 it's up to 4 per host / 128 per datastore if using 1GbE, and 8 per host / 128 per datastore if using 10GbE
  - …and that's before you tweak settings for more, and shoot yourself in the foot :-)
- Need to meet the vMotion network requirements (a quick latency-check sketch follows this slide):
  - 622 Mbps or more, 5 ms RTT (upped to 10 ms RTT if using Metro vMotion – vSphere 5 Enterprise Plus)
  - Layer 2 equivalence for vmkernel (support requirement)
  - Layer 2 equivalence for VM network traffic (required)
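A minimal latency-check sketch, assuming a Linux admin box and a placeholder hostname for the remote site's vMotion vmkernel interface (neither is from the deck): it measures average RTT with ping and compares it against the 5 ms / 10 ms Metro vMotion bounds noted above.

```python
# Hypothetical pre-check: is inter-site RTT within the supported vMotion bound?
# The target hostname is a placeholder; run from a host with reachability to the
# remote vMotion vmkernel address. Assumes the Linux iputils "ping" output format.
import re
import subprocess

def avg_rtt_ms(address: str, count: int = 20) -> float:
    out = subprocess.run(["ping", "-c", str(count), address],
                         capture_output=True, text=True, check=True).stdout
    # Summary line looks like: "rtt min/avg/max/mdev = 0.310/0.420/0.660/0.080 ms"
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if match is None:
        raise RuntimeError("could not parse ping output")
    return float(match.group(1))

METRO_VMOTION = True                       # assumes vSphere 5 Enterprise Plus licensing
limit_ms = 10.0 if METRO_VMOTION else 5.0  # supported RTT bounds from the slide
rtt = avg_rtt_ms("vmotion-vmk-siteb.example.com")
print(f"average RTT {rtt:.2f} ms -> {'OK' if rtt <= limit_ms else 'exceeds supported limit'}")
```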
10 Type 2: "Multiple vSphere Clusters"
[Diagram: vSphere Cluster A and vSphere Cluster B, one per site, under a single vCenter, with a distributed logical datastore holding information in Site A and information in Site B, separated by distance]
11 One little note re: "Inter-Cluster" vMotion
- Inter-cluster vMotions are serialized; they involve additional calls into vCenter, so there is a hard limit
- You lose per-VM cluster properties (HA restart priority, DRS settings, etc.) – see the sketch after this slide
- Need to meet the vMotion network requirements:
  - 622 Mbps or more, 5 ms RTT (upped to 10 ms RTT if using Metro vMotion with vSphere 5 Enterprise Plus)
  - Layer 2 equivalence for vmkernel (support requirement)
  - Layer 2 equivalence for VM network traffic (required)
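Because those per-VM properties are dropped on an inter-cluster move, it can help to record them first. A minimal pyVmomi sketch under assumed names (the vCenter address, credentials, and cluster name are placeholders, not from the deck); it only reads the source cluster's per-VM HA and DRS overrides so they can be re-created on the destination cluster.

```python
# Hypothetical helper: list per-VM HA restart priorities and DRS automation overrides
# before an inter-cluster vMotion. Connection details are placeholders; certificate
# checking is disabled for brevity (lab use only).
import atexit
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
atexit.register(Disconnect, si)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "ClusterA")   # placeholder cluster name
view.DestroyView()

# Per-VM HA overrides (restart priority)
for cfg in cluster.configurationEx.dasVmConfig:
    if cfg.dasSettings:
        print(cfg.key.name, "HA restartPriority:", cfg.dasSettings.restartPriority)

# Per-VM DRS overrides (automation level)
for cfg in cluster.configurationEx.drsVmConfig:
    print(cfg.key.name, "DRS behavior:", cfg.behavior, "enabled:", cfg.enabled)
```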
12 Type 3: "Classic Site Recovery Manager"
[Diagram: a protection-side vCenter with vSphere Cluster A and a recovery-side vCenter with vSphere Cluster B, separated by distance. Datastore A is replicated by array-based (sync, async, or continuous) replication or vSphere Replication v1.0 (async) to a read-only replica of Datastore A, which gets promoted or snapshotted to become writeable during recovery.]
13 Part I – Summary
- People have a hard time with this… Disaster Avoidance != Disaster Recovery
- The same logic that applies at a server level applies at the site level
- The same value (non-disruptive for avoidance, automation/simplicity for recovery) that applies at a server level applies at the site level
- Stretched clusters have many complex considerations
- SRM and non-disruptive workload mobility are mutually exclusive right now: vMotion = single vCenter domain vs. SRM = two or more vCenter domains
- Note – people use SRM for workload mobility all the time (and it is improved in vSphere 5/SRM 5), but this is always disruptive
- SRM remains the simplest, cleanest solution across many use cases
14 PART II… vSphere Stretched Clusters Considerations
15 Stretched Cluster Design Considerations
- Understand the difference compared to DR
  - HA does not follow a recovery plan workflow
  - HA is not site-aware for applications: where are all the moving parts of my app? Same site or dispersed? How will I know what needs to be recovered?
- Single stretched site = single vCenter
  - During a disaster, what about vCenter setting consistency across sites (DRS affinity, cluster settings, network)?
- Will the network support it? Layer 2 stretch? IP mobility?
- Cluster split brain = big concern; how to handle it?
- Not necessarily the cheaper solution, read between the lines (there are usually hidden storage, networking, and WAN costs)
16 Stretched Storage Configuration
- Literally just stretching the SAN fabric (or NFS exports over the LAN) between locations
- Requires synchronous replication
- Limited in distance to ~100 km in most cases
- Typically read/write in one location, read-only in the second location
- Implementations with only a single storage controller at each location create other considerations
18 Distributed Virtual Storage Configuration
- Leverages new storage technologies to distribute storage across multiple sites
- Requires synchronous mirroring
- Limited in distance to ~100 km in most cases
- Read/write storage in both locations; employs data locality algorithms
- Typically uses multiple controllers in a scale-out fashion
- Must address "split brain" scenarios
20 EMC VPLEX Overview
- EMC VPLEX falls into the distributed virtual storage category
- Keeps data synchronized between two locations while providing read/write storage simultaneously at both locations
- Uses a scale-out architecture with multiple engines in a cluster and two clusters in a Metro-Plex
- Supports both EMC and non-EMC arrays behind the VPLEX
22 Preferred Site in VPLEX Metro
- VPLEX Metro provides read/write storage in two locations at the same time (AccessAnywhere)
- In a failure scenario, VPLEX uses "detach rules" to prevent split brain
- A preferred site is defined on a per-distributed-virtual-volume (not site-wide) basis
- The preferred site remains read/write; I/O is halted at the non-preferred site (VMware PDL response)
- Invoked only by entire cluster failure, entire site failure, or cluster partition
[Diagram: a distributed virtual volume spanning a preferred site (read/write) and a non-preferred site (I/O halted), connected by IP/FC links for the Metro-Plex]
24 Something to understand re: yanking & "suspending" storage…
What happens when you "yank" storage?
- VMs whose storage "disappears" or goes "read-only" behave indeterminately
- Responding to a ping doesn't mean a system is available (if it doesn't respond to any services, for example)
- There's no chance of "split brain" data
- But – VMs can stay alive surprisingly long; conversely, sometimes VMs blue-screen quickly
[Slide visuals labeled "Yanked:" and "Suspended:"]
25 Stretched Cluster Considerations #1
Consideration: without read/write storage at both sites, roughly half the VMs incur a storage performance penalty.
- With stretched storage network configurations:
  - VMs running in one site are accessing storage in the other site
  - This creates additional latency for every I/O operation
- With distributed virtual storage configurations:
  - Read/write storage is provided at both sites, so this doesn't apply
26 Stretched Cluster Considerations #2
Consideration: prior to and including vSphere 4.1, you can't control HA/DRS behavior for "sidedness".
- With stretched storage network configurations:
  - Additional latency is introduced when VM storage resides in the other location
  - Storage vMotion is required to remove this latency
- With distributed virtual storage configurations:
  - Need to keep cluster behaviors in mind
  - Data is accessed locally due to data locality algorithms
27 Stretched Cluster Considerations #3
Consideration: with vSphere 5, you can use DRS host affinity rules to control HA/DRS behavior (a configuration sketch follows this slide).
- With all storage configurations:
  - Doesn't address HA primary/secondary node selection (see What's New, vSphere 5)
- With stretched storage network configurations:
  - Beware of single-controller implementations; storage latency is still present in the event of a controller failure
- With distributed virtual storage configurations:
  - Plan for cluster failure/cluster partition behaviors
- Note – not supported in vSphere 4.1, and until new support work completes, it will have caveats…
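As a concrete illustration of the vSphere 5 mechanism above, here is a minimal pyVmomi sketch; the cluster, host, and VM objects are assumed to be looked up elsewhere, and the group/rule names are invented for the example. It creates a host group and a VM group for one site and ties them together with a "should run on" rule, so DRS keeps the VMs on their home site while HA can still restart them on the other side.

```python
# Hypothetical DRS "sidedness" setup for one site of a stretched cluster.
# `cluster` is a vim.ClusterComputeResource, `site_hosts` a list of vim.HostSystem,
# `site_vms` a list of vim.VirtualMachine -- all obtained elsewhere (e.g. as in the
# earlier inventory sketch). Group and rule names are illustrative only.
from pyVmomi import vim

def add_site_affinity(cluster, site_hosts, site_vms, site_name="SiteA"):
    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[
            vim.cluster.GroupSpec(
                operation="add",
                info=vim.cluster.HostGroup(name=f"{site_name}-hosts", host=site_hosts)),
            vim.cluster.GroupSpec(
                operation="add",
                info=vim.cluster.VmGroup(name=f"{site_name}-vms", vm=site_vms)),
        ],
        rulesSpec=[
            vim.cluster.RuleSpec(
                operation="add",
                info=vim.cluster.VmHostRuleInfo(
                    name=f"{site_name}-vms-should-run-in-{site_name}",
                    vmGroupName=f"{site_name}-vms",
                    affineHostGroupName=f"{site_name}-hosts",
                    mandatory=False,     # "should" rule: HA may still restart cross-site
                    enabled=True)),
        ])
    # Apply incrementally to the existing cluster configuration
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```

A "must run" rule (mandatory=True) would pin VMs harder but also prevents HA from restarting them in the surviving site, which is usually the wrong trade-off for a stretched cluster.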
28 Stretched Cluster Considerations #4
Consideration: there is no supported way to control VMware HA primary/secondary node selection with vSphere 4.x.
- With all storage configurations:
  - Limits cluster size to 8 hosts (4 in each site)
  - No supported mechanism for controlling/specifying primary/secondary node selection
  - Methods for increasing the number of primary nodes are also not supported by VMware
- Note: highly recommended reading (just ignore the non-supported notes): bricks.com/vmware-high-availability-deepdive/
- The vSphere 5 VM HA implementation changes things…
29 Stretched Cluster Considerations #5
Consideration: stretched HA/DRS clusters (and inter-cluster vMotion as well) require Layer 2 "equivalence" at the network layer.
- With all storage configurations:
  - Complicates the network infrastructure
  - Involves technologies like OTV and VPLS/Layer 2 VPNs
- With stretched storage network configurations:
  - Can't leverage vMotion at distance without storage latency
- With distributed virtual storage configurations:
  - Data locality enables vMotion at distance without latency
- Note how the SRM automated IP change is much simpler in many cases
30 Stretched Cluster Considerations #6
Consideration: the network lacks site awareness, so stretched clusters introduce new networking challenges.
- With all storage configurations:
  - The movement of VMs from one site to another doesn't update the network
  - VM movement causes "horseshoe routing" (LISP, a future networking standard, helps address this)
  - You'll need to use multiple isolation addresses in your VMware HA configuration (see the sketch after this slide)
- Note how the SRM automated IP change is much simpler in many cases
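A minimal sketch of the isolation-address point above, using pyVmomi (the IP addresses and the cluster object are placeholders): it sets one isolation address per site via the well-known das.isolationaddressN advanced options and turns off the default-gateway-only check.

```python
# Hypothetical HA advanced-option setup for a stretched cluster: one isolation
# address reachable in each site. `cluster` is a vim.ClusterComputeResource obtained
# elsewhere; the IP addresses below are placeholders.
from pyVmomi import vim

def set_isolation_addresses(cluster, addresses):
    options = [vim.option.OptionValue(key=f"das.isolationaddress{i}", value=addr)
               for i, addr in enumerate(addresses)]
    # Don't rely solely on the default gateway of the management network
    options.append(vim.option.OptionValue(key="das.usedefaultisolationaddress",
                                          value="false"))
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(option=options))
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)

# Example: set_isolation_addresses(cluster, ["192.168.10.1", "192.168.20.1"])
```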
31 Summary – and recommendations
Don't let storage vendors (me included) do the Jedi mind trick on you. In the paraphrased words of Yoda… "think not of the sexy demo, think of operations during the disaster – there is no try."

Type 1: "Stretched Single vSphere Cluster"
- Description: single cluster, storage actively accessible in both places
- For Disaster Avoidance, you… vMotion between sites
- For Disaster Recovery, you… try to use VM HA, though you will likely use scripting in practice
- Pros: killer in a demo; works very well in a set of failure cases; VM granularity
- Cons: places funky cluster and VM HA restrictions; more complex

Type 2: "Multiple vSphere Clusters"
- Description: multiple clusters, storage actively accessible in both places
- For Disaster Recovery, you… use scripting
- Pros: DA and DR in broad use cases; no VM HA restrictions
- Cons: there's no escaping scripting for DR

Type 3: "Classic Site Recovery Manager"
- Description: 2 sites in a protected/recovery relationship (can be bidirectional, and can be N:1)
- For Disaster Avoidance, you… do it disruptively: for a VM, deregister, register, fix SRM; or use SRM to do it en masse – still disruptively
- For Disaster Recovery, you… use Site Recovery Manager
- Pros: best RPO/RTO across the broadest set of use cases; simple, robust DR testing and failover; plan granularity
- Cons: mobility between sites is disruptive
33 So – what's new?
- NOW – Site Recovery Manager 5.0
- NOW – vSphere 5 VM HA rewrite & heartbeat datastores, help on partition scenarios
- NOW – vSphere 5 Metro vMotion
- NOW – Improved VPLEX partition behavior – will mark the target as "dead", works better with vSphere
- NOW – VPLEX cluster interconnect and 3rd-party witness
34 SRM 5.0 New Features
- New workflows – including failback!
- Planned migration – with replication update
- vSphere Replication framework
- Redesigned UI – true single-pane-of-glass configuration
- Faster IP customization
- SRM-specific shadow VM icons at the recovery site
- In-guest script callouts via recovery plans
- Configurable VM dependency ordering during recovery
- …and a LOT more…
35 SRM 5.0 – Automated Failback
- Reprotect VMs from Site B to Site A
  - Reverse replication
  - Apply reverse resource map
- Automate failover from Site B to Site A
  - Reverse the original recovery plan
- Simplify the failback process
  - Automate replication management
  - Eliminate the need to set up a new recovery plan and clean up
- Restrictions
  - Does not apply if Site A is physically lost
  - Not available at GA with vSphere Replication
[Diagram: Site A and Site B running vSphere, with reverse replication and the reversed original recovery plan flowing from Site B back to Site A]
36 SRM 5.0 – vSphere Replication
- Adding native replication to SRM
  - VMs can be replicated regardless of the underlying storage
  - Enables replication between heterogeneous datastores
- Replication is managed as a property of a virtual machine
- Efficient replication minimizes impact on VM workloads
- Considerations: scale, failback, consistency groups
[Diagram: replication from a source datastore to a target datastore]
37 vSphere 5.0 – HA
- Complete rewrite of vSphere HA
  - Elimination of the primary/secondary concept
  - Foundation for increased scale and functionality
  - Eliminates common issues (DNS resolution)
- Multiple communication paths
  - Can leverage storage as well as the management network for communications
  - Enhances the ability to detect certain types of failures and provides redundancy
- IPv6 support
- Enhanced user interface
- Enhanced deployment
[Diagram: FDM agents running on ESX 01 through ESX 04]
38 vSphere 5.0 HA – Heartbeat Datastores
- Monitor availability of slave hosts and the VMs running on them
- Determine whether a host is network-isolated vs. network-partitioned
- Coordinate with other masters – a VM can only be owned by one master
- By default, vCenter will automatically pick 2 datastores (a sketch for pinning site-specific choices follows this slide)
- Very useful for hardening stretched storage models
[Diagram: FDM agents running on ESX 01 through ESX 04]
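A sketch of pinning heartbeat datastores rather than relying on the automatic pick, on the assumption that the vSphere 5 API exposes the heartbeat-datastore preference as shown (the datastore objects and the policy string should be verified against your vSphere version; the cluster object is obtained elsewhere).

```python
# Hypothetical heartbeat-datastore pinning for a stretched cluster: prefer one
# datastore per site while still letting HA fall back to any feasible datastore.
# The policy value is an assumption drawn from the vSphere 5 API naming.
from pyVmomi import vim

def set_heartbeat_datastores(cluster, datastores):
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(
            heartbeatDatastore=datastores,   # e.g. [ds_site_a, ds_site_b]
            hBDatastoreCandidatePolicy="allFeasibleDsWithUserPreference"))
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```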
39 Metro vMotion – Stretched Clusters
- Enable vMotion across longer distances
- Workload balancing between sites
- Less latency-sensitive
- Work underway on support
- Work underway on building upon vSphere 4.1 DRS host affinity groups
[Diagram: Site A and Site B]

Speaker notes: We have been improving vMotion's ability to deal with high latency in the vSphere 5 release. vMotion can now support up to 10 ms of latency (up from the 5 ms maximum required by vSphere 4.1). This is built in, i.e., there is no on/off button for it; ESX detects the link's latency between source and destination to ensure the vMotion will be successful. In addition, we are introducing an HCL for stretched storage technologies such as NetApp MetroCluster and EMC's VPLEX. (In the past, we did one-off support for VPLEX and other stretched clusters, but only for very specific configurations – KB article.) Coupled with 4.1 features such as host affinity, customers are now able to build stretched clusters and move workloads across longer distances while being fully supported by VMware. A stretched cluster usually means one vCenter instance (or it could be across two separate clusters). Storage requirements: works with any stretched-cluster validated configuration. vSphere 4.0/4.1 made improvements to the speed of vMotion.
40 GeoSynchrony 5.0 for VPLEX
What's new with VPLEX 5.0 (NEW):
- Expanded 3rd-party storage support
- VP-Copy for EMC arrays
- Expanded array qualifications (ALUA)
- VPLEX Witness
- Host cross-cluster connected
- VPLEX Element Manager API
- VPLEX Geo

Speaker notes: EMC announced several new capabilities for VPLEX in 2011: VS2, new VPLEX hardware that is more powerful and more efficient (we'll discuss this in more detail in a few minutes), and VPLEX Geo. EMC is also introducing the latest release of the VPLEX operating system, GeoSynchrony 5.0, which runs on existing VS1 hardware as well as the new VS2. Other new GeoSynchrony 5.0 capabilities include:
- The ability to move or copy virtually provisioned devices with VPLEX. When a virtually provisioned (thin) volume is on the array, VPLEX used to convert that volume into a thick device and then move it, requiring the user to do a space reclaim after the device was copied over to the target array. Now, VPLEX can move a virtually provisioned volume to another array non-disruptively and still preserve its virtually provisioned nature, without any changes to the device. This is a key benefit for companies looking to optimize their storage capacity with virtual provisioning, as it allows users to consume storage only as it is needed. VPLEX supports VP-Copy for all EMC arrays with virtual provisioning in this release.
- Several new array qualifications, including support for ALUA-based arrays. This enables VPLEX to support the full range of midtier arrays, including ALUA-based arrays from NetApp and HP.
- A major new innovation for VPLEX Metro and VPLEX Geo deployments: the new VPLEX Witness. VPLEX Witness provides a new level of high availability through a witness that can run in a virtual machine and arbitrate between two VPLEX clusters if there is a failure, keeping applications online for true automated high availability that doesn't require human intervention when a failure occurs.
- Host cross-cluster connect support that enables applications to withstand a complete failure within a given site.
- VPLEX Element Manager API, a REST-based programming interface that allows you to integrate your operational workflows with VPLEX. It is currently VPLEX-specific and allows VPLEX information to be used externally. (A hedged example of calling a REST interface like this follows this slide.)
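To make the Element Manager API mention concrete, here is a hedged sketch of a REST call with Python's requests library. The management-server hostname, authentication headers, resource path, and response shape are assumptions for illustration only; the actual contract is documented in the VPLEX Element Manager API guide.

```python
# Hypothetical read-only query against a VPLEX management server's REST interface.
# Host, headers, and URI below are assumptions, not a documented contract.
import requests

MGMT = "https://vplex-mgmt.example.com"                     # placeholder management server
HEADERS = {"Username": "service", "Password": "changeme"}   # assumed auth scheme

resp = requests.get(f"{MGMT}/vplex/clusters", headers=HEADERS,
                    verify=False, timeout=30)               # lab only: no cert validation
resp.raise_for_status()
print(resp.json())   # inspect the returned cluster contexts
```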
41 VS2: New VPLEX Hardware (NEW)
- Faster Intel multi-core processors
- Faster engine interconnect interfaces
- Space-efficient engine form factor
- Third-party rack support

Speaker notes: VS2, the new VPLEX hardware, will be available for all three products: VPLEX Local, VPLEX Metro, and VPLEX Geo. It contains several new advancements. First, EMC has taken advantage of faster Intel multi-core processors for increased compute power, and added a new PCI Express Gen 2 card to drive more I/Os per second and throughput than previous hardware generations. EMC has also reduced the size of the engines from 4U to 2U while increasing performance. The new 2U engine has eight front-end and eight back-end ports, which is not only more efficient but, in combination with the new PCI card, allows you to maximize the engine's complete profile and drive more bandwidth and I/Os per second than previous generations. EMC has also added a new, faster 10 GbE WAN interface for communication between clusters, even when deployed over distance with VPLEX Metro and VPLEX Geo. For greater flexibility in deploying VPLEX, EMC will now support deploying VPLEX clusters in both standard EMC racks and custom or third-party racks, giving you more choice when planning your data centers and enabling VPLEX clusters to run in an even smaller footprint than previous generations. As a further enhancement to VPLEX's high-availability story, the upgrade process from previous generations of VPLEX is completely non-disruptive. Also note that regardless of which hardware you are using for VPLEX, all existing and new GeoSynchrony capabilities are available on all VPLEX hardware platforms.

Customer example: migrated an entire datacenter, saving $500,000 in revenue; VPLEX paid for itself twice over in a single event. As a hospital, they did not have to interrupt healthcare. "I'm sure glad we made the DR investment. It took a lot of pressure off us. We ran the DR virtual farm over 50 hours. This is solid stuff. VPLEX is well worth the investment by the way." – CIO, Northern Hospital of Surry County
42 VPLEX Witness (NEW)
Use with VPLEX Metro and VPLEX Geo:
- Coordinates seamless failover
- Runs as a virtual machine within an ESX host
- Connects to VPLEX through IP
- Integrates with hosts, clusters, and applications to automate failover and recovery
[Diagram: LUN A mirrored across two clusters with AccessAnywhere, arbitrated by the VPLEX Witness]

Speaker notes: Now let's look in more detail at the new VPLEX Witness, available at no extra charge with GeoSynchrony 5.0. The Witness enhances high availability for both VPLEX Metro and VPLEX Geo. The VPLEX Witness is a virtual machine deployed on a customer-provided ESX server that resides in a failure domain separate from either of the two VPLEX clusters. It has IP-based connectivity with both clusters in either a VPLEX Metro or a VPLEX Geo, and it maintains knowledge about each VPLEX cluster's view of inter-cluster connectivity. In other words, the Witness constantly monitors connectivity and validates the communication between the two VPLEX clusters, and each VPLEX cluster also communicates its health and status to the Witness as an independent third-party safeguard. By reconciling its own observations with the information reported by the clusters, the VPLEX Witness can arbitrate between inter-cluster network partition failures and cluster failures, should they occur. Think of the Witness as monitoring the "heartbeat" between clusters. The Witness gives VPLEX the capability for applications to withstand any storage failure, including disaster-level scenarios affecting entire racks of storage equipment simultaneously. It provides automatic restart in the event of any server failure, and all failures are handled automatically with no human intervention or complicated failover processes. The Witness can distinguish between a site failure and a link failure, and enables hosts, servers, and applications to fail over in step with VPLEX while keeping applications online. For VPLEX Metro deployments, the Witness offers a high-availability solution with zero RPO. With VPLEX Geo, due to the 50 milliseconds of latency between clusters, the Witness offers near-zero RPO. For all VPLEX deployments over distance, the Witness provides a unique and robust enhancement for resiliency, high availability, and seamless failover.
43 VPLEX Family Use Cases
- MOBILITY (AccessAnywhere): disaster avoidance, data center migration, workload rebalancing – move and relocate VMs, applications, and data over distance
- AVAILABILITY (AccessAnywhere): high availability, eliminate storage operations from failover – maintain availability and non-stop access by mirroring across locations
- COLLABORATION (AccessAnywhere): instant and simultaneous data access over distance, streamlined workflow – enable concurrent read/write access to data across locations
[Diagram: Cluster A and Cluster B]

Speaker notes: VPLEX also enables additional use cases under each category described earlier: mobility, availability, and collaboration. For mobility, you can take advantage of VPLEX's disaster avoidance, data center migration, and workload rebalancing capabilities. For availability, VPLEX allows for a high-availability infrastructure as well as eliminating failover from storage operations. For collaboration, VPLEX enables instant and simultaneous data access over distance and streamlined workflows.
44 VPLEX Family Product Matrix
- Mobility: Local – within a data center; Metro – synchronous, approximately 100 km; Geo – asynchronous, approximately 1,000 km
- Availability: high availability, VPLEX Witness support, cross-cluster connected configuration
- Collaboration: between two sites

Speaker notes: VPLEX offers mobility, availability, and collaboration within and between data centers at synchronous and asynchronous distances. This matrix will help you find the VPLEX family product that is right for you.
45 For More Information…
- Using VPLEX Metro with VMware HA
- vMotion over Distance Support with VPLEX Metro
- VPLEX Metro HA techbook (also available at the EMC booth): ng_Technical/Technical_Documentation/h7113-vplex-architecture-deployment-techbook.pdf
- VPLEX Metro with VMware HA: ng_Technical/White_Paper/h8218-vplex-metro-vmware-ha-wp.pdf
47 The coolness is accelerating…
- Ongoing SRM and VM HA enhancements
- Hardening the Metro use case and enhancing support models:
  - This includes a much more robust test harness – the result of a lot of joint work
  - The result of a lot of demand for stretched cluster models
- Improving stretched cluster + SRM coexistence
- Too many things to cover today… a quick look at just a few of them…
48 VM Component Protection
- Detect and recover from catastrophic infrastructure failures affecting a VM
  - Loss of storage path
  - Loss of network link connectivity
- VMware HA restarts the VM on an available healthy host

Speaker notes: With VM Component Protection, we'll detect and recover from total storage path failures. This is an extension of VMware HA's protection against server failure. Bad hosts would be marked and quarantined, allowing admins to fix hardware issues on their own time and preventing DRS from placing VMs on unhealthy hosts. The remediation is either a restart of the VM (in the case of VMware HA) or a failover to the secondary VM (in the case of Fault Tolerance). We will only ever restart or fail over the VM when we have a healthy host to move it to. We'll detect failures of storage paths for all virtual disks that a VM depends on (e.g., its system drive, data drive, etc.). We won't necessarily react to a single path failure if multi-pathing is used, but we will react if all storage paths fail. We intend to cover networking as well, so should a vSwitch, port group, or vDS become unavailable due to physical NIC or link failures, the VM (or set of affected VMs) will be failed over to a new host that has connectivity. This new feature would be "automatic" and available for every VM.
[Diagram: two VMware ESX hosts]
49 Automated Stretched Cluster Config
- Leverage the work in VASA and VM Granular Storage (VSP3205)
- Automated site protection for all VMs
- Benefits of the single-cluster model
- Automated setup of HA and DRS affinity rules

Speaker notes: Today we support HA for stretched clusters, but it is very difficult to manage, configure, and maintain. We continue to evolve our distance strategy and enable more automated setup and ongoing configuration of stretched clusters. Example: when you set up a stretched cluster, the storage is primary on one side. In a future release we would automatically set up the DRS affinity rules so that when a VM is provisioned, the VM and its associated VMDK are aligned on the SAME primary site. Finally, to avoid split brain, we plan to support and integrate with 3rd-party witnesses (storage arrays) to ensure failover is accurate and completely automated.
[Diagram: an HA/DRS cluster spanning Site A and Site B over a Layer 2 network with distributed storage volumes]
50 Increased Topology Support
[Diagram: MetroHA (today) between Site A and Site B over metro-distance storage clusters (sync), with SRM (future) protection to a third site; plus geo-distance storage clusters (async) across Site A, Site B, and Site C with SRM (future)]

Speaker notes: In a future release SRM will support a wide range of stretched storage use cases, with full integration of 3rd-party storage technologies as they become available. Various topology options will be available:
- Metro Active–Active with 3rd-site SRM protection
- Metro Active–Passive with integrated SRM protection
- Async Geo distances with SRM protection
51 Q & A – Part 1 – Questions from us to you
- "I think a stretched cluster is what we need… How do I know?"
- "I think a DR solution is what we need… How do I know?"
- "Stretched clustering sounds like awesomesauce, why not?"
- "Our storage vendor/team tells us their disaster avoidance solution will do everything we want – HA, DA, DR. We are not experts here; should we be wary?"
- "Our corporate SLAs for recovery are simple, BUT we have LOTS of expertise and think we can handle the bleeding-edge stuff. Should we just go for it?"
52 Q & A – Part 2 – Questions from us to you
- "Can we have our cake and eat it yet? We want BOTH solutions together."
- "Is there anything the storage vendors are NOT telling us that might make running this day to day costly from an opex point of view?"
- "Why does one solution use a single vCenter yet the other uses two? The DR solution seems less flexible and more complex to manage – is that fair?"
- "My datacenter server rooms are 50 ft apart, but I definitely want a DR solution. What's wrong with that idea?"
53 Q & A – Part 3 – We would love to hear…
Looking to async distances…
- Is "cold migration" over distance good enough for you, or is it live or nothing?
- Would you pay for it (the easiest litmus test of a "nice to have")?
- Would you be willing to be very heterogeneous to use it?
- What are your thoughts on networking solutions (are you looking at OTV-type stuff)?
54 Want to Learn More? EMC Sessions
Tuesday
- 10:00 – SUP1006 – Accelerate the Journey to Your Cloud
- 12:00 – VSP1425 – Ask the Expert vBloggers
- 14:30 – SPO3977 – Next-Generation Storage and Backup for Your Cloud
- 16:00 – BCA1931 – Design, Deploy, and Optimize SharePoint 2010 on vSphere
- 16:30 – BCO2479 – Understanding VMware vSphere Stretched Clusters, Disaster Recovery, and Workload Mobility
- 17:30 – VSP1926 – Getting Started in vSphere Design
Wednesday
- 9:30 – BCA2320 – VMware vSphere 5: Best Practices for Oracle RAC Virtualization
- 9:30 – CIM2343 – Building a Real-Life High-Performance Financial Services Cloud
- 14:30 – LAS4014 – EMC Partner Hands-on Lab Design, Development, and Implementation
- 16:00 – SPO3976 – Real-World Big Data Use Cases Made Possible by Isilon OneFS Storage and VMware vSphere
Thursday
- 12:30 – BCA1931 – Design, Deploy, and Optimize SharePoint 2010 on vSphere

Want to play and try to break VNX, VNXe, VMAX, VPLEX, Isilon, Avamar VMs (plus many more)? Check out the VMware Partner HoL and the EMC Interactive Labs!

Solutions Exchange
- EMC Booth (#1101): demos, meetups, customer presentations, cloud architects
- Interactive Demos (#1001)
Join the Hunt!
- Join the vSpecialist Scavenger Hunt! Follow Twitter hashtag #vhunt for your chance to win!
After VMworld
- Keep the conversation going in the "Everything VMware at EMC" online community: emc.com/vmwarecommunity