1 BCO2479 Understanding vSphere Stretched Clusters, Disaster Recovery, and Planned Workload Mobility
Name, Title, Company

2 A couple of things to set the stage…
EMC and VMware are seeing lots of confusion out there re: Disaster Recovery (DR) and Disaster Avoidance (DA). Like the blog post series, we will break this session into multiple parts:
PART I – Understanding DR and DA
PART II – Understanding Stretched vSphere Clusters
PART III – What’s New?
PART IV – Where are the areas we are working on for the future?
We will work hard to cover a lot, but leave time for Q&A.

3 Understanding DR and DA
PART I… Understanding DR and DA

4 “Disaster” Avoidance – Host Level
This is vMotion. Most important characteristics:
By definition, avoidance, not recovery.
“Non-disruptive” is massively different from “almost non-disruptive.”
“Hey… that host WILL need to go down for maintenance. Let’s vMotion to avoid a disaster and outage.”

5 “Disaster” Recovery – Host Level
This is VM HA. Most important characteristics:
By definition, recovery (restart), not avoidance.
Simplicity, automation, sequencing.
“Hey… that host WENT down due to an unplanned failure, causing an unplanned outage. Let’s automate the RESTART of the affected VMs on another host.”

6 Disaster Avoidance – Site Level
This is inter-site vMotion. Most important characteristics:
By definition, avoidance, not recovery.
“Non-disruptive” is massively different from “almost non-disruptive.”
“Hey… that site WILL need to go down for maintenance. Let’s vMotion to avoid a disaster and outage.”

7 Disaster Recovery – Site Level
This is Disaster Recovery. Most important characteristics:
By definition, recovery (restart), not avoidance.
Simplicity, testing, split-brain behavior, automation, sequencing, IP address changes.
“Hey… that site WENT down due to an unplanned failure, causing an unplanned outage. Let’s automate the RESTART of the affected VMs at the other site.”

8 Type 1: “Stretched Single vSphere Cluster”
[Diagram: a single vSphere cluster (Cluster A) under one vCenter, stretched across distance, with a distributed logical datastore holding information in Site A and information in Site B.]

9 One little note re: “Intra-Cluster” vMotion
Intra-cluster vMotions can be highly parallelized, and more so with each passing vSphere release:
With vSphere 4.1 and vSphere 5 it’s up to 4 per host / 128 per datastore if using 1GbE, and 8 per host / 128 per datastore if using 10GbE
…and that’s before you tweak settings for more and shoot yourself in the foot :-)
Need to meet the vMotion network requirements (a quick check is sketched below):
622Mbps or more, 5ms RTT (upped to 10ms RTT if using Metro vMotion – vSphere 5 Enterprise Plus)
Layer 2 equivalence for vmkernel (support requirement)
Layer 2 equivalence for VM network traffic (required)
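As a trivial illustration of those link requirements, here is a minimal sketch (my own addition, not part of the session) that checks a measured round-trip time and bandwidth against the numbers quoted above. Gathering the measurements themselves (e.g. with ping/iperf between vmkernel interfaces) is assumed to happen elsewhere.

```python
# Minimal sketch (not a VMware tool): sanity-check a measured vMotion link
# against the requirements quoted on this slide. The thresholds come from the
# slide; rtt_ms and bandwidth_mbps are assumed to be measured separately.

def vmotion_link_ok(rtt_ms: float, bandwidth_mbps: float,
                    metro_vmotion: bool = False) -> bool:
    """Return True if the link meets the quoted vMotion requirements."""
    max_rtt = 10.0 if metro_vmotion else 5.0  # Metro vMotion (vSphere 5 Ent+) relaxes RTT to 10 ms
    min_bw = 622.0                            # 622 Mbps or more in both cases
    return rtt_ms <= max_rtt and bandwidth_mbps >= min_bw

if __name__ == "__main__":
    # Example: a 7 ms / 1 Gbps inter-site link only passes with Metro vMotion.
    print(vmotion_link_ok(7.0, 1000.0))                      # False
    print(vmotion_link_ok(7.0, 1000.0, metro_vmotion=True))  # True
```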

10 Type 2: “Multiple vSphere Clusters”
[Diagram: two vSphere clusters (Cluster A and Cluster B) under a single vCenter, separated by distance, sharing a distributed logical datastore with information in Site A and information in Site B.]

11 One little note re: “Inter-Cluster” vMotion
Inter-cluster vMotions are serialized
They involve additional calls into vCenter, so there is a hard limit
You lose VM cluster properties (HA restart priority, DRS settings, etc.)
Need to meet the vMotion network requirements:
622Mbps or more, 5ms RTT (upped to 10ms RTT if using Metro vMotion with vSphere 5 Enterprise Plus)
Layer 2 equivalence for vmkernel (support requirement)
Layer 2 equivalence for VM network traffic (required)
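Because those per-VM properties are dropped on an inter-cluster move, one pragmatic workaround is to record them before the migration and re-apply them on the destination cluster afterwards. Below is a minimal pyVmomi sketch of the "record" half; pyVmomi, the vCenter address, credentials, and the VM name are placeholders of mine, not anything from the session.

```python
# Hedged pyVmomi sketch, assuming a reachable vCenter and a VM ("app01") that
# has per-VM HA/DRS overrides in its source cluster. Capture them here, then
# re-create them on the destination cluster after the inter-cluster vMotion.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "app01")
src_cluster = vm.resourcePool.owner  # the cluster the VM currently runs in

# Capture per-VM HA restart priority and DRS automation overrides, if any.
saved = {}
for das in src_cluster.configurationEx.dasVmConfig:
    if das.key == vm:
        saved["ha_restart_priority"] = das.restartPriority
for drs in src_cluster.configurationEx.drsVmConfig:
    if drs.key == vm:
        saved["drs_behavior"] = drs.behavior

print(saved)  # re-apply these on the destination cluster after the migration
Disconnect(si)
```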

12 Type 3: “Classic Site Recovery Manager”
[Diagram: a protection vCenter with vSphere Cluster A and a recovery vCenter with vSphere Cluster B, separated by distance. Array-based (sync, async, or continuous) replication or vSphere Replication v1.0 (async) copies Datastore A to a read-only replica of Datastore A, which gets promoted or snapshotted to become writeable during recovery.]

13 Part I - Summary
People have a hard time with this… Disaster Avoidance != Disaster Recovery
The same logic that applies at a server level applies at the site level
The same value (non-disruptive for avoidance, automation/simplicity for recovery) that applies at a server level applies at the site level
Stretched clusters have many complex considerations
SRM and non-disruptive workload mobility are mutually exclusive right now: vMotion = single vCenter domain vs. SRM = two or more vCenter domains
Note – people use SRM for workload mobility all the time (and it is improved in vSphere 5/SRM 5) – but this is always disruptive
SRM remains the simplest, cleanest solution across many use cases.

14 vSphere Stretched Clusters Considerations
PART II… vSphere Stretched Clusters Considerations

15 Stretched Cluster Design Considerations
Understand the difference compared to DR:
HA does not follow a recovery plan workflow
HA is not site-aware for applications – where are all the moving parts of my app? Same site or dispersed? How will I know what needs to be recovered?
Single stretched site = single vCenter
During a disaster, what about vCenter setting consistency across sites (DRS affinity, cluster settings, network)?
Will the network support it? Layer 2 stretch? IP mobility?
Cluster split brain is a big concern – how do you handle it?
Not necessarily a cheaper solution; read between the lines (there are usually hidden storage, networking, and WAN costs)

16 Stretched Storage Configuration
Literally just stretching the SAN fabric (or NFS exports over the LAN) between locations
Requires synchronous replication
Limited in distance to ~100km in most cases
Typically read/write in one location, read-only in the second location
Implementations with only a single storage controller at each location create other considerations

17 Stretched Storage Configuration
[Diagram: stretched storage fabric(s) spanning two sites – read/write at one site, read-only at the other.]

18 Distributed Virtual Storage Configuration
Leverages new storage technologies to distribute storage across multiple sites
Requires synchronous mirroring
Limited in distance to ~100km in most cases
Read/write storage in both locations; employs data-locality algorithms
Typically uses multiple controllers in a scale-out fashion
Must address “split brain” scenarios

19 Distributed Virtual Storage Configuration
[Diagram: distributed virtual storage spanning two sites – read/write at both sites.]

20 EMC VPLEX Overview
EMC VPLEX falls into the distributed virtual storage category
Keeps data synchronized between two locations but provides read/write storage simultaneously at both locations
Uses a scale-out architecture with multiple engines in a cluster and two clusters in a Metro-Plex
Supports both EMC and non-EMC arrays behind the VPLEX

21 VPLEX – What A Metro-Plex looks like

22 Preferred Site in VPLEX Metro
VPLEX Metro provides read/write storage in two locations at the same time (AccessAnywhere)
In a failure scenario, VPLEX uses “detach rules” to prevent split brain
A preferred site is defined on a per-distributed-virtual-volume basis (not site-wide)
The preferred site remains read/write; I/O is halted at the non-preferred site (seen by VMware as a PDL response)
Invoked only by entire cluster failure, entire site failure, or cluster partition
[Diagram: a distributed virtual volume spanning the preferred and non-preferred sites over IP/FC Metro-Plex links – read/write at the preferred site, I/O halted at the non-preferred site.]

23 Configuring Preferred Site…

24 Something to understand re: yanking & “suspending” storage…
What happens when you “yank” storage?
VMs whose storage “disappears” or goes “read-only” behave indeterminately
Responding to a ping doesn’t mean a system is available (if it doesn’t respond to any services, for example)
There’s no chance of “split brain” data
But VMs can stay alive for surprisingly long; conversely, sometimes VMs blue-screen quickly

25 Stretched Cluster Considerations #1
Consideration: without read/write storage at both sites, roughly half the VMs incur a storage performance penalty
With stretched storage network configurations: VMs running in one site are accessing storage in the other site, which adds latency to every I/O operation
With distributed virtual storage configurations: read/write storage is provided at both sites, so this doesn’t apply

26 Stretched Cluster Considerations #2
Consideration: prior to and including vSphere 4.1, you can’t control HA/DRS behavior for “sidedness”
With stretched storage network configurations: additional latency is introduced when VM storage resides in the other location; Storage vMotion is required to remove this latency
With distributed virtual storage configurations: you still need to keep cluster behaviors in mind, but data is accessed locally thanks to data-locality algorithms

27 Stretched Cluster Considerations #3
Consideration: with vSphere 5, you can use DRS host affinity rules to control HA/DRS behavior (see the sketch below)
With all storage configurations: this doesn’t address HA primary/secondary node selection (see What’s New, vSphere 5)
With stretched storage network configurations: beware of single-controller implementations – storage latency is still present in the event of a controller failure
With distributed virtual storage configurations: plan for cluster failure/cluster partition behaviors
Note – not supported in vSphere 4.1, and until new support work lands, it will have caveats…
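To make the “sidedness” idea concrete, here is a hedged pyVmomi sketch that builds a host group and a VM group for one site and ties them together with a “should run on” rule. pyVmomi, the helper, and the group/rule names are my own additions, not from the session; `cluster`, `site_hosts`, and `site_vms` are assumed to be managed objects you have already looked up.

```python
# Minimal sketch: group Site A hosts and Site A VMs, then add a non-mandatory
# ("should") VM-host affinity rule so DRS keeps those VMs on their own side
# during normal operation, while HA can still restart them on the other site.
from pyVmomi import vim

def pin_vms_to_site(cluster, site_hosts, site_vms, site_name="SiteA"):
    host_group = vim.cluster.HostGroup(name=f"{site_name}-hosts", host=site_hosts)
    vm_group = vim.cluster.VmGroup(name=f"{site_name}-vms", vm=site_vms)

    rule = vim.cluster.VmHostRuleInfo(
        name=f"{site_name}-vms-should-run-on-{site_name}-hosts",
        vmGroupName=vm_group.name,
        affineHostGroupName=host_group.name,
        mandatory=False,   # "should" rule, not "must"
        enabled=True,
    )

    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[
            vim.cluster.GroupSpec(info=host_group, operation="add"),
            vim.cluster.GroupSpec(info=vm_group, operation="add"),
        ],
        rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")],
    )
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```

A non-mandatory rule is the usual choice here: a “must” rule would prevent HA from restarting Site A VMs on Site B hosts after a site failure, which defeats the purpose of the stretched cluster.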

28 Stretched Cluster Considerations #4
Consideration: there is no supported way to control VMware HA primary/secondary node selection with vSphere 4.x
With all storage configurations:
This limits cluster size to 8 hosts (4 in each site)
There is no supported mechanism for controlling/specifying primary/secondary node selection
Methods for increasing the number of primary nodes are also not supported by VMware
Note: highly recommended reading (just ignore the non-supported notes): bricks.com/vmware-high-availability-deepdive/
The vSphere 5 VM HA implementation changes things…

29 Stretched Cluster Considerations #5
Consideration: stretched HA/DRS clusters (and inter-cluster vMotion) require Layer 2 “equivalence” at the network layer
With all storage configurations: this complicates the network infrastructure and involves technologies like OTV and VPLS/Layer 2 VPNs
With stretched storage network configurations: you can’t leverage vMotion at distance without storage latency
With distributed virtual storage configurations: data locality enables vMotion at distance without that latency
Note how the SRM automated IP change is much simpler in many cases

30 Stretched Cluster Considerations #6
Consideration: the network lacks site awareness, so stretched clusters introduce new networking challenges
With all storage configurations:
The movement of VMs from one site to another doesn’t update the network
VM movement causes “horseshoe routing” (LISP, a future networking standard, helps address this)
You’ll need to use multiple isolation addresses in your VMware HA configuration (a sketch follows below)
Note how the SRM automated IP change is much simpler in many cases
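As an illustration of the isolation-address point above, here is a minimal pyVmomi sketch that sets one isolation address per site via the standard HA advanced options. pyVmomi, the gateway IPs, and the function are my own additions, not from the session; `cluster` is assumed to be an already-retrieved cluster object.

```python
# Minimal sketch: give HA one isolation address per site so a host that loses
# reachability to one gateway is not wrongly declared isolated.
from pyVmomi import vim

def set_isolation_addresses(cluster, site_a_gw="192.168.1.1", site_b_gw="192.168.2.1"):
    das = vim.cluster.DasConfigInfo(
        option=[
            vim.option.OptionValue(key="das.isolationaddress0", value=site_a_gw),
            vim.option.OptionValue(key="das.isolationaddress1", value=site_b_gw),
            # Don't also ping the default gateway of the management network:
            vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false"),
        ]
    )
    spec = vim.cluster.ConfigSpecEx(dasConfig=das)
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```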

31 Summary – and recommendations
Don’t let storage vendors (me included) do the Jedi mind trick on you. In the paraphrased words of Yoda: “think not of the sexy demo, think of operations during the disaster – there is no try.”
Type 1: “Stretched Single vSphere Cluster”
Description: single cluster, storage actively accessible in both places
For disaster avoidance: vMotion between sites
For disaster recovery: try to use VM HA, though in practice you will likely use scripting
Pros: killer in a demo; works very well in a set of failure cases; VM granularity
Cons: places funky cluster and VM HA restrictions; more complex
Type 2: “Multiple vSphere Clusters”
Description: multiple clusters, storage actively accessible in both places
For disaster avoidance: vMotion between sites (inter-cluster)
For disaster recovery: scripting
Pros: covers DA and DR in broad use cases; no VM HA restrictions
Cons: there’s no escaping scripting for DR
Type 3: “Classic Site Recovery Manager”
Description: 2 sites in a protected/recovery relationship (can be bidirectional, and can be N:1)
For disaster avoidance: disruptive – for a VM, deregister, register, fix SRM; or use SRM to do it en masse, still disruptively
For disaster recovery: Site Recovery Manager
Pros: best RPO/RTO across the broadest set of use cases; simple, robust DR testing and failover; plan granularity
Cons: mobility between sites is disruptive

32 PART III… What’s new….

33 So – what’s new?
NOW – Site Recovery Manager 5.0
NOW – vSphere 5 VM HA rewrite and heartbeat datastores, which help in partition scenarios
NOW – vSphere 5 Metro vMotion
NOW – Improved VPLEX partition behavior – the target is marked as “dead”, which works better with vSphere
NOW – VPLEX cluster interconnect and 3rd-party witness

34 SRM 5.0 New Features
New workflows – including failback!
Planned migration – with a replication update
vSphere Replication framework
Redesigned UI – true single-pane-of-glass configuration
Faster IP customization
SRM-specific shadow VM icons at the recovery site
In-guest script callouts via recovery plans
Configurable VM dependency ordering during recovery
…and a LOT more…

35 SRM 5.0 – Automated Failback
Reprotect: VMs from Site B to Site A; reverse replication; apply the reverse resource map
Automated failover: Site B to Site A; reverse the original recovery plan
Simplified failback process: automates replication management; eliminates the need to set up a new recovery plan and cleanup
Restrictions: does not apply if Site A is physically lost; not available at GA with vSphere Replication
[Diagram: Site A and Site B vSphere environments with reversed replication and the reversed original recovery plan.]

36 SRM 5.0 – vSphere Replication
Adding native replication to SRM:
VMs can be replicated regardless of the underlying storage
Enables replication between heterogeneous datastores
Replication is managed as a property of a virtual machine
Efficient replication minimizes the impact on VM workloads
Considerations: scale, failback, consistency groups
[Diagram: replication from a source datastore to a target datastore.]

37 vSphere 5.0 - HA
Complete rewrite of vSphere HA:
Elimination of the primary/secondary concept – the foundation for increased scale and functionality – and elimination of common issues (DNS resolution)
Multiple communication paths: HA can leverage storage as well as the management network for communications, which enhances the ability to detect certain types of failures and provides redundancy
IPv6 support
Enhanced user interface
Enhanced deployment
[Diagram: four ESX hosts (ESX 01–04), each running an FDM agent.]

38 vSphere 5.0 HA – Heartbeat Datastores
Heartbeat datastores let the master:
Monitor the availability of slave hosts and the VMs running on them
Determine whether a host is network-isolated versus network-partitioned
Coordinate with other masters – a VM can only be owned by one master
By default, vCenter automatically picks 2 datastores
Very useful for hardening stretched storage models (a sketch of steering that choice follows below)
[Diagram: four ESX hosts (ESX 01–04), each running an FDM agent.]
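In a stretched cluster you typically want heartbeat datastores in both sites, so it can make sense to steer vCenter’s default pick of two. The pyVmomi sketch below is my own illustration under that assumption – pyVmomi, the function, and the policy value are not from the session, and `ds_site_a`/`ds_site_b` are placeholder datastore objects.

```python
# Hedged sketch: express a preference for one heartbeat datastore per site,
# while still letting vCenter fall back to other feasible datastores.
from pyVmomi import vim

def prefer_heartbeat_datastores(cluster, ds_site_a, ds_site_b):
    das = vim.cluster.DasConfigInfo(
        # "allFeasibleDsWithUserPreference": vCenter still chooses, but favours these two
        hBDatastoreCandidatePolicy="allFeasibleDsWithUserPreference",
        heartbeatDatastore=[ds_site_a, ds_site_b],
    )
    spec = vim.cluster.ConfigSpecEx(dasConfig=das)
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```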

39 Metro vMotion – Stretched Clusters
Enable vMotion across longer distances
Workload balancing between sites
Less latency sensitive
Work underway on support, and on building upon the vSphere 4.1 DRS host affinity groups
Speaker notes: We have been improving vMotion’s ability to deal with high latency in the vSphere 5 release. vMotion can now support up to 10ms of latency (up from the 5ms maximum required by vSphere 4.1). This is built in – there is no on/off button; ESX detects the link’s latency between source and destination to ensure the vMotion will be successful. In addition, we are introducing an HCL for stretched storage technologies such as NetApp MetroCluster and EMC VPLEX (in the past we did one-off support for VPLEX and other stretched clusters, but only for very specific configurations – see the KB article). Coupled with 4.1 features such as host affinity, customers can now build stretched clusters and move workloads across longer distances while being fully supported by VMware. A stretched cluster usually means one vCenter instance (or it could span two separate clusters). Storage requirements: works with any stretched-cluster validated configuration. vSphere 4.0/4.1 already made improvements to the speed of vMotion.

40 GeoSynchrony 5.0 for VPLEX
What’s new with VPLEX 5.0 (GeoSynchrony 5.0 for VPLEX):
Expanded 3rd-party storage support
VP-Copy for EMC arrays
Expanded array qualifications (ALUA)
VPLEX Witness
Host cross-cluster connect
VPLEX Element Manager API
VPLEX Geo
Speaker notes: EMC announced several new capabilities for VPLEX in 2011. VS2 is new VPLEX hardware that is more powerful and more efficient; we’ll discuss it in more detail in a few minutes. EMC is also introducing the latest release of the VPLEX operating system, GeoSynchrony 5.0, which runs on existing VS1 hardware as well as the new VS2. Other new GeoSynchrony 5.0 capabilities include the ability to move or copy virtually provisioned devices with VPLEX: previously, when a virtually provisioned (thin) volume was on the array, VPLEX converted it into a thick device before moving it, forcing a space reclaim after the copy reached the target array. Now VPLEX can move a virtually provisioned volume to another array non-disruptively while preserving its virtually provisioned nature, with no changes to the device – a key benefit for companies optimizing capacity with virtual provisioning, since storage is only consumed as needed. VPLEX supports VP-Copy for all EMC arrays with virtual provisioning in this release. There are several new array qualifications, including support for ALUA-based arrays, enabling VPLEX to support the full range of midtier arrays, including ALUA-based arrays from NetApp and HP. A major new innovation for VPLEX Metro and VPLEX Geo deployments is the new VPLEX Witness, which provides a new level of high availability: a witness running in a virtual machine arbitrates between two VPLEX clusters if there is a failure, keeping applications online with true automated high availability that doesn’t require human intervention. Host cross-cluster connect support enables applications to withstand a complete failure within a given site. The VPLEX Element Manager API is a REST-based programming interface that allows you to integrate your operational workflows with VPLEX; it is currently VPLEX-specific and allows VPLEX information to be used externally.

41 VS2: New VPLEX Hardware
Faster Intel multi-core processors
Faster engine interconnect interfaces
Space-efficient engine form factor
Third-party rack support
Speaker notes: VS2, the new VPLEX hardware, will be available for all three products: VPLEX Local, VPLEX Metro, and VPLEX Geo. It contains several advancements. EMC has taken advantage of faster Intel multi-core processors for increased compute power, and added a new PCI Express Gen 2 card to drive more I/Os per second and throughput than previous hardware generations. EMC has also reduced the size of the engines from 4U to 2U while increasing performance. The new 2U engine has eight front-end and eight back-end ports, which, in combination with the new PCI card, lets you maximize the engine’s complete profile and drive more bandwidth and I/Os per second than previous generations. EMC has also added a new, faster 10 GbE WAN interface for communications between clusters, even when deployed over distance with VPLEX Metro and VPLEX Geo. For greater flexibility, EMC will now support deploying VPLEX clusters in both standard EMC racks and custom or third-party racks, giving you more choice when planning your data centers and enabling VPLEX clusters to run in an even smaller footprint. As a further enhancement to VPLEX’s high-availability story, the upgrade process from previous generations of VPLEX is completely non-disruptive. Regardless of which hardware you are using for VPLEX, all existing and new GeoSynchrony capabilities are available on all VPLEX hardware platforms.
Customer example: Northern Hospital of Surry County migrated an entire datacenter, saving $500,000 in revenue – VPLEX paid for itself twice over in a single event, and as a hospital they did not have to interrupt healthcare. “I'm sure glad we made the DR investment. It took a lot of pressure off us. We ran the DR virtual farm over 50 hours. This is solid stuff. VPLEX is well worth the investment by the way.” – CIO, Northern Hospital of Surry County

42 VPLEX Witness Use with VPLEX Metro and VPLEX Geo
Use with VPLEX Metro and VPLEX Geo:
Coordinates seamless failover
Runs as a virtual machine within an ESX host
Connects to VPLEX through IP
Speaker notes: Now let’s look in more detail at the new VPLEX Witness, available at no extra charge with GeoSynchrony 5.0. The Witness enhances high availability for both VPLEX Metro and VPLEX Geo. It is a virtual machine deployed on a customer-provided ESX server that resides in a failure domain separate from either of the two VPLEX clusters. The Witness has IP-based connectivity with both clusters in either a VPLEX Metro or a VPLEX Geo, and it maintains knowledge about each VPLEX cluster's view of inter-cluster connectivity. In other words, the Witness constantly monitors connectivity and validates communication between the two VPLEX clusters, while each cluster also reports its health and status to the Witness as an independent third-party safeguard. By reconciling its own observations with the information reported by the clusters, the Witness can arbitrate between inter-cluster network partition failures and cluster failures, should they occur – think of it as monitoring the “heartbeat” between clusters. The Witness gives VPLEX the capability for applications to withstand any storage failure, including disaster-level scenarios affecting entire racks of storage equipment simultaneously. It provides automatic restart in the event of any server failure, and all failures are handled automatically with no human intervention or complicated failover processes. It can distinguish between a site failure and a link failure, and enables hosts, servers, and applications to fail over in step with VPLEX while keeping applications online. For VPLEX Metro deployments, the Witness offers a high-availability solution with zero RPO; with VPLEX Geo, due to the 50 milliseconds of latency between clusters, it offers near-zero RPO. For all VPLEX deployments over distance, the Witness provides a unique and robust enhancement for resiliency, high availability, and seamless failover.
[Diagram: LUN A presented AccessAnywhere across both clusters; the Witness integrates with hosts, clusters, and applications to automate failover and recovery.]

43 VPLEX Family Use Cases
AVAILABILITY: high availability; eliminate storage operations from failover; maintain availability and non-stop access by mirroring across locations (AccessAnywhere)
COLLABORATION: instant and simultaneous data access over distance; streamline workflow; enable concurrent read/write access to data across locations (AccessAnywhere)
MOBILITY: disaster avoidance; data center migration; workload rebalancing; move and relocate VMs, applications, and data over distance (AccessAnywhere)
Speaker notes: VPLEX also enables additional use cases under each category described earlier – mobility, availability, and collaboration. For mobility, you can take advantage of VPLEX’s disaster avoidance, data center migration, and workload rebalancing capabilities. For availability, VPLEX allows for a high-availability infrastructure as well as eliminating failover from storage operations. For collaboration, VPLEX enables instant and simultaneous data access over distance and streamlined workflows.

44 VPLEX Family Product Matrix
Mobility – Local: within a data center; Metro: synchronous, approximately 100 km; Geo: asynchronous, approximately 1,000 km
Availability – high availability, VPLEX Witness support, cross-cluster connected configuration
Collaboration – between two sites
Speaker notes: VPLEX offers mobility, availability, and collaboration within and between data centers at synchronous and asynchronous distances. This matrix will help you find the VPLEX family product that is right for you.

45 For More Information…
Using VPLEX Metro with VMware HA
vMotion over Distance Support with VPLEX Metro
VPLEX Metro HA techbook (also available at the EMC booth): ng_Technical/Technical_Documentation/h7113-vplex-architecture-deployment-techbook.pdf
VPLEX Metro with VMware HA: ng_Technical/White_Paper/h8218-vplex-metro-vmware-ha-wp.pdf

46 PART IV… What we’re working on….

47 The coolness is accelerating….
Ongoing SRM and VM HA enhancements
Hardening the Metro use case and enhancing support models: this includes a much more robust test harness – the result of a lot of joint work, and of a lot of demand for stretched cluster models
Improving stretched cluster + SRM coexistence
Too many things to cover today… here is a quick look at just a few of them…

48 VM Component Protection
Detect and recover from catastrophic infrastructure failures affecting a VM:
Loss of storage path
Loss of network link connectivity
VMware HA restarts the VM on an available healthy host
Speaker notes: With VM Component Protection, we’ll detect and recover from total storage path failures; this is an extension of VMware HA’s protection against server failure. Bad hosts will be marked and quarantined, allowing admins to fix hardware issues on their own time and preventing DRS from placing VMs on unhealthy hosts. The remediation is either a restart of the VM (in the case of VMware HA) or a failover to the secondary VM (in the case of Fault Tolerance). We will only ever restart or fail over the VM when we have a healthy host to move it to. We’ll detect failures of storage paths for all virtual disks that a VM depends on (e.g. its system drive, data drive, etc.). We won’t necessarily react to a single path failure if multi-pathing is used, but we will react if all storage paths fail. We intend to cover networking as well, so should a vSwitch, port group, or vDS become unavailable due to physical NIC or link failures, the affected VM (or set of VMs) will be failed over to a new host that has connectivity. This new feature would be “automatic” and available for every VM.

49 Automated Stretched Cluster Config
Leverage the work in VASA and VM Granular Storage (VSP3205)
Automated site protection for all VMs
Benefits of the single-cluster model
Automated setup of HA and DRS affinity rules
Speaker notes: Today we support HA for stretched clusters, but it is very difficult to manage, configure, and maintain. We continue to evolve our distance strategy and enable more automated setup and ongoing configuration of stretched clusters. Example: when you set up a stretched cluster, the storage is primary on one side. In a future release we will automatically set up the DRS affinity rules so that when a VM is provisioned, the VM and its associated VMDK are aligned to the SAME primary site. Finally, to avoid split brain, we plan to support and integrate with 3rd-party witnesses (storage arrays) to ensure failover is accurate and completely automated.
[Diagram: an HA/DRS cluster spanning Site A and Site B over a Layer 2 network, with distributed storage volumes.]

50 Increased Topology Support
[Diagram: topology options across Site A, Site B, and Site C – MetroHA (today) and SRM (future) over metro-distance storage clusters (sync), plus SRM (future) over geo-distance storage clusters (async).]
Speaker notes: In a future release SRM will support a wide range of stretched storage use cases, with full integration of 3rd-party storage technologies as they become available. Various topology options will be available: Metro active–active with 3rd-site SRM protection; Metro active–passive with integrated SRM protection; async geo distances with SRM protection.

51 Q & A – Part 1 – Questions from us to you.
“I think a stretched cluster is what we need… how do I know?”
“I think a DR solution is what we need… how do I know?”
“Stretched clustering sounds like awesomesauce – why not?”
“Our storage vendor/team tells us their disaster avoidance solution will do everything we want – HA, DA, DR. We are not experts here; should we be wary?”
“Our corporate SLAs for recovery are simple, BUT we have LOTS of expertise and think we can handle the bleeding-edge stuff. Should we just go for it?”

52 Q & A – Part 2 – Questions from us to you.
“Can we have our cake and eat it yet? We want BOTH solutions together.”
“Is there anything the storage vendors are NOT telling us that might make running this day to day costly from an opex point of view?”
“Why does one solution use a single vCenter yet the other uses two? The DR solution seems less flexible and more complex to manage – is that fair?”
“My datacenter server rooms are 50 ft apart, but I definitely want a DR solution. What’s wrong with that idea?”

53 Q & A – Part 3 – We would love to hear…
Looking to async distances…
Is “cold migration” over distance good enough for you, or is it live or nothing?
Would you pay for it? (The easiest litmus test of a “nice to have.”)
Would you be willing to be very heterogeneous to use it?
What are your thoughts on networking solutions (are you looking at OTV-type stuff)?

54 Want to Learn More? EMC Sessions
Tuesday
10:00 - SUP1006 – Accelerate the Journey to Your Cloud
12:00 - VSP1425 – Ask the Expert vBloggers
14:30 - SPO3977 – Next-Generation Storage and Backup for Your Cloud
16:00 - BCA1931 – Design, Deploy, and Optimize SharePoint 2010 on vSphere
16:30 - BCO2479 – Understanding VMware vSphere Stretched Clusters, Disaster Recovery, and Workload Mobility
17:30 - VSP1926 – Getting Started in vSphere Design
Wednesday
9:30 - BCA2320 – VMware vSphere 5: Best Practices for Oracle RAC Virtualization
9:30 - CIM2343 – Building a Real-Life High-Performance Financial Services Cloud
14:30 - LAS4014 – EMC Partner Hands-on Lab Design, Development, and Implementation
16:00 - SPO3976 – Real-World Big Data Use Cases Made Possible by Isilon OneFS Storage and VMware vSphere
Thursday
12:30 - BCA1931 – Design, Deploy, and Optimize SharePoint 2010 on vSphere
Want to play and try to break VNX, VNXe, VMAX, VPLEX, Isilon, Avamar VMs (plus many more)? Check out the VMware Partner HoL and the EMC Interactive Labs!
Solutions Exchange: EMC Booth (#1101) – demos, meetups, customer presentations, cloud architects; Interactive Demos (#1001)
Join the Hunt! Join the vSpecialist Scavenger Hunt – follow Twitter hashtag #vhunt for your chance to win!
After VMworld: keep the conversation going in the “Everything VMware at EMC” online community: emc.com/vmwarecommunity
