
1 Next-Gen Data Center – Software Defined
John Welby, Chief Strategist

2 Table of Contents
Introduction – What is SDN/SDDC?
Section 1 – SDDC Use-Cases
Section 2 – Why VMware NSX?
Section 3 – What is Cisco ACI?
Section 4 – Why Cisco ACI?
Section 5 – VMware SDDC Applications
Section 6 – SDDC Design Questions
Section 7 – Building an SDDC – Architectural Overview
Section 8 – ServiceMaster’s Data Center of the Future
Section 9 – Cisco ACI v. VMware NSX Design Comparison
Section 10 – Cisco ACI Solution and Components
Section 11 – The Next “Big Thing” in Data Centers
Section 12 – ACI Connectivity to Outside Networks

3 What is SDN/SDDC?

4 Software Defined Networking
Software-defined networking (SDN): a new approach to designing, building, and managing networks that separates the network’s control plane (the brains) from its forwarding plane (the muscle) so that each can be optimized independently.
SDN is not OpenFlow. OpenFlow is often thought of as synonymous with SDN, but it is only a single element in the overall SDN architecture: an open-standard communications protocol that enables the control plane to interact with the forwarding plane.
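To make the split concrete, here is a minimal conceptual sketch in Python (an illustration of the idea only, not any real controller’s API): the control plane computes forwarding decisions centrally and installs them, while the switch’s data plane does nothing but table lookups.

```python
class Switch:
    """Data plane: matches packets against entries installed from outside."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                   # match (dst) -> action (out port)

    def install(self, dst, out_port):          # called only by the controller
        self.flow_table[dst] = out_port

    def forward(self, packet):                 # pure lookup, no routing logic
        return self.flow_table.get(packet["dst"], "drop")


class Controller:
    """Control plane: the 'brains' that knows the topology and programs
    every switch; the switches themselves make no routing decisions."""
    def __init__(self, switches):
        self.switches = {sw.name: sw for sw in switches}

    def program_route(self, dst, hops):        # hops = [(switch_name, out_port)]
        for name, port in hops:
            self.switches[name].install(dst, port)


s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller([s1, s2])
ctrl.program_route("10.0.0.5", [("s1", 2), ("s2", 1)])
print(s1.forward({"dst": "10.0.0.5"}))         # -> 2
```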

5 Software Defined Data Center
A software-defined data center (SDDC) uses virtualization technologies to abstract hardware infrastructure into “virtual machines,” so that a service provider can offer computing and network services to many different clients. By virtualizing a data center, all of the resources of the system – computing, storage, and networking – can be “abstracted” and represented in software form.
Components of the SDDC include software-defined networking (SDN) elements, software-defined storage, and virtual machines.
Many different software platforms can be used to build the stack – Citrix, KVM, Red Hat, and VMware for compute virtualization, and technologies such as OpenStack, OpenDaylight, and OpenFlow on the orchestration and networking side – among many others.

6 Software Defined Data Center - Benefits
Because resources are represented in virtualized or abstracted software, physical connections and pieces of hardware do not have to be physically manipulated to make changes.
Using software to plan, provision, and manage services allows services to be automated or self-provisioned for customers, speeding up service deployment and avoiding the operational cost of expensive manual configuration.

7 DevOps via Amazon Web Services – Service Catalog
Create Account → Specify Environment → Let the Developing Begin (<5 minutes to set up)
One of the main drivers for moving to SDN/SDDC is to give IT a platform for providing development environments dynamically and quickly.
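For comparison, a hedged sketch of that self-service flow against AWS Service Catalog using boto3 – the product and artifact IDs below are placeholders for whatever the catalog administrator has actually published:

```python
import boto3

sc = boto3.client("servicecatalog", region_name="us-east-1")

# Launch a pre-approved product (e.g. a dev environment) from the catalog.
# ProductId / ProvisioningArtifactId are placeholders for real catalog IDs.
response = sc.provision_product(
    ProductId="prod-EXAMPLE123",
    ProvisioningArtifactId="pa-EXAMPLE123",      # the product version
    ProvisionedProductName="dev-env-jsmith",
    ProvisioningParameters=[
        {"Key": "InstanceType", "Value": "t3.medium"},
    ],
)
print(response["RecordDetail"]["Status"])        # e.g. CREATED / IN_PROGRESS
```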

8 Use-Cases
Compute-resource bursting (to any cloud provider/service)
Hybrid cloud
Active-active data centers
Implement NSX in SvM’s current DC architecture (non-Spine-Leaf)? Answer: Spine-Leaf is not required – NSX is an overlay; it does not care what the underlay is as long as IP connectivity is present.
What is a “next-generation data center?” One made up of devices that support APIs natively.

9 Things to Think About…
NSX v. Cisco ACI? (Kind of an artificial question.) The real question: “What type of network do you want to run NSX on?” A 40G fabric with 1.6 µs latency? Then Cisco ACI.
A key ACI benefit: manageability.
VMware’s NSX is not a network; it is an application. You can run NSX on top of ACI (an implicit recommendation of NSX’s own Design Guide), but you cannot run ACI on top of NSX.

10 Use-Cases / ACI Benefits
ACI allows collection and correlation of relational data from the ACI fabric for ALL systems connected to the fabric.
Push device/user/application metadata into the CMDB (Remedy).
The Cisco ACI fabric is the central nervous system of the data center and provides very granular information on all applications and devices sending traffic through the fabric.
An atomic clock source attaches to a spine; every packet is time-stamped with information such as the switches the traffic traversed, and much more.
The Cisco APIC can pull very detailed object and flow information from any part of the network and push it to the Remedy CMDB (Atrium) via UCS Director (see next slide). VMware NSX can do the same, but only within VMware’s hypervisor and up to an Edge Gateway.

11 CMDB Population – E.g. VMware VM Memory Size Change
A UCSD workflow creates a CMDB record (CI) and updates the CI when a user changes the memory allocation of the virtual machine.
Recommendation: use a SOAP GUI tool against the WSDL to generate the SOAP XML messages, which are then stored as structure in the Custom Task (JavaScript).
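The same WSDL-driven flow can also be scripted. Below is a hedged sketch using the Python zeep SOAP client – the endpoint URL, operation name, and CI field names are hypothetical stand-ins for whatever the Atrium web service actually publishes in its WSDL:

```python
from zeep import Client  # generic Python SOAP client; consumes the WSDL directly

# Hypothetical endpoint: the real WSDL published by the CMDB defines the
# actual service name, operations, and CI attribute names.
client = Client("https://remedy.example.com/atrium/cmdb?wsdl")

# Hypothetical operation: create/update a CI when a VM's memory changes.
client.service.UpdateCI(
    ciId="VM-12345",         # hypothetical CI identifier
    attribute="MemoryMB",
    value="8192",
)
```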

12 NSX?

13 What is NSX
VMware NSX is the network virtualization platform for the Software-Defined Data Center. NSX delivers for networking the capabilities VMware already provides for compute and storage: create, save, delete, and restore virtual networks on demand, without reconfiguring your physical network.
NSX brings security inside the data center with automated, fine-grained policies tied to the virtual machines, while its network virtualization capabilities let you create entire networks in software.
1 Source: VMware SE

14 Why NSX1 – Security, Speed, Agility
Speed: NSX software deploys in hours, and security policies can be turned on in weeks. A finely-tuned NSX micro-segmentation with complex segmentation has been deployed in four weeks (including discovery).
Agility: the entire security policy can be changed in real time with no dependency on the IP schema, and can be rolled back in seconds.
1 Source: VMware SE

15 Why NSX – Security1
Dynamic security based on Active Directory for VDI: dynamic security policy for VDI based on the user’s A/D state. Micro-controls are turned on when the user logs in, and different groups of users can have unique rules applied. NSX easily enforces boundaries between VDI instances – Cisco Secure Enclaves cannot do this.
Dynamic automated threat response with ecosystem partners: ecosystem IPS/IDS or agentless anti-virus tagging, so differentiated firewall policies can be applied dynamically. NSX delivers a self-securing network with a firewall on every virtual machine.
1 Source: VMware SE

16 Why NSX – Security1: NSX Simple Security v. Cisco Secure Complexity
NSX micro-segmentation puts a stateful L2-L4 firewall on every VM to deliver zero trust. Cisco macro-segmentation depends on zones of trust in the Enclaves and primitive access control lists.
PCI compliance: NSX stateful L2-L4 v. Cisco stateless L2-L4.
Service chaining between the VM and the vNIC: NSX can service-chain McAfee-Intel, Check Point, Palo Alto Networks, and others to deliver L2-L7 security on every packet coming out of every VM. NSX enables IPS/IDS deep packet inspection, data loss prevention, and other advanced services.
1 Source: VMware SE

17 Why NSX – Security1: NSX Simple Security v. Cisco
Security policy based on vCenter objects v. Cisco’s IP-address-based insecure trust zones: NSX can apply policies based on logical vCenter objects with no regard to IP address. Check Point can use vCenter objects to apply policy, Palo Alto Networks uses NSX policy groups, and McAfee depends on NSX for logical security groupings. Cisco depends on IP addresses and very complex firewall rules, and cannot accomplish the same level of granularity without an unmanageable number of them.
NSX micro-segmentation in 30 days v. Cisco macro-segmentation in 6-12 months: NSX, using an Agile methodology, can deliver very quickly; Cisco, using a Waterfall methodology, requires wholesale network changes and complex firewall rules.
vMotion with NSX security v. Cisco with no vMotion security: NSX applies security policies independent of the location and IP address of the workload, so workloads can move to any server in the DC, or across DCs, with fully enforced security policies. Cisco depends on IP address information and cannot achieve ubiquitous security enforcement (i.e., problems moving between DCs).
1 Source: VMware SE

18 Why NSX – Use Existing Hardware / Risk of Change1
Management: NSX provides visibility that Cisco cannot, at every step of the virtual data path:
Out of the NIC, into the NSX firewall
Out of the NSX firewall, into the ecosystem partner firewall
Out of the ecosystem partner firewall, into the ecosystem partner IPS/IDS
Out of the ecosystem partner IPS/IDS, into the Virtual Distributed Switch (VDS)
Out of the VDS, into the Virtual Router (VR)
Out of the VR, onto the physical NIC
1 Source: VMware SE

19 Why NSX – Disaster Recovery1
vMotion across data centers: with NSX ensuring security and extending logical networks between physical data centers, DR is easy:
Loads can be vMotion’d across the country without changing physical firewall security policy.
Security policies are always enforced, including VMware’s ecosystem partner integrations – stateful L2-L7.
No need to change IP addresses as VMs move.
NSX spans vCenters.
1 Source: VMware SE

20 Why NSX – Disaster Recovery1
An NSX deployment solves the security challenges by allowing any workload to exist on any server in any DC.
NSX frees up 10%–30% of the available compute capacity by solving the security issue.
Investments in NSX have returned 2x–10x savings in compute, storage, and power/cooling.
1 Source: VMware SE

21 Cisco ACI

22 What is Cisco ACI? Overview1
“A method of building and maintaining a networking fabric offering significant software control, automation, and wire-speed switching on a very large scale.”
Historical, packet-level detail on traffic.
A re-invention of data center networking using a model that is essentially L2- and L3-independent: the concept of tenants compartmentalizes traffic intrinsic to a logical grouping of applications and services, while VXLAN is used outside of those groupings.
VXLAN allows use of the same IP subnets, VLANs, and MAC addresses within different tenants without any conflicts.
Can be integrated with Cisco UCS to automate the entire data center: UCS controls VM and bare-metal server provisioning, and UCS Director’s integration with the Cisco ACI API enables dynamic network fabric configuration.
1 InfoWorld review: “Cisco ACI shakes up SDN”

23 What is Cisco ACI? Nuts and Bolts
“Another way of doing SDN.” ACI diverges from the OpenFlow approach:
ACI communicates with network devices using OpFlex (Cisco’s operations flow control protocol), not OpenFlow.
Critical distinction: OpFlex places the network configuration decisions in the network, NOT in the controller. The Cisco ACI controller (APIC) abstracts the higher-level configurations; the ACI fabric is responsible for implementing the controller’s instructions and reporting back on success/failure.
OpFlex has been proposed as an IETF standard and as part of OpenDaylight.
OpFlex is built around a complete RESTful API, which relies heavily on the Python programming language.

24 What is Cisco ACI? Nuts and Bolts
ACI is designed as networking for data centers – very large data centers.
Based on the Cisco Nexus switching fabric (Nexus 9000): a Spine/Leaf design with 10G, 40G, and 100G connectivity.
Logical tiering segmentation via the concept of End Point Groups (EPGs). EPGs are not servers or VMs, but essentially subnetworks that contain those resources.
Communication between EPGs must be explicitly configured through Contracts (by default, no traffic is allowed). Contracts can include rules that apply to outside devices, such as firewalls and load balancers.
APICs handle all of this back-end configuration. APICs sit outside of the data path and are not required for the fabric to function (but no changes can be made without them).
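To make the EPG/Contract model concrete, here is a hedged sketch of the kind of JSON payload the APIC accepts (the tenant, application profile, EPG, and contract names are made up; the class names fvTenant, fvAp, fvAEPg, fvRsCons, and vzBrCP come from ACI’s object model, though a real payload carries more attributes):

```python
import json

# Would be POSTed to https://<apic>/api/mo/uni.json by an authenticated
# session (see the login sketch a few slides later).
tenant_payload = {
    "fvTenant": {
        "attributes": {"name": "DemoTenant"},        # hypothetical tenant
        "children": [
            # A contract that EPGs must reference before traffic can flow.
            {"vzBrCP": {"attributes": {"name": "web-to-db"}}},
            {"fvAp": {
                "attributes": {"name": "App1"},       # application profile
                "children": [
                    {"fvAEPg": {
                        "attributes": {"name": "web"},
                        "children": [
                            # "web" consumes the contract; a "db" EPG would
                            # provide it (fvRsProv) to permit the traffic.
                            {"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-db"}}}
                        ],
                    }}
                ],
            }},
        ],
    }
}

print(json.dumps(tenant_payload, indent=2))
```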

25 Cisco ACI Nuts and Bolts
APICs serve up the fabric configuration, provide an admin web UI, and host the RESTful API that ACI is built around.
ACI configuration data and state data are stored in SQLite on the controllers and sharded across the controller cluster.
The ACI fabric routes flow traffic based on lookup tables maintained in the fabric itself: local endpoint lookup tables are stored on the leaves, and lookup tables for the rest of the fabric are stored on the spines.
NO ARP or broadcast flooding is required – the fabric already knows the location of every endpoint (ARP and broadcast flooding can be allowed if required).
Wire-speed switching; flowlet-based load balancing.
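A toy model of that two-level lookup, just to illustrate why the fabric needs no ARP flooding (an intentionally simplified sketch, not Cisco’s actual data structures):

```python
# Leaf: knows only its locally attached endpoints.
leaf_local = {"10.0.1.5": "eth1/12"}

# Spine: proxy table mapping every endpoint in the fabric to its leaf.
spine_proxy = {"10.0.1.5": "leaf-101", "10.0.2.9": "leaf-103"}

def leaf_forward(dst_ip: str) -> str:
    if dst_ip in leaf_local:                      # known locally
        return f"local port {leaf_local[dst_ip]}"
    return "encapsulate in VXLAN and send to spine proxy"

def spine_forward(dst_ip: str) -> str:
    # The spine already knows every endpoint, so unknown-unicast
    # flooding and ARP broadcasts are unnecessary by default.
    leaf = spine_proxy.get(dst_ip)
    return f"forward to {leaf}" if leaf else "drop (unknown endpoint)"

print(leaf_forward("10.0.2.9"), "->", spine_forward("10.0.2.9"))
```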

26 Cisco ACI – Flowlet-Based Load-Balancing
ACI uses all paths; ACI knows and tracks everything.
ACI breaks up a flow and can send packets from a single flow over multiple paths: a flowlet is a burst of packets separated from the rest of its flow by an idle gap long enough that it can take a different path without reordering.
More granular than ECMP: unlike ECMP, all links can be used for the packets of one flow.
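A minimal sketch of the idea (illustrative only – the gap threshold and path choice here are placeholders, not ACI’s actual algorithm):

```python
import random
import time

FLOWLET_GAP = 0.0005            # 500 µs idle gap (illustrative threshold)
PATHS = ["path-1", "path-2", "path-3", "path-4"]
state = {}                      # flow_id -> (last_seen, chosen_path)

def pick_path(flow_id: str) -> str:
    """Flowlet load balancing: a flow may switch paths only after an idle
    gap, so packets within one burst stay ordered on a single path."""
    now = time.monotonic()
    last_seen, path = state.get(flow_id, (0.0, None))
    if path is None or now - last_seen > FLOWLET_GAP:
        path = random.choice(PATHS)      # new flowlet: free to re-balance
    state[flow_id] = (now, path)
    return path
```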

27 Cisco ACI – Dispelling the AVS Myths
Roadmap (Q1 2016): AVS is not going away and in the near future WILL BE supported by VMware (2016 timeframe). AVS uses the same API as the Nexus 1000V.
Next maintenance release: true micro-segmentation – intra-EPG isolation (does not require AVS), attribute-based EPGs, and common policies.
µSegmentation example: block certain types of web servers from communicating, e.g. Finance web servers cannot communicate with Manufacturing web servers even though they are in the same EPG.

28 Cisco ACI – Configuration and Management
Every element of ACI can be controlled via the RESTful API.
Cisco ACI IS NOT an API bolted onto the supplied administration tools or running alongside the solution: the API IS the administration tool. The CLI and web UI use the API to perform every task (the CLI uses a Python script that calls the API).
ACI is an extremely open architecture. Cisco actively contributes and maintains code in conjunction with customers and others interested in ACI; Cisco’s GitHub repository contains commits from a number of developers who aren’t employed by Cisco.
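For a flavor of that API, a hedged sketch of authenticating to an APIC and listing tenants with plain Python requests (the controller address and credentials are placeholders; aaaLogin and the class-level query are standard APIC REST endpoints):

```python
import requests

APIC = "https://apic.example.com"     # placeholder controller address
session = requests.Session()

# aaaLogin returns a session token, kept by the Session as a cookie.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)

# Class-level query: fetch every tenant object in the fabric.
resp = session.get(f"{APIC}/api/class/fvTenant.json")
for mo in resp.json()["imdata"]:
    print(mo["fvTenant"]["attributes"]["name"])
```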

29 L4-7 Network Services Integration
The architecture allows easy integration of L4-7 services into the data center fabric.
DC architectures are evolving from north-south traffic to east-west.
ACI allows automatic traffic steering from one app tier through a chain of L4-7 service devices and back to another app tier, with automatic L4-7 device configuration.

30 L4-7 Network Services Integration – Service Graphs
A Contract connects a Consumer EPG and a Provider EPG; an L4-7 Service Graph attaches to the contract.
Managed v. unmanaged services:
Unmanaged Service – does not program the device; programs the leaf and fabric-side network only, during service graph instantiation.
Managed Service – managed by the APIC using Device Packages (changing in v2.x).

31 Cisco ACI – Troubleshooting and Maintenance
ACI provides several tools to assist in problem detection and resolution under the Operations section of the web UI.
Select two endpoints, and ACI will show you how they are connected across the fabric, identifying the leaves and spines the packet traverses. ACI can also go back in time with low-level detail.
Switch Port Analyzer (SPAN) and Encapsulated Remote SPAN (ERSPAN) functions direct all traffic between two endpoints or fabric objects to a port elsewhere on the fabric.
ACI configurations can be condensed into a single JSON or XML file for backup and uploaded to a server at regular intervals; individual tenant configurations can be backed up as well.
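As a rough illustration of that single-file JSON export (a sketch reusing the authenticated `session` from the earlier login example; production backups would normally go through ACI’s scheduled export policies):

```python
# Pull the whole management information tree under "uni" as one JSON
# document and save it locally. rsp-subtree=full expands all children.
resp = session.get(f"{APIC}/api/mo/uni.json", params={"rsp-subtree": "full"})
with open("aci-config-snapshot.json", "w") as f:
    f.write(resp.text)
```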

32 Cisco ACI External Integrations
ACI manages the networking functions of virtualization platforms such as Hyper-V, Xen, and VMware (but does not manage VMs or provision servers).
To achieve a software-defined data center (SDDC), ACI can integrate with orchestration tools via the RESTful API, e.g. VMware’s vRealize Orchestrator and vRealize Automation.

33 Cisco ACI – Cisco-to-Cisco Integrations
ACI can integrate with Cisco Unified Computing System (UCS). With UCS + ACI, an entire data center can be automated:
UCS controls VM and bare-metal server provisioning.
UCS Director’s integration with the ACI API allows dynamic network fabric configuration.

34 Why Cisco ACI?

35 Legacy Data Center v. Cisco ACI
At the 20,000-foot level:
Legacy data center architecture – 3-tier, 10G: not optimized for east-west traffic; lots of cabling, cooling, real estate, etc.
Cisco’s answer – ACI, with intelligence in the data center:
Spines implement BGP Route Reflector functionality, simplifying routing by eliminating a full-mesh architecture (similar in concept to OSPF’s DR/BDR construct).
Modular growth – add leaves as more ports are needed.
“ACI is all VXLAN” – think of ACI as “hypervisor virtualization” for the network.
Micro-segmentation in ACI = EPG-based constructs.

36 Cisco ACI Security
A unified, layered security architecture: perimeter FWs, IPS/IDS, zone-based FWs, VPN, Network Access Control (NAC), Data Loss Prevention (DLP), encryption, security event logging, and vulnerability assessment.
The security architecture must include a solution that is agnostic to workload type (bare metal, VMware, Hyper-V, KVM, OpenStack, mainframe, etc.).
Cisco ACI is, by default, a distributed stateless firewall that DENIES any traffic that has not been permitted between devices.
Cisco ACI micro-segmentation is achieved inside the hypervisor by replacing the VMware VDS or Logical Switch with the Cisco Application Virtual Switch (AVS).
Cisco ACI has been verified by Verizon to meet or exceed PCI requirements.

37 Cisco ACI Security
The Cisco ACI solution both reduces the scope of infrastructure handling cardholder data that must be included in an audit and makes a PCI audit easier to pass.
Verizon’s Qualified Security Assessors (QSAs) have validated, within Cisco’s labs, that Cisco ACI can be used to reduce PCI scope and simplify the management of segmentation. See Verizon’s ACI and PCI Compliance Audit, Assessment and Attestation report for details.
Symantec, a leading software security vendor, chose ACI for its SDN solution.

38 Cisco ACI Security
Cisco ACI allows customer choice, with service chaining/insertion support from a partner ecosystem of over 45 partners covering:
Automation – orchestration, enterprise monitoring, operations, PaaS, cloud orchestration and management
Security and governance – analytics, enterprise monitoring and security, DLP, DPI, IPS, IDS, malware, etc.
Big data and analytics
Cisco ACI allows anyone to use the open APIs without requiring Cisco’s permission.

39 Cisco ACI Security
Cisco ACI provides scalable L4-7 service insertion at 10G+ line-rate performance for ALL workloads, physical or virtual.
Cisco ACI can provide NFV via VMM integration inside the hypervisor, leveraging APIs with VMware vSphere, Microsoft Hyper-V, and KVM (in development).
VMware NSX does not provide 10G+ line-rate performance for any of its service insertion, since it is software-driven.

40 VMware SDDC Applications

41 VMware Applications – vRealize Automation + NSX + vRealize Operations
vRealize Automation: time is money.
IT agility: streamline the implementation strategy and provision services in a fraction of the time.
Export/import Blueprints as code (improves version control); service Blueprints can be edited as text files to facilitate DevOps deployments.
NSX + vRealize Automation => on-demand application delivery: “IT as a strategic business partner to help deliver innovative services back to the business to help meet their goals.”

42 VMware Applications

43 VMware Applications

44 Design Questions

45 Design Considerations
NSX
Physical failure impact on applications/workflows
Logical teaming design v. physical teaming design
A single-vCenter deployment, or a large-scale multi-vCenter deployment?
NSX Edge form-factor sizing: Edges can be resized, ideally during a scheduled outage; it can be done in production, but a few packets would be dropped.

46 Design Considerations
In General
Isolate fault domains – do not extend L2 (i.e., OTV) across data centers (Aryo)
Layer 3 between data centers – MP-BGP (Aryo); OSPF converges much faster than BGP
FedEx may go with an ACI controller per 2-5 racks (fault isolation)

47 Design Considerations
Cisco ACI
ACI forwards by MAC or IP address (host /32); learned IP addresses are installed as /32 host routes.
No flooding on the network (by default): leaves convert broadcasts and multicasts to unicast toward the spines (per bridge domain).
In a greenfield ACI deployment, VLANs are not needed for forwarding or pathing; in a brownfield deployment, a VLAN:EPG mapping bridges ACI and the legacy data center (VLANs).
The ACI fabric provides a pervasive SVI that allows for a distributed default gateway.

48 Design Considerations
ACI Distributed Default (Pervasive) Gateway
[Diagram: spines and leaves connected by a 40-Gbps VXLAN fabric]
The distributed default (pervasive) gateway is accomplished in ASICs. VMware’s Distributed Logical Router (DLR) is similar in function, but the DLR is implemented in software (it consumes sockets and machine resources).
The default gateway sits on every leaf.

49 Design Considerations
ACI Multi-Hypervisor Normalization
[Diagram: the 40-Gbps fabric normalizes VLAN, VXLAN, and NVGRE encapsulations from Microsoft Hyper-V, Red Hat KVM, VMware ESX, and bare-metal servers]

50 Building an SDDC – Architectural Steps

51 Architectural Steps
Define the application use cases and entry points for the SDDC: self-service provisioning of IT infrastructure resources in support of cloud-based apps.
Identify the abstracted infrastructure layers required by the application and process use cases: expose the abstraction layer(s) and virtualization requirements for storage, network, and compute.
Define the abstracted/virtual infrastructure services: define detailed DC services based on application and process requirements, NOT on current infrastructure capabilities.

52 Architectural Steps
Perform an infrastructure assessment focusing on use-case requirements and DC services: can alternatives fill gaps in the ability to deliver abstraction, instrumentation, programmability (API), automation, policy-based management, and orchestration?
Define policies for infrastructure services and process requirements, leveraging northbound and southbound APIs, policies, and automation.
Implement SDDC components based on the use case(s). Test the northbound and southbound APIs associated with the control and data planes to ensure infrastructure interoperability.
Integrate software-defined security: programmatically enforce security policies to ensure the workflow models are enforced across the infrastructure layers.

53 Architectural Steps Integrate policy-based orchestration and management. Policy-based orchestration and security requirements may be provided by a cloud management layer that resides above the SDDC

54 Future Data Center

55 Design Considerations
Future Data Center Design Considerations
[Diagram: two Nexus 9504 spines over a 40-Gbps VXLAN fabric, Nexus 9372PX leaves, a Nexus 7K DC core, and IBM Flex Chassis #1 and #2]

56 Future DC with NSX Overlay
[Diagram: an NSX Edge node connects to redundant Nexus 7000s (HA) over an Edge VLAN, to the NSX Distributed Router via a Transit Logical Switch (VXLAN), and down to VXLAN Logical Switches, all riding on the Nexus 9372 leaves]

57 NSX Transport Zone Design1
[Diagram: a single Transport Zone spanning the Zayo DC and the LaGrange DC; each site runs a Compute VDS (Compute Clusters 1 and 2) and an Edge VDS (Mgmt/Edge Cluster), with two X-Large NSX Edges connecting the external network to an NSX Universal Distributed Router and Universal Logical Switches]
1 Kenyon Hensler (VMware Sr. Systems Engineer) approved

58 Cisco ACI v. VMware NSX Design Comparison

59 Cisco ACI v. VMware NSX Design Comparison

60 Network Differentiators
Network automation
ACI: Enables full automation of all virtual and physical network parameters through a single API. Works with Nexus 9K + AVS + DVS to instantiate the DVPort Group.
NSX: Automation is limited to virtual networks for virtual machines; no automation of physical switches is possible.

Connecting to legacy networks
ACI: Legacy workloads can be directly connected to the fabric, and/or legacy network devices can connect to the fabric using standard multi-chassis LACP, providing sub-second convergence. Can create a Border Leaf pair and run dynamic routing protocols over routed links from the core.
NSX: L2 software gateways running on ESXi must be used, which provide limited performance and slow (worse than sub-second) convergence in failure scenarios. Multicast is not supported (no PIM).

Routing implementation
ACI: Routing is implemented in hardware at line rate for east-west and north-south traffic, regardless of the workload connected to the fabric (physical or virtual), with sub-second convergence in failure scenarios.
NSX: Once the DLR is implemented, stateful routing is not possible, and L2/L3 VPN cannot support routing protocols over tunnels. Maximum of 8 Edges (the 8-way DLR/ECMP limitation): lose one DLR, and you lose 1/8th of your routing. East-west routing can be done at the hypervisor, but requires an extra distributed logical router control VM per IP domain space; north-south routing requires deploying ESR VMs per IP domain space. Redundancy and load balancing are possible, albeit with slow convergence. Requires approximately 20 VMs – 16 Edge [8 x 2 (HA)] minimum, 2 for L2 bridges, 2 for the L3 DLR – essentially a rack of compute for networking. No multicast (can flood). NAT requires a multi-tier network of NSX Edges (increased licensing cost).

Network availability
ACI: Sub-second convergence is possible for link and node failures.
NSX: Routing and bridging services implemented in servers and/or VMs react slowly to network changes; sub-second convergence is not attainable. Can be tweaked to ~6 seconds on the NSX Edge by bumping up timers at the DLR (inside), but internal route flaps can affect the external network.

Advanced network service insertion
ACI: L4-7 service insertion is done via an open SDK, where partners develop a device package to represent their services to the APIC controller. Supports both virtual and physical service appliances.
NSX: L4-7 service insertion is accomplished through a closed API and is subject to licensing. Supports only virtual service appliances from certified partners (i.e., no physical appliance support).

61 Server Differentiators
Server scale
ACI: Network-centric.
NSX: Requires more physical servers with corresponding licenses, which also drives up port-count requirements in the network.

Hypervisor support
ACI: All major hypervisors – VMware, Hyper-V, Xen, KVM.
NSX: Optimized for ESXi; there is, however, another product, NSX Multi-Hypervisor, that accommodates hypervisors other than ESXi.

62 NSX Dual-DC Ingress Route Correction Tutorial

63 NSX Cross-DC Ingress Route Correction after vMotion of a VM

64 Cisco ACI Solution and Components

65 Cisco ACI Solution

66 Cisco ACI Solution Components – Prime Service Catalog
A self-service portal that allows employees to order IT services; the portal to the Unified Catalog.
Can front-end Cisco Intelligent Automation for Cloud or third-party cloud management and orchestration.

67 Cisco ACI Solution Components – UCSD
The orchestration engine; allows the creation of “basic” service offerings.

68 Cisco ACI Solution Components – VACS
Virtual Application Cloud Segmentation: allows the creation of “fenced-in containers” (“secured sandboxes”) and the insertion of additional services.

69 Cisco ACI Solution Components – ICF
Intercloud Fabric: creates and manages workloads across multiple clouds, and allows moving VMs to other providers.

70 The Next “Big Thing” in SDDC

71 The Next “Big Thing” in SDDC
Containers. For example, Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
How is this different from virtual machines? Containers have resource isolation and allocation benefits similar to virtual machines, but a different architectural approach makes them much more portable and efficient.
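A quick taste of that portability, sketched with the Docker SDK for Python (assumes a local Docker daemon and the docker package installed; the image and command are arbitrary examples):

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # talk to the local Docker daemon

# The image carries code, runtime, and libraries, so this behaves the
# same on any host that can run Docker.
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,            # clean up the container when it exits
)
print(output.decode())
```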

72 The Next “Big Thing” in SDDC
Containers: think of them as “virtualized applications on bare metal.” Users see one application, but there may be 10, 100, or 1,000 servers servicing that application.

73 The Next “Big Thing” in SDDC
Databases running in memory – for example, SAP HANA.

74 ACI Connectivity to Outside Networks

75 Data Center Interconnect – “Stretched Fabric”
[Diagram: Transit Leaf and Border Leaf at each site, interconnected across the WAN]
ACI supports a stretched fabric at distances of 30 km, or 50 km over DWDM.

76 Data Center Interconnect
[Diagram: Transit Leaf and Border Leaf at each site, interconnected through Nexus 7Ks running OTV]

