1
Multicloud as the next generation cloud infrastructure
Deepti Chandra, Jacopo Pianigiani
2
Agenda
The “Application-aware” Cloud Principle
Problem Statement in Multicloud Deployment
SDN in the Multicloud
Building Blocks
Building the Private Cloud – DC Fabric
Building the Private Cloud – DC Interconnect (DCI)
Building the Private Cloud – WAN Integration
Building the Private Cloud – Traffic Optimization
At a very high level, the purpose of this discussion is to: identify the problem statement, define requirements and propose the best feasible solution.
3
The Application-aware Cloud Principle
What is the best DC environment? The need for multicloud. Operational model yesterday -> what operators need today.
4
The Big Picture: Cooperative Clouds
The big picture: end users (people, vehicles, appliances, devices) and applications (made of software components) run in multiple locations (embedded in a device or vehicle, in a data center, public cloud VPCs, remote branch offices, telco POPs, the home) and in multiple environments (containers, VMs, BMS), all requiring connectivity, security, and manageability & operations across CPE, IP fabric, firewalls and multi-site DC / private cloud.
A simple, easy-to-use, versatile and open solution automates the transformation of multiple independent cloud infrastructures into a seamlessly managed multicloud, with full control and visibility over all services delivered through it. It leverages the principles of software-defined networking and network overlays by extending the managed infrastructure from the computes running in a private cloud environment to the physical network devices in a data center, the cloud interconnects and wide-area networks, and public cloud tenants, through a programmatic approach built on open APIs and standard protocols suitable for each environment where the infrastructure and overlay services are delivered.
It removes the complexity of networking technologies and automation by abstracting the capabilities of different cloud environments (each with its own ‘language’) into simple, intuitive concepts such as a “two-tiered web application service”, allowing the user to focus on revenue-generating services rather than painful operational networking procedures.
Enterprises want to be agile and accelerate value to the business by adopting a multicloud architecture. Additionally, with the increasing maturity and adoption of public clouds, many enterprises have started to leverage public cloud IaaS/PaaS/SaaS offerings to complement the private cloud offerings managed by their IT/cloud departments. However, this has led to a growing disconnect between the internally owned core business assets and applications running in the enterprise data centers, and the many purpose-built, application-specific environments that enterprise lines of business have created independently of their IT/cloud department using public cloud offerings.
What is needed is a holistic programmatic approach addressing each of the different cloud infrastructures: private clouds with data center devices, servers and hypervisors on physical and virtual compute platforms; public clouds where the infrastructure is managed by the cloud provider; and the wide area networks interconnecting these private and public cloud environments. Contrail Enterprise Multicloud manages all these disparate infrastructures as a unified ‘fabric’, allowing the user to create and manage application-to-application networking services (also known as ‘overlays’) as if these distinct cloud environments were a single seamless cloud, while monitoring their performance and behavior and automating the delivery of such services with simple point-and-click actions.
To reduce this complexity, enterprises are using cloud management platforms (CMPs) and cloud service providers (CSPs), or cloud service brokers (CSBs), so that they can manage their multiple clouds as one single cloud. Yet this means, he explains, that they can only use a “subset of features from each cloud; that is, take the ‘least common denominator’ approach.” So, in my view, there needs to be an assessment of the costs of cloud downtime.
I’d also ask: does that cost more if I deduplicate applications and data across cloud locations of the same provider or across different cloud providers? Using a multicloud should make financial and operational sense. There must be a business case for it, otherwise it won’t deliver any efficiency savings. It’s also important to consider the impact that network and data latency will have on each type of cloud. Where there is a need for private access, low latency can support time-critical applications, such as databases that need to sit behind security walls. Yet for other applications in the same organisation that are open, less time-critical and intended for remote or public access, public cloud is the way forward. So which cloud model should you adopt and embrace? Take cars as an example: there are different car models for different functions. Most families that live outside major conurbations own a number of variants: small commuter cars, people carriers such as SUVs for the family, and so on. Even in places such as London, with a myriad of public transport options where owning a car is often unjustifiable, you will still see people who own their own vehicles, while others hire a vehicle from time to time, take a taxi or use public transport. So, in the same way, there isn’t a single model that fits all aspirations and requirements. The same principle applies whenever an organisation selects a particular cloud model, including in the context of business and service continuity and disaster recovery.
5
The Traditional Way: “I need to deliver a service to my users”, so build, buy or lease an execution environment for applications and existing systems/apps across private cloud, public cloud and hosting. Decisions driven by: existing assets; skills and know-how; security and confidentiality; costs and TCO control; application-specific requirements (scale, latency, performance, hypervisors, …).
Risk of relying on one vendor: the “domino effect” was a prime example of what can happen when too much of the internet relies on one single service. Business applications: no ‘one size fits all’. Do you shop at a single store for all your clothing? Of course not. Businesses are undergoing a similar evaluation process as they determine what type of platform or service is ideal for individual applications. For example, for a non-competitively-differentiating, standard Microsoft Office 365 collaboration application, the public cloud (where security and reliability may not be as strong as other cloud options) might suffice. But for an application that is truly mission-critical and/or revenue generating, a virtual private cloud environment might be the better choice, or perhaps even no cloud at all, keeping the workload on premises. Many businesses are having success with similar hybrid IT approaches – using the cloud for more commodity-type services, while keeping workloads they’re not willing to chance in-house.
6
What Has Changed: Why Do Data Centers Need Multicloud? Today most applications leverage cooperation between components deployed across multiple cloud infrastructures: new applications (centralized and distributed), private cloud (racks, containers, VMs, resource pooling), public cloud (containers, BMS, DR, VM replication, SaaS, PaaS, bursting, BMSaaS), hosting, and existing systems/apps. Decisions driven by: user experience; costs and TCO control; agility (time to change); security and confidentiality; skills and know-how; application-specific requirements (scale, latency, performance, hypervisors, …). The need is unchanged, “I need to deliver a service to my users” – today, with the new cloud.
He then asks how multi-cloud relates to hybrid cloud, while noting that cloud model names revolve around patterns of use (e.g. public cloud, private cloud, and hybrid cloud). In his view, many people use multi-cloud and hybrid cloud interchangeably, yet both cloud models have some distinct characteristics. Hybrid cloud, for example, is commonly used to describe a public cloud used together with a private cloud. “If you use multiple public clouds with a private cloud, that is still a multi-cloud. (Some people might call it a hybrid multi-cloud, which is fine),” he explains. However, a multi-cloud architecture typically uses two or more public clouds. Essentially, the aim of multi-cloud is to avoid reliance on a single public cloud provider. That makes sense if the data is mirrored between the different cloud service providers to maximise uptime whenever disaster strikes – so long as the data created by an organisation is frequently backed up. Not every cloud provider offers the same products and features, and some will be better at supporting differing workload types than others. Sticking with a single cloud service provider will limit your choices and agility, and hamper your organization’s ability to maximize its overall cloud strategy. The ability to toggle between various cloud services, in order to meet needs that are changing daily (if not hourly), is also an important foundation for DevOps.
7
PROBLEM STATEMENT IN THE MULTICLOUD DEPLOYMENT
Challenges with the multicloud environment
8
Challenges of the Multicloud
A set of independent ‘fabrics’ leads to poor user experience, operational complexity and a lack of automation. Public clouds (AWS: CloudFormation, REST API, IPsec or Direct Connect, BGP; Azure: REST API, BGP, IPsec) and private cloud DCs (BGP EVPN/VXLAN, NETCONF, gRPC, BMS and virtualized workloads) each end up with their own team and tool for Day 0 configuration, software upgrades, service management, troubleshooting, and visibility and reporting. The result: different skillsets for different clouds, manual operations for daily tasks, long lead times for change management, and inconsistent visibility across distinct environments. Yet multicloud is pursued for benefits like infrastructure scale and elasticity, a better match between applications’ software structure (distributed microservices) and deployment (containers and virtual machines), operational cost savings, and fast development and release cycles.
9
A Day in the Life of DC/Cloud Operations
THE END USER: “I need a two-tier application execution environment with these characteristics”; “Can I have my DB cluster up and running by next week and connected to the Web front end?”
THE OPERATOR: provision tenant, image servers, create containers, create/select policies, request networking service to public cloud, create EC2 instance, ...
The result: COMPLEXITY, INCONSISTENCY, REVENUE LOSS and LONG LEAD TIMES across provisioning, management and visibility. “DB Cluster can’t talk to Web server” – need to correlate and contextualize: “which IP1/MAC1 on VNI X on Switch A can’t talk to IP2/MAC2 on VNI Y on Switch B?”
10
SDN in the Multicloud
11
WHAT DOES SDN OFFER IN THE MULTICLOUD?
MULTICLOUD NETWORKING AS A SERVICE
Single pane of glass orchestration across clouds
Visibility and unified management across clouds
Secure service delivery across clouds
Federation to unify controllers across clouds
Security and routing at the hypervisor (containerization)
Slide 6 explains why operators should use a multicloud environment; this section elaborates on why software-defined networking is needed. Use a single pane of glass management layer to manage across the clouds. Don’t forget about the performance requirements of moving data between the clouds. Ask yourself: ‘What is the impact of latency on your application for in-house use and for cloud users?’ Remember that data must be encrypted as it flows around the clouds, and this can’t easily be achieved with traditional WAN optimisation tools.
12
Multi-cloud Networking-as-a-service for Any Workload and Any Cloud
A CONTROLLER automates public clouds, private clouds / multicloud infrastructure and private clouds with any workload (HV), providing interconnect fabrics (private-to-public multicloud and private multicloud), one-click application services, and predictive analytics and visibility.
Contrail Enterprise Multicloud follows a pure software-defined approach that spans the boundaries and use cases across any physical or virtualized private cloud infrastructure and any public cloud, providing networking-as-a-service for workloads running on physical, virtualized or containerized form factors in any cloud environment, by integrating with orchestration tools such as OpenStack, Kubernetes, Mesos, OpenShift or VMware cloud management systems along with popular DevOps tools like Ansible and Helm. It unifies the semantics and policy automation of application-to-application networking independently of the cloud environment by using a common, consistent data model for overlay services/policies, while using a cloud-specific language to program and control each specific cloud environment. It allows the use of business-oriented language to define policies that restrict or allow applications communicating with each other in the multicloud environment. It also provides a consistent view of the performance and health of networking devices, application workloads, storage and computes across the multicloud.
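As a rough, hypothetical illustration of the “common, consistent data model” idea (this is not the Contrail data model or API; all class and function names below are invented), a single application intent can be rendered into per-cloud constructs:

```python
# Hypothetical sketch: one intent, rendered per cloud environment.
# This is NOT the Contrail Enterprise Multicloud data model; names are invented.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Tier:
    name: str
    workload: str            # "vm", "container", "bms" or a public-cloud instance type
    subnet: str


@dataclass
class AppService:
    name: str
    tiers: List[Tier]
    allow: List[Tuple[str, str, int]]   # (from_tier, to_tier, port) pairs allowed to talk


def render_private_cloud(svc: AppService) -> Dict:
    """Render the intent as overlay constructs (virtual networks + policies)."""
    return {
        "virtual_networks": [{"name": f"{svc.name}-{t.name}", "cidr": t.subnet} for t in svc.tiers],
        "policies": [{"from": f, "to": t, "port": p, "action": "allow"} for f, t, p in svc.allow],
    }


def render_public_cloud(svc: AppService) -> Dict:
    """Render the same intent as VPC-style constructs (subnets + security rules)."""
    return {
        "subnets": [{"name": f"{svc.name}-{t.name}", "cidr": t.subnet} for t in svc.tiers],
        "security_rules": [{"src": f, "dst": t, "port": p} for f, t, p in svc.allow],
    }


if __name__ == "__main__":
    web_app = AppService(
        name="shop",
        tiers=[Tier("web", "container", "10.10.1.0/24"), Tier("db", "bms", "10.10.2.0/24")],
        allow=[("web", "db", 5432)],
    )
    print(render_private_cloud(web_app))
    print(render_public_cloud(web_app))
```

The point is only that one declarative description (a “two-tiered web application service”) is written once and translated into whatever constructs each cloud understands.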
13
A Unified View Across Cloud and Networking Operations
Network services API exposed by the CONTROLLER (REST/HTTPS). The controller provides full lifecycle management of the data center infrastructure by automating data center operations for Day 0 (when a new data center is built), scale-out (adding racks of devices and servers), planned maintenance (such as software upgrades and rollbacks), daily tenant/service operations (such as creating a two-tiered web application networking service across any type of workload: bare metal servers, appliances, virtual machines, containers and public cloud instances), managing networking policies to control traffic flows within and across virtual networks, implementing advanced networking services such as service chaining (e.g. forcing specific flows to cross the boundaries of virtual networks), and managing the lifecycle of physical workloads (e.g. bare metal server PXE boot/reimaging, as well as appliance software imaging and configuration).
MANAGEMENT: underlay and overlay configuration based on role assignments; multiple roles and fabrics supported per device (IP Clos, interconnects); NETCONF towards devices. Enforce security based on workload mobility – use application attributes instead of network coordinates; visualize security intent and application topology.
CONTROL: MP-BGP EVPN/IP-VPN peering to DC devices and BGP to external fabrics (e.g. VPCs); routing equivalence between physical roles and virtualized elements; data planes EVPN/VXLAN, MPLSoGRE, MPLSoUDP over IP or IPsec; DHCP/TFTP and Neutron/kubectl integration at the compute (HV, VGW).
TELEMETRY & ANALYTICS: support of native device protocols (sFlow, gRPC) for both infrastructure and service elements, covering BMS, BMS with SR-IOV, bare metal servers, servers with OVS, containers and VMs.
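The controller speaks NETCONF (among other protocols) towards fabric devices. As a minimal sketch of what a configuration push could look like (the device address, credentials and the VLAN/VXLAN snippet are placeholders, and this is not the controller’s actual workflow), the ncclient library can be used roughly as follows:

```python
# Minimal sketch: push a config snippet to a leaf switch over NETCONF with ncclient.
# Device address, credentials and the XML payload are placeholders, not real values.
from ncclient import manager

LEAF = {"host": "192.0.2.11", "port": 830, "username": "admin", "password": "admin123"}

# Example overlay-related snippet (Junos-style XML); adapt to the target platform.
CONFIG = """
<config>
  <configuration>
    <vlans>
      <vlan>
        <name>tenant-1001</name>
        <vlan-id>1001</vlan-id>
        <vxlan><vni>5010</vni></vxlan>
      </vlan>
    </vlans>
  </configuration>
</config>
"""

def push_config(device: dict, config_xml: str) -> None:
    """Open a NETCONF session, load the snippet into the candidate datastore and commit."""
    with manager.connect(hostkey_verify=False, **device) as conn:
        conn.edit_config(target="candidate", config=config_xml)
        conn.commit()

if __name__ == "__main__":
    push_config(LEAF, CONFIG)
```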
14
Building Blocks
15
Data Center Requirements
Design requirements mapped to technology attributes:
Rising EW traffic growth: easy scale-out
Resiliency and low latency: non-blocking, fast fail-over
Agility and speed: any service anywhere
Open architecture: no vendor lock-in
Design simplicity: no steep learning curve
Architectural flexibility: EW, NS & DCI
16
Common Building Blocks for Data Centers
Public Internet DCI MPLS/IP Backbone Peering Routers Public Cloud Data Center Fabric Data Center Interconnect WAN integration Hybrid cloud connectivity CLOS Fabric (IP, MPLS) WAN or Dark fiber Private/Public WAN Private/Public WAN CORE High Performance Routing On-prem DC extension into public cloud DC Edge CORE COLO BASED INTERCONNECT Data Center 2 Service Edge Boundary Data Center 2 Spine DC Edge (collapsed) OR Spine DC Edge OR Spine DC Edge DC Edge DC Edge Cloud Edge TORs Data Center 1 Data Center 1 Data Center 1 (on-prem) Data Center 1 (public cloud)
17
Building the Private Cloud – DC Fabric
18
Defining Terminology…
Edge (DC edge): optional, can be collapsed into one layer
Fabric (DC core/interconnect)
Spine (DC aggregation layer), per POD (POD-1 … POD-N)
Leaf (DC access layer)
19
Building the DC Fabric Let’s start with the smallest unit – the POD
Leverage BGP constructs to achieve L2/L3 traffic and multi-tenancy
L3 gateway placement can be at the leaf or spine
Hierarchical route reflection for reduced control plane state and redundancy
Easy integration with L3VPN with no added provisioning
Service insertion for EW and/or NS traffic (inter-tenant, inter-subnet)
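For a concrete feel of the underlay behind these constructs, here is a toy sketch that enumerates the eBGP peering plan for one leaf-spine POD; the ASN ranges and addressing scheme are arbitrary assumptions, not a prescribed design:

```python
# Toy sketch: enumerate an eBGP underlay peering plan for one leaf-spine POD.
# ASN ranges, loopback and point-to-point numbering are arbitrary assumptions.
import ipaddress

def pod_peering_plan(num_spines: int, num_leaves: int):
    spine_asn = 65000                      # one shared ASN for the spines (one common scheme)
    leaf_asn_base = 65001                  # unique ASN per leaf
    p2p_block = ipaddress.ip_network("10.0.0.0/24")
    p2p_subnets = p2p_block.subnets(new_prefix=31)   # one /31 per leaf-spine link

    plan = []
    for leaf in range(num_leaves):
        for spine in range(num_spines):
            link = next(p2p_subnets)
            spine_ip, leaf_ip = link[0], link[1]
            plan.append({
                "leaf": f"leaf-{leaf + 1}", "leaf_asn": leaf_asn_base + leaf, "leaf_ip": str(leaf_ip),
                "spine": f"spine-{spine + 1}", "spine_asn": spine_asn, "spine_ip": str(spine_ip),
            })
    return plan

if __name__ == "__main__":
    for session in pod_peering_plan(num_spines=4, num_leaves=6):
        print(session)
```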
20
Architectural Flexibility
Containerization’s influence on network infrastructure. PROBLEM STATEMENT: communication needs to be enabled between 2000 containers (c1, c2, …, c2000) residing on servers spread across racks of an IP fabric; the connection between servers and TORs can be Layer 2 or Layer 3. Containers fire up and decommission very rapidly compared to virtual machines – a perfect fit for the ephemeral nature of today’s short-lived workloads, which are often tied to real-world events and “bursty.” And last but not least, the fewer dependencies of containers on the OS give more flexibility to deploy applications on different cloud providers and operating systems, which plays into organizational objectives of avoiding lock-in. The network must provide the ability to operationalize containers.
21
Building a Fabric for Containers
Container IPs are advertised over the protocol peering session between servers and TORs.
Layer 2 option: trunk ports with each app container identified by a separate VLAN (c1 on VLAN 1001, c2 on VLAN 1002, … c2000 on VLAN 3000), mapped to VNIs on hardware VTEPs in the IP fabric, with EVPN multihoming (ESI-A … ESI-E). Benefits: higher scale capabilities, active-active load balancing with open standards (EVPN N-way multihoming).
Layer 3 option: a routing agent (which can reside on the server hypervisor) advertises container IPs (IP1, IP2, … IP2000) to the TORs over BGP/OSPF sessions on /31 links, giving L3 load balancing and redundancy. Benefits: routing table scale, and greater provisioning simplicity by using unnumbered addresses for peering between servers and TORs (unnumbered interfaces attempt to do for IPv4 what link-local addresses offer for IPv6).
Host routing is difficult to manage at scale.
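To make the two options more tangible, the following hypothetical sketch (the VLAN/VNI ranges and container address block are assumptions) derives the VLAN-to-VNI mapping used in the Layer 2 option and the /32 host routes a routing agent would advertise in the Layer 3 option:

```python
# Hypothetical sketch of the two attachment options for containers on a server.
# VLAN/VNI ranges and the 172.16.0.0/16 container block are illustrative assumptions.
import ipaddress

CONTAINER_BLOCK = ipaddress.ip_network("172.16.0.0/16")

def layer2_mapping(num_containers: int, vlan_base: int = 1001, vni_base: int = 5001):
    """L2 option: each container gets its own VLAN on the trunk, mapped 1:1 to a VNI on the hardware VTEP."""
    return [
        {"container": f"c{i + 1}", "vlan": vlan_base + i, "vni": vni_base + i}
        for i in range(num_containers)
    ]

def layer3_host_routes(num_containers: int):
    """L3 option: the routing agent on the server advertises one /32 per container to the TOR."""
    hosts = CONTAINER_BLOCK.hosts()
    return [f"{next(hosts)}/32" for _ in range(num_containers)]

if __name__ == "__main__":
    print(layer2_mapping(3))
    print(layer3_host_routes(3))
```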
22
Design Flexibility WAN DC-1 DC-2 E E Edge (DC edge) F F Fabric (DCI + DC edge) F F Fabric POD-N POD-N POD-1 POD-1 S S S S Spine S S S S Spine L L L L L L Leaf BL BL L L L L Leaf Border-Leaf (DCI) RACK-1 RACK-2 RACK-N RACK-1 RACK-2 RACK-N App 1 App 2 App 3 Centralized or distributed routing – design choices based on requirements
23
Building the Private Cloud – DC Interconnect (DCI)
24
Let’s Draw a Picture – DCI
Super-Spine / Fabric Super-Spine / Fabric F F F F POD-N POD-N Dark Fiber Shared WAN Private Backbone POD-1 POD-1 S S S S Spine S S S S Spine BL L L L L L Leaf BL L L L L L Leaf RACK-1 RACK-2 RACK-N RACK-1 RACK-2 RACK-N Service Block (Security insertion) Service Block (Security insertion) App 1 App 2 App 3 App 4
25
EVPN – DCI Design Options
Over The Top (OTT): extended control plane; interconnect used as transport only (EVPN unaware); design simplicity; main consideration is the scaling constraint.
Segmented approach: clear demarcation; interconnect is EVPN aware; MPLS TE in the core with L2 stretch; suited to larger deployments; requires design thought.
Layer 3 DCI: clear demarcation; interconnect is EVPN unaware; MPLS TE in the core with NO L2 stretch; suited to larger deployments; Layer 3 only.
26
DCI with Data Plane Stitching
DCI Options
OTT DCI: DC-1 (EVPN-VXLAN) to DC-2 (EVPN-VXLAN) over an EVPN-unaware interconnect. Single data-plane domain (VXLAN tunnels end-to-end); support for L2 and L3 workloads; either an extended EVPN control-plane domain (MP-iBGP, same overlay AS across DCs) or a segmented EVPN control-plane domain (MP-eBGP, different overlay AS across DCs).
DCI with data-plane stitching: DC-1 (EVPN-VXLAN) to DC-2 (EVPN-VXLAN) over an EVPN-aware interconnect. Data-plane domains are confined per segment (VXLAN tunnels confined to each DC; VXLAN or MPLS confined to the WAN); support for L2 and L3 workloads; data-plane stitching or translation at the DC edge (with and without IT interfaces).
L3 DCI: DC-1 (EVPN-VXLAN) to DC-2 (EVPN-VXLAN) over an EVPN-unaware interconnect (e.g. L3VPN over an MPLS core). Data-plane domains (VXLAN tunnels) confined to each DC; only tenant IP routes advertised into the core; support for L3 workloads ONLY.
27
Over the Top (OTT) – DCI Control plane is extended across sites with the connecting infrastructure used as transport only (EVPN unaware) DCI (EVPN unaware) DC-1 (EVPN-VXLAN) DC-2 (EVPN-VXLAN) Data-plane domain (VXLAN tunnels end-to-end) Support for L2 and L3 workloads Extended EVPN Control-plane domain (MP-iBGP same overlay AS across DCs) OR Segmented EVPN Control-plane domain (MP-eBGP different overlay AS across DCs)
28
Segmentation of DC & WAN domains
Clear demarcation of DC and WAN boundaries; the connecting infrastructure is EVPN aware. DCI (EVPN-MPLS or EVPN-VXLAN) between DC-1 (EVPN-VXLAN) and DC-2 (EVPN-VXLAN). Data-plane domains: VXLAN tunnels confined to each DC; VXLAN or MPLS confined to the WAN. Support for L2 and L3 workloads. Data-plane stitching or translation at the DC edge.
29
Layer 3 DCI Only Layer 3 connectivity is extended across DCs (no Layer 2). Data plane domain is confined within DC and not extended across DCs. DC-1 (EVPN-VXLAN) DCI (EVPN-unaware e.g. L3VPN over MPLS core) DC-2 (EVPN-VXLAN) Tenant IP only routes advertised into the core Data-plane domain (VXLAN tunnels confined to DC) Data-plane domain (VXLAN tunnels confined to DC) L3VPN or EVPN Type 5 Support for L3 workloads ONLY
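Taken together, the three DCI options differ mainly in whether Layer 2 is stretched and whether the interconnect participates in EVPN. A small, purely illustrative decision helper based only on the criteria listed on these slides could look like this:

```python
# Illustrative decision helper for the three DCI options discussed above.
# The selection criteria are only those mentioned on the slides, nothing vendor-specific.

def pick_dci_option(need_l2_stretch: bool, interconnect_evpn_aware: bool, large_deployment: bool) -> str:
    if not need_l2_stretch:
        # No L2 stretch required: advertise tenant IP routes only (L3VPN or EVPN Type 5 in the core).
        return "Layer 3 DCI (EVPN-unaware core, L3 workloads only)"
    if interconnect_evpn_aware:
        # Clear DC/WAN demarcation with stitching at the DC edge; suits larger deployments.
        return "Segmented approach (EVPN-aware interconnect, data-plane stitching at DC edge)"
    if large_deployment:
        # OTT extends the overlay control plane across sites; watch the scaling constraint.
        return "Over The Top (OTT) - consider control-plane scale before choosing this"
    return "Over The Top (OTT) - simplest design, interconnect used as transport only"

if __name__ == "__main__":
    print(pick_dci_option(need_l2_stretch=True, interconnect_evpn_aware=False, large_deployment=False))
    print(pick_dci_option(need_l2_stretch=False, interconnect_evpn_aware=False, large_deployment=True))
```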
30
Building the Private Cloud – WAN Integration
31
How are host IP prefixes exchanged between the L3 gateway(s) and DC edge(s) so that they can be advertised out of the DC? Host routes (/32) can be exchanged between the L3 gateway(s) (leaf or spine) and the super-spine (the DC edge that connects to the WAN) using EVPN Type 5. The host IP routes are present in the IP-VRF (IP-VRF.inet.0) of the L3 gateway and are advertised towards the WAN as L3VPN (MPLS core) or EVPN Type 5 NLRI (MPLS or IP core).
32
EVPN Route Type 5 – Classification
draft-ietf-bess-evpn-prefix-advertisement Route Type 5 Pure Type 5 model (Interface-less IP-VRF to IP-VRF) Gateway address model (Interface-ful IP-VRF to IP-VRF) VXLAN VXLAN MPLS MPLS Type 5 route provides all necessary forwarding information Type 5 route needs recursive route resolution for forwarding. The lookup is for an IP prefix but forwarding information is extracted from Type 2 route
33
Pure Route Type 5 model: a Route Type 5 (IP: 100.0.30/24) is advertised between the DC gateway PEs, with per-tenant IP-VPNs (Tenant 1, Tenant 2) on each GW PE.
34
Packet Walk – Pure Route Type 5
Packet walk (H1 in VLAN 10, VRF_TENANT_1, IP1/MAC1 = 00:00:1e:63:c8:7c, behind ingress VTEPs LEAF-1/LEAF-2; H4 in VLAN 20, VRF_TENANT_1, IP4/MAC4 = 00:00:00:93:3c:f4, behind egress VTEPs LEAF-3/LEAF-4):
1. H1 sends the packet to its gateway: D-MAC = VRRP MAC, S-MAC = MAC1, D-IP = IP4, S-IP = IP1.
2. The ingress leaf routes into the tenant IP-VRF and rewrites the inner frame: D-MAC = router MAC of LEAF-3/4, S-MAC = router MAC of LEAF-1/2, D-IP = IP4, S-IP = IP1 (MAC-VRF/IP-VRF with VNI 5010/irb.5010 and VNI 5020/irb.5020 per VLAN, L3 VNI 1020).
3. The frame is VXLAN-encapsulated towards the egress VTEPs: outer D-IP = LEAF-3/4, outer S-IP = LEAF-1/2, VNI = 1020.
4. The egress leaf routes into the local MAC-VRF and delivers the frame to H4: D-MAC = MAC4, S-MAC = IRB MAC, D-IP = IP4, S-IP = IP1.
The data packet is sent as an L2 Ethernet frame encapsulated in a VXLAN header over an IP network across the DCs. The spine, which acts as a DC GW router, must be capable of performing L3 routing and IRB functions if EVPN/VXLAN is enabled between the DC GW router and the ToR; when EVPN/VXLAN is enabled at each DC GW router, the data packet going across the DCs will also be VXLAN-encapsulated. A globally unique VNI is provisioned for every customer L3-VRF; this VNI identifies the customer L3-VRF at the egress (like an MPLS table label). A chassis MAC is used as the inner DMAC for the VXLAN packet; the chassis MAC is shared among different customer IP-VRFs.
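To visualize the encapsulation described in this packet walk, the following Scapy sketch builds the VXLAN-encapsulated frame between the ingress and egress VTEPs; the MAC addresses, VTEP loopbacks and tenant IPs are placeholder values (only the L3 VNI 1020 is taken from the slide):

```python
# Sketch of the VXLAN encapsulation from the packet walk, built with Scapy.
# All addresses and MACs below are placeholders; only VNI 1020 comes from the slide.
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame after routing on the ingress leaf: destination MAC is the egress
# VTEP's router MAC, and the tenant IP header stays unchanged end to end.
inner = (
    Ether(src="00:aa:aa:aa:aa:01", dst="00:bb:bb:bb:bb:02")   # ingress/egress router MACs
    / IP(src="10.10.10.1", dst="10.10.20.4")                  # H1 -> H4 tenant addresses
)

# Outer header: ingress VTEP loopback to egress VTEP loopback, UDP 4789,
# with the VNI identifying the tenant's L3 VRF at the egress (symmetric IRB model).
packet = (
    Ether()
    / IP(src="192.0.2.1", dst="192.0.2.3")                    # LEAF-1 -> LEAF-3 VTEP loopbacks
    / UDP(sport=49152, dport=4789)
    / VXLAN(vni=1020)
    / inner
)

packet.show()   # prints the layered headers; nothing is sent on the wire
```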
35
EVPN Route Type 5 vs L3VPN NLRI
EVPN pure Route Type 5 NLRI (e.g. 5:<RD>::0::<prefix>::24/304) carries the RD, Ethernet Tag ID and prefix information; the L3VPN NLRI (<RD>:<prefix>/24) carries the RD and prefix information. Similar information is carried across both NLRI types.
36
Benefits with EVPN Type 5
Unified end-to-end solution with one address family inside and outside the DC
Data plane flexibility with EVPN – use over an MPLS or IP core
If you do not have MPLS between DCs for DCI, it is not possible to run L3VPN over VXLAN; for the control plane, Route Type 5 is the only option
Hybrid cloud connectivity (Type 5 with VXLAN over GRE/IPsec)
37
Building the Private Cloud – Traffic Optimization
38
What is VMTO? Virtual Machine Traffic Optimization
Resolves ingress and egress traffic tromboning, focusing on north-south traffic optimization NO Layer 2 stretch – different summary routes are advertised from each Data Center. NO traffic tromboning – H0 sends traffic to DC-1 to reach H1 and to DC-2 to reach H2 Remote host Route table on remote host /24 /24: NH DC-1 /24: NH DC-2 H0 WAN /24 /24 ... ... DC-1 L3 GW for VLAN 100 only in DC-1 L3 GW for VLAN 200 only in DC-2 DC-2 H1 H2 /32 /32 VLAN 100 VLAN 200
39
Ingress and Egress North-South Traffic Optimization
Layer 2 stretch – Host prefix routes are advertised from each Data Center NO ingress traffic tromboning – H0 sends traffic to DC-1 to reach H1 and to DC-2 to reach H8 NO egress traffic tromboning – To reach H0, H1 sends traffic to L3 gateway in DC-1 and H8 sends traffic to L3 gateway in DC-2 Remote host Route table on remote host /24 /32: NH DC-1 /32: NH DC-2 H0 WAN /32 /32 ... ... DC-1 L3 GW for VLAN 100 only in DC-1 (VGA: ) L3 GW for VLAN 100 only in DC-2 (VGA: ) DC-2 /32 H1 H8 /32 VLAN 100 VLAN 100
40
How to Avoid Egress Tromboning?
Remote Host H0 /32 WAN SPINE-1 SPINE-2 SPINE-3 SPINE-4 VGA VGA DC-1 DC-2 VGA VGA Each leaf device prefers local DC L3 gateways LEAF-1 LEAF-2 LEAF-3 LEAF-4 VLAN 100 VLAN 100 H1 /32 The distributed Layer 3 anycast gateway function ensures the local DC gateway is preferred (even on host migration)
41
How to Avoid Ingress Tromboning?
Due to the lack of specific host routes, the summary route from either data center could be preferred. Assume the BGP path selection algorithm on PE-3 prefers the summary route advertised from DC-1: /24 *[BGP/170] from DC-1 active, [BGP/170] from DC-2 inactive. Topology: H0 – PE-3 (WAN edge) – WAN – PE-1 (WAN edge) – DC-1_GW (DC edge) and PE-2 (WAN edge) – DC-2_GW (DC edge); Layer 2 is stretched across the DCs (EVPN-VXLAN in each DC), and the DC edge can be leaf/spine/super-spine. When host H0 needs to reach H2 (in DC-2), traffic from H0 is sub-optimally routed to DC-1 (hosting H1), which then forwards it to DC-2 (H2).
42
No Ingress Tromboning DC-1 DC-2 WAN
/32 *[BGP/170] from DC-1 active; /32 *[BGP/170] from DC-2 active. Topology: H0 – PE-3 (WAN edge) – WAN – PE-1 (WAN edge) – DC-1_GW (DC edge) and PE-2 (WAN edge) – DC-2_GW (DC edge); EVPN-VXLAN in each DC. No tromboning, due to host route availability (exact host location awareness): H1 in DC-1, H2 in DC-2.
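A tiny sketch of why host routes remove the ambiguity: with only per-DC summary routes the remote PE's best path may point at the wrong data center, whereas a /32 host route always identifies the DC that actually hosts the workload. All prefixes and next hops below are invented, since the slides omit the real addresses:

```python
# Longest-prefix-match illustration of ingress tromboning vs host-route advertisement.
# All prefixes and next hops are invented; the slides omit the real addresses.
import ipaddress

def best_next_hop(routes, destination):
    """Pick the most specific matching route (longest prefix match), like a router FIB."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, nh) for net, nh in routes if dest in net]
    return max(matches, key=lambda entry: entry[0].prefixlen)[1]

H2 = "10.1.100.8"   # a host that actually lives in DC-2

# With an L2 stretch and only summary routes, both DCs advertise the same /24;
# assume BGP tie-breaking left DC-1's advertisement active on the remote PE.
summary_only = [(ipaddress.ip_network("10.1.100.0/24"), "DC-1")]
print(best_next_hop(summary_only, H2))   # -> DC-1 : traffic trombones via DC-1 to reach DC-2

# With host routes advertised, the /32 from DC-2 wins by longest prefix match.
with_host_routes = summary_only + [
    (ipaddress.ip_network("10.1.100.1/32"), "DC-1"),
    (ipaddress.ip_network("10.1.100.8/32"), "DC-2"),
]
print(best_next_hop(with_host_routes, H2))   # -> DC-2 : no ingress tromboning
```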
43
Thank You