
1 Arista Network Equipment deployment at USC
Azher Mughal
National Research Platform, Montana State University, August 8, 2017
Presented by: Information Technology Services

2 Disclaimer: This presentation only describes the Arista hardware and software used at USC. We are not endorsing Arista, nor marketing on their behalf.

3 The Features desired by USC
- Future-proof platform
- Operationally easy to understand and maintain (industry-standard commands)
- Network core with the lowest latency
- Deep buffers at the edges to absorb traffic burstiness
- Port density and multiple speed options
- Support for Software Defined Networking
- Custom programming and integration with northbound applications
- Support for VXLAN and BGP EVPN
- Vendor (as partner) support for faster deployment

4 Arista Hardware in use at USC
Leafs:
- 7060CX-32S
- 7060CX2-32S
- 7280R / 7280R2
- 7050 (copper/fiber)

Spine:
- 7504R
- 7280QR-C72 (fixed 2U)

Port buffers:
- 7060CX-32S = 16 MB
- 7060CX2-32S = 22 MB
- 7280R = 4 GB
- 7280QR-C72 = 16 GB

5 Some Interesting Features in Arista EOS

6 Interesting Features in EOS
- Quad-core CPU helps faster route convergence (up to 2M routes in select R2 models)
- Single operating system across all platforms
- Linux shell to write code in your favorite language
- Multi-Chassis LAG (MLAG): provides active-active LAG; easy to configure, operate, and troubleshoot
- Virtual ARP (VARP): anycast gateway, supporting redundant uplink paths
- CloudVision: supports northbound control applications and seamless integration with hardware (imagine Science DMZ everywhere); Zero Touch Provisioning (ZTP); Smart System Upgrade (SSU). A minimal programmatic sketch follows this list.
- Latency Analyzer (LANZ): real-time network performance and congestion monitoring, exportable to different formats, including via Google protocol buffers
- Data Analyzer (DANZ): tap aggregation, security
- OpenFlow support in select models (e.g. 7060)
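This kind of northbound integration is most commonly exercised through eAPI, the JSON-RPC interface shown later in the architecture slide. A minimal sketch, assuming eAPI has been enabled on the switch ("management api http-commands") and using a placeholder hostname and credentials:

```python
# Minimal eAPI sketch: run CLI commands over JSON-RPC and get
# structured JSON back. Assumes eAPI is enabled on the switch with
# "management api http-commands" and that jsonrpclib is installed.
from jsonrpclib import Server

# Placeholder URL and credentials -- substitute your own switch.
switch = Server("https://admin:password@leaf1.example.edu/command-api")

# runCmds(api_version, [commands]) returns one result dict per command.
result = switch.runCmds(1, ["show version"])
print(result[0]["modelName"], result[0]["version"])
```

The same pattern drives configuration changes, which is what makes integration with northbound applications straightforward.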

7 Diagnostics Tools available in EOS
- BASH: standard Linux shell. Run your own BGP daemon, your own extended CLI, or your own monitoring.
- Tcpdump: all hardware/logical switch interfaces are visible in the Linux shell, and standard tcpdump options work (a short on-switch sketch follows this list).
- Advanced Event Monitor (AEM): query the SQLite event database with standard SQL SELECT statements.
- Tracing: identify issues from logs saved by processes.
- Queue-length (LANZ): provides details on when microbursts happened.
- Port mirroring: wire-speed network analysis.
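As an illustration of the Bash and tcpdump points, a hedged on-switch sketch; the kernel interface name (et1 for Ethernet1) and the packet count are assumptions chosen for the example:

```python
#!/usr/bin/env python
# Sketch of an on-switch packet capture run from the EOS Bash shell,
# where front-panel ports appear as Linux kernel interfaces
# (e.g. Ethernet1 -> et1). Interface and count are placeholders.
import subprocess

subprocess.check_call([
    "tcpdump", "-i", "et1",              # capture on Ethernet1
    "-c", "100",                         # stop after 100 packets
    "-w", "/mnt/flash/et1-sample.pcap",  # save to flash for Wireshark
])
```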

8 Convergence - How fast do we converge when there is a failure?
[Diagram: Extensible Operating System (EOS) architecture. A central SysDB state repository sits on the Linux kernel; agents (BGP, OSPF, ISIS, RIP, ribd, forwarding agents, STP, IGMP, LLDP, PIM, BFD, EBRA, VXLAN, LAG) and external interfaces (CLI, eAPI, OpenFlow, third-party agents, Chef, Puppet, Splunk, ExaBGP, Linux tools) all share state through SysDB. Hardware tables: TCAM, LPM, exact match, FEC.]

Design questions this architecture addresses:
- Convergence: how fast do we converge when there is a failure? EBGP and ECMP improvements include link-down handling, in-place adjacency updates, and BGP-specific optimizations.
- Programmability: how programmable is the switch, and do you need license keys to access the kernel or SHIM?
- Features: can we introduce new features quickly without introducing new bugs?
- Bugs and security patches: can we patch the switch without bringing down the forwarding plane?
- Hardware table management: how do we intelligently manage merchant silicon rather than relying on its SDK? For example, the Arad chip has a 256K exact-match table; moving host routes there helps scale host routes and increases space for LPM routes.
- Scale: do we rely on standard merchant-silicon APIs, or can we scale better in hardware?

Note on improvements: relative to pre-L3-scale software; tested in POC environments (Facebook, eBay, Citi).

9 VXLAN Bridging – STP Domain Isolation
- STP BPDUs are not transported across the VXLAN tunnel, creating separate STP domains within the local ports of each VTEP
- Interconnects dispersed subnets with Layer 3 to 7 services
- Provides a logical multi-tiered network regardless of physical location

[Diagram: Serv-1 and Serv-2 attach on 802.1Q VLAN 10 (Eth-1) to Leaf-1 and Leaf-2. Each leaf is a VTEP mapping VLAN 10 <-> VNI 1010 across an L3 backbone (Eth-49), with an independent root bridge and spanning tree domain (Domain 1, Domain 2) on each side of the Layer 2 domain. A configuration sketch for this mapping follows.]
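A hedged sketch of the VLAN 10 <-> VNI 1010 mapping from the diagram, pushed to one leaf over eAPI; the hostname, credentials, and the use of Loopback0 as the VTEP source are assumptions:

```python
from jsonrpclib import Server

# Placeholder switch and credentials.
leaf1 = Server("https://admin:password@leaf1.example.edu/command-api")

# Map local VLAN 10 into VNI 1010, sourcing the tunnel from Loopback0.
# Because BPDUs are not carried over the tunnel, each VTEP keeps its
# own independent spanning tree domain, as described above.
leaf1.runCmds(1, [
    "enable",
    "configure",
    "interface Vxlan1",
    "vxlan source-interface Loopback0",
    "vxlan udp-port 4789",
    "vxlan vlan 10 vni 1010",
])
```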

10 VXLAN Control Plane Options
The VXLAN control plane is used for MAC learning and packet flooding:
- A mechanism to discover hosts residing behind remote VTEPs
- How to discover VTEPs and their VNI membership
- The mechanism used to forward broadcast and multicast traffic within the Layer 2 segment (VNI)

1. IP Multicast Control Plane
- VTEPs join the associated IP multicast group(s) for their VNI(s)
- Unknown unicasts are forwarded to VTEPs in the VNI via IP multicast
- Supports third-party VTEPs
- Flood-and-learn; requires IP multicast support, so deployments are limited

2. Head-End Replication (HER)
- BUM traffic is replicated to each remote VTEP in the VNI; replication is carried out on the ingress VTEP
- Supports third-party VTEPs
- MAC learning is still flood-and-learn, but with no requirement for IP multicast (a flood-list sketch follows this list)

3. HER with CloudVision eXchange (CVX)
- Locally learnt MACs and VNI bindings are published to CVX, which dynamically distributes state to remote VTEPs
- Supports third-party VTEPs
- Dynamic MAC distribution, automated flood-list provisioning, HA cluster support for resiliency

4. EVPN Model
- BGP distributes local MAC-to-IP bindings between VTEPs
- Broadcast traffic is handled via the IP multicast or HER models
- Dynamic MAC distribution and VNI learning; configuration can be BGP-intensive
- Supports third-party VTEPs
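For the head-end replication option, a hedged sketch of a static flood list for the same VLAN/VNI as before; the peer VTEP addresses are placeholders:

```python
from jsonrpclib import Server

# Placeholder switch and credentials.
leaf1 = Server("https://admin:password@leaf1.example.edu/command-api")

# Static HER: the ingress VTEP replicates BUM traffic for VLAN 10 /
# VNI 1010 to each VTEP in the flood list -- no IP multicast needed.
leaf1.runCmds(1, [
    "enable",
    "configure",
    "interface Vxlan1",
    "vxlan vlan 10 flood vtep 10.10.10.2 10.10.10.3",
])
```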

11 ITS Transformation Project

12 Traditional Four Corners at USC
- Four key designated buildings across the University Park Campus (UPC) were chosen for redundancy and to localize network connections
- Initially built with Cabletron / Enterasys / Extreme switching equipment (now end-of-sale / end-of-life)
- Blade CPU utilization runs high even for unicast traffic (at ~700 Mbps)
- The same VLANs are everywhere
- Frequent spanning tree events are hard to troubleshoot

13 Definitions: Spine, Leaf, Spline
Leaf:
- Provides connectivity to edge servers
- Connects to all the spines
- Requires deep buffers to absorb bursts

Spine:
- Shortest path for east-west traffic (usually Leaf => Spine => Leaf)
- Lowest possible latency for port-to-port traffic (all ports operate at the same speed)

Spline:
- Collapsed spine-and-leaf single-tier architecture in the form of a larger, dense chassis

L3 ECMP leaf-and-spine everywhere.

14 Data Center Spine Leaf Architecture
Multiple leaf switch pairs are provisioned and connected to the spine, providing different levels of service to end devices. Leafs provide the L2/L3 boundary; VXLAN is used to extend VLANs across the L3 network. The four main categories of leaf switch pairs are:
- Production
- Development
- Services
- Utilities and Management

15 Colocation Spine Leaf Architecture
- Multi-tenant environment serving 40 to 45 customers
- Colo customers are provided a firewall, or they deploy their own
- Main goals: a secure and extremely reliable environment, with redundant downlinks, leafs, spines, and firewalls
- Connections start at a minimum of dual 10GbE links and can go up to 100GbE

16 BGP across the Spines, Spine and Leafs
Extending VLANs across 250 buildings while maintaining spanning tree is becoming operationally impossible. We are therefore creating BGP domains:
- Each spine is in its own AS number
- All the leafs are in their own domains, and leafs talk with their respective spines
- Data center leaf pairs peer with the campus spine
- Co-location leaf pairs peer with the campus spine

This is scalable, easy to maintain, segments policies, and eases operational troubleshooting. A hedged leaf-side sketch follows.
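A minimal leaf-side sketch of this design over eAPI; the AS numbers, router ID, neighbor addresses, and advertised subnet are all placeholders chosen to mirror the description (each spine in its own AS, the leaf pair in theirs, ECMP across both spines):

```python
from jsonrpclib import Server

# Placeholder switch and credentials.
leaf = Server("https://admin:password@dc-leaf1.example.edu/command-api")

# Leaf pair in AS 65101 peering with two spines, each in its own AS.
# maximum-paths enables ECMP so both spine uplinks carry traffic.
leaf.runCmds(1, [
    "enable",
    "configure",
    "router bgp 65101",
    "router-id 10.0.0.11",
    "maximum-paths 2 ecmp 2",
    "neighbor 10.1.1.0 remote-as 65001",  # spine 1
    "neighbor 10.1.2.0 remote-as 65002",  # spine 2
    "network 10.100.10.0/24",             # advertise local subnet
])
```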

17 Village is a Green Model for the rest of Campus
- Layer 2 boundary at the building; Layer 3 core (BGP)
- Routing via BGP
- VLANs from one building, or from the other campus, are not allowed to land in the Village

18 Village Uplinks to Campus Core
Village uplinks from the leaf pair to the campus spine pair are dual 40 Gbps links in an active-active configuration. One possible realization is sketched below.
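One way to realize dual active-active 40G uplinks is an LACP port-channel from the village leaf toward the spine pair; the interface numbers are placeholders, and the actual design may instead use routed ECMP links:

```python
from jsonrpclib import Server

# Placeholder switch and credentials.
leaf = Server("https://admin:password@village-leaf1.example.edu/command-api")

# Bundle two 40G uplinks into one LACP port-channel so both links
# forward traffic simultaneously (active-active).
leaf.runCmds(1, [
    "enable",
    "configure",
    "interface Ethernet49/1",
    "channel-group 100 mode active",
    "interface Ethernet50/1",
    "channel-group 100 mode active",
    "interface Port-Channel100",
    "description Uplink to campus spine pair",
])
```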

19 Network Infrastructure (USC Village, green zone)
- Arista 7280R (switch/router) is used as the Village leaf / border
- HPE switches are used as the edge switches, providing Gigabit ports for end users
- HPE switch models are 3810M, 5406 zl2, or 5412 zl2, based on the port requirements in each building
- Wired cabling is all Cat6a
- Aruba 802.11ac wireless everywhere
- Dynamic port assignment based on user credentials (certificates, 802.1X, MAC auth)

20 Conclusion
- Our ITS transformation project is steady on its path to uplift the campus infrastructure.
- With the help of Arista and their state-of-the-art equipment, we are actively deploying a highly flexible, future-proof infrastructure that accommodates significant increases in capacity while reducing cost.
- With software-defined technologies from leaders such as OpenStack and VMware, we hope to deploy an agile, converged infrastructure that responds to business dynamics in a multi-tenant services area.
- We hope to soon deploy full-scale intelligent monitoring and telemetry across the entire infrastructure, for efficient fault isolation and faster response to users.

21 Thank you

