NEUTRON HYBRID MODE. Vinay Bannai, SDN Architect. Nov 8, 2013.


NEUTRON HYBRID MODE
Vinay Bannai, SDN Architect. Nov 8, 2013.

Confidential and Proprietary
PayPal offers flexible and innovative payment solutions for consumers and merchants of all sizes.
−137 million active users
−$300,000 in payments processed by PayPal each minute
−193 markets / 26 currencies
PayPal is the world's most widely used digital wallet.
ABOUT PAYPAL

−Data Center Architecture
−Neutron Basics
−Overlays vs Physical Networks
−Use Cases
−Problem Definition
−Hybrid Solution
−Performance Data
−Analysis
−Q&A
INTRODUCTION

DATA CENTER ARCHITECTURE
[Diagram: Internet feeding core Layer-3 routers, aggregation and access Layer-3 switches, and racks; bisection bandwidth marked between layers]

NEW DATACENTER ARCHITECTURE
[Diagram: the same core/aggregation/access hierarchy, with vswitches and VMs forming a new edge layer below the access switches]

DATACENTER WITH VSWITCHES
[Diagram: access Layer-3 switches connecting racks of hypervisors, each running a vswitch hosting VMs]

NEUTRON BASICS

−Overlays provide connectivity between VMs and network devices using tunnels
−The physical core network does not need to be re-provisioned constantly
−The tunneling encap/decap is done at the edge, in the virtual switch
−Decouples the tenant network address from the physical data center network address
−Easy to support overlapping addresses
−Tunneling techniques in vogue: VXLAN, STT, NVGRE
OVERLAY NETWORKS
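Of the encapsulations listed, VXLAN has the simplest on-the-wire format to illustrate: an 8-byte header carrying a 24-bit VNI that identifies the tenant segment (RFC 7348). A minimal Python sketch of the header the virtual switch adds and strips at the edge:

```python
import struct

VXLAN_FLAGS_VALID_VNI = 0x08  # the "I" bit: VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Flags (1B) + reserved (3B) + VNI (3B) + reserved (1B)
    return struct.pack(">B3s3sB", VXLAN_FLAGS_VALID_VNI,
                       b"\x00" * 3, vni.to_bytes(3, "big"), 0)

def vxlan_vni(header: bytes) -> int:
    """Recover the VNI from an 8-byte VXLAN header."""
    return int.from_bytes(header[4:7], "big")
```

Because the tenant segment lives in the VNI rather than in the IP addressing, two tenants can reuse the same subnet, which is why overlapping addresses are easy here.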

−Physical networks connect VMs and network devices using provider networks
−VMs are first-class citizens with the hypervisor and the networking devices
−No tunneling protocols used
−Tenant separation is achieved by using VLANs or IP subnetting
−Hard to achieve overlapping address spaces
−Underlying network needs to be provisioned with VLANs
PHYSICAL NETWORKS
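For comparison, tenant separation on a physical network rides on the 802.1Q tag: a 4-byte field whose 12-bit VLAN ID caps the number of usable segments at 4094. A small Python sketch of building and reading that tag (VLAN 200 is the bridged VLAN that appears later in the deck):

```python
TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def dot1q_tag(vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID followed by the TCI (PCP | DEI | VLAN ID)."""
    if not 1 <= vid <= 4094:
        raise ValueError("usable VLAN IDs are 1..4094")
    tci = (pcp << 13) | (dei << 12) | vid
    return TPID.to_bytes(2, "big") + tci.to_bytes(2, "big")

def dot1q_vid(tag: bytes) -> int:
    """Recover the VLAN ID from a 4-byte 802.1Q tag."""
    return int.from_bytes(tag[2:4], "big") & 0x0FFF
```

Since the VLAN ID is the only tenant identifier and the IP addressing is shared with the underlay, overlapping tenant subnets are hard to support, matching the trade-off in the slide.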

[Diagram: a tenant on an overlay network reaches other VMs through the network virtualization layer (L2 over L3), while a tenant on a physical network attaches directly at L2]
PHYSICAL VS OVERLAY

PROS & CONS

Function                 | Hypervisor | Bridged VMs (VLAN) | Tunneled VMs
Throughput               | Best       | Better             | Worse
Latency                  | Best       | Better             | Worse
Flexibility              | Worse      | Better             | Best
Overlapping IP addresses | Worse      | Worse              | Best
Operational Dependency   | Worse      | Better             | Best

Production Environment
−Production website across multiple data centers
−Low latency and high throughput
−Bridged Mode
Mergers & Acquisitions / Private Community Cloud
−Needs address isolation and overlapping addresses
−Flexibility, low latency and high throughput
−Overlay Mode
Development & QA Environment
−Production development, QA & staging
−Flexibility, high throughput, but can tolerate higher latency
−Bridged and Overlay Mode
USE CASES

−Support flexibility, low latency, high throughput and overlapping address spaces all at the same time
−Support both bridged and overlay networks
−VMs on a hypervisor should be able to choose networks
−Need a consistent deployment pattern
−Configurable by automation tools (Puppet, Chef, Salt, etc.)
PROBLEM STATEMENT

TYPICAL VSWITCH / HYBRID VSWITCH
[Diagram: VMs Ta, Tb, Tc attach to br-int on the hypervisor; br-tun carries overlay traffic, while br-bond (bonded interface with VLAN 200, management and production IP interfaces) carries bridged traffic]

Create the neutron networks
−Flat network
neutron net-create bridged-flat --provider:network_type=flat --provider:physical_network=
neutron subnet-create --allocation-pool start=10.x.x.100,end=10.x.x.200 bridged-flat --gateway 10.x.x.1 10.x.x.0/23 --name bridged-flat-subnet --enable_dhcp=False
−VLAN network
neutron net-create bridged-vlan --provider:network_type=vlan --provider:physical_network= --provider:segmentation_id=
neutron subnet-create --allocation-pool start=10.x.x.100,end=10.x.x.200 bridged-vlan 10.x.x.0/23 --name bridged-vlan-subnet
CONFIGURATION OF HYBRID MODE

Neutron networks (contd.)
−Overlay network
neutron net-create overlay-net
neutron subnet-create --allocation-pool start=10.x.x.100,end=10.x.x.200 overlay-net --gateway 10.x.x.1 10.x.x.0/23 --name overlay-net-subnet
On the compute node
−Configure the bond
ovs-vsctl add-br br-bond0
−Configure the OVS
ovs-vsctl br-set-external-id br-bond0 bridge-id br-bond0
ovs-vsctl set Bridge br-bond0 fail-mode=standalone
ovs-vsctl add-bond br-bond0 bond0 eth0 eth1
CONTD.
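These compute-node steps lend themselves to the automation tooling the deck calls for. A minimal Python sketch that emits the same ovs-vsctl sequence, assuming the bridge and NIC names from the slide (br-bond0, eth0, eth1) and using add-bond to enslave the two NICs; actually applying it would require root and an Open vSwitch install:

```python
def hybrid_bridge_commands(bridge="br-bond0", bond="bond0", nics=("eth0", "eth1")):
    """Return the ordered ovs-vsctl invocations that set up the hybrid bridge."""
    return [
        ["ovs-vsctl", "add-br", bridge],
        ["ovs-vsctl", "br-set-external-id", bridge, "bridge-id", bridge],
        ["ovs-vsctl", "set", "Bridge", bridge, "fail-mode=standalone"],
        ["ovs-vsctl", "add-bond", bridge, bond] + list(nics),
    ]

# To actually apply them (requires root and OVS):
# import subprocess
# for cmd in hybrid_bridge_commands():
#     subprocess.run(cmd, check=True)
```

Generating the argument vectors separately from executing them makes the same sequence usable from Puppet, Salt, or a plain script, which is the "consistent deployment pattern" the problem statement asks for.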

To measure latency and throughput, we ran the following tests:
Within a rack (L2 switching)
−Bare metal to bare metal
−Bridged VM to bridged VM
−Tunneled VM to tunneled VM
Across racks (L3 switching)
−Bare metal to bare metal
−Bridged VM to bridged VM
−Tunneled VM to tunneled VM
Across the network gateway
−Bare metal to bare metal (outside the cloud)
−Bridged VM to bare metal (outside the cloud)
−Tunneled VM to bare metal (outside the cloud)
PERFORMANCE DATA

Compute hypervisors
−2 sockets, 16 cores/socket @ 2.6 GHz (32 hyper-threaded)
−2 x 10G ports (Intel PCIe)
−RAM: 256GB
−Disk: 4 x 600GB in RAID-10
−RHEL 6.4 running OVS
VM
−vCPUs: 2
−RAM: 8GB
−Disk: 20GB
−RHEL 6.4
HYPERVISOR, VM AND OS DETAILS

TEST SETUP
[Diagram: half racks with two fault zones, each with subnets X.X.X.X/23 and Y.Y.Y.Y/23, plus L3 gateways for the overlays]

−Tunneled VMs use STT (OVS)
−Bridged VMs use a flat network (OVS)
−Used nttcp 1.47 for throughput
−Bi-directional TCP with varying buffer size
−Buffer sizes in bytes: [64, ..., 65536]
−MTU size: 1500 bytes (on both bare metal and VMs)
−Used ping for latency measurement (60 samples)
−Used Python scripts and paramiko to run the tests
−Tests were run alongside other (Dev/QA) traffic: around 470+ active VMs, around 100 hypervisors, multiple half racks
TESTING METHODOLOGY
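Since the harness was Python scripts driving remote hosts over paramiko, the latency step boils down to running ping and parsing its summary line. A hypothetical sketch of that parsing (the paramiko call is shown commented out, as it needs live hosts and credentials; the host name and target address are illustrative):

```python
import re

# Summary line printed by Linux ping, e.g.:
#   rtt min/avg/max/mdev = 0.045/0.061/0.112/0.014 ms
RTT_RE = re.compile(r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms")

def parse_ping_rtt(output: str) -> dict:
    """Return min/avg/max/mdev latency (in ms) from ping's summary output."""
    m = RTT_RE.search(output)
    if m is None:
        raise ValueError("no rtt summary line in ping output")
    return dict(zip(("min", "avg", "max", "mdev"), map(float, m.groups())))

# Over paramiko (hypothetical host and credentials; 60 samples as in the slides):
# import paramiko
# ssh = paramiko.SSHClient()
# ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# ssh.connect("hypervisor-1", username="tester")
# _, stdout, _ = ssh.exec_command("ping -c 60 10.x.x.100")
# print(parse_ping_rtt(stdout.read().decode()))
```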

TEST SETUP FOR SAME RACK

WITHIN A RACK (L2 SWITCHING) THROUGHPUT

WITHIN A RACK (L2 SWITCHING) PING LATENCY

Observations
Results for buffer size < MTU size
−Tunneled VMs tend to have the best overall throughput
−Bridged VMs tend to do better than bare metal
−OVS and tunnel optimizations at play
Results for buffer size > MTU size
−Tunneled VM and bare metal performance is about the same
−Bridged VMs beat both bare metal and tunneled VMs (??)
OVS and tunnel optimizations apply for buffer sizes smaller than the MTU; OVS optimizations apply for buffer sizes greater than the MTU
Tunneled and bridged VMs have slightly higher latency than bare metal
ANALYSIS

TEST SETUP ACROSS RACKS

ACROSS RACKS (L3 SWITCHING) THROUGHPUT

ACROSS RACKS (L3 SWITCHING) PING LATENCY

No bridged VMs in these tests (setup problem)
Results for buffer size < MTU size
−Tunneled VMs tend to have the best overall throughput
−OVS and tunnel optimizations at play
Results for buffer size > MTU size
−Tunneled VM and bare metal performance is about the same
OVS and tunnel optimizations apply for buffer sizes smaller than the MTU
Tunneled and bridged VMs have slightly higher latency than bare metal
ANALYSIS

TEST SETUP ACROSS L3 GATEWAY

ACROSS NETWORK GATEWAY THROUGHPUT

ACROSS NETWORK GATEWAY PING LATENCY

−Tunneled VMs tend to have similar, if not better, throughput than bare metal or bridged VMs
−Tunneled VMs have slightly higher latency
−Bridged VMs tend to have the same overall throughput as the hypervisor
−Bridged VMs tend to have the same latency as the hypervisor
−Latency from a tunneled VM across the L3 gateway is higher than from VMs on the physical network due to extra hops, but the tests need to be re-run
ANALYSIS

−Understand your network requirements: latency, bandwidth/throughput, flexibility
−Overlay vs Physical
−Hybrid Mode
−Performance Analysis
−Make your deployment patterns simple and repeatable
Future work
−Additional performance tests: VXLAN, NVGRE
−Varying MTU size
−Setup without background traffic
Let me know if you are interested in collaborating
CONCLUSION & FUTURE WORK

THANK YOU