VL2: A Scalable and Flexible Data Center Network
Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, David A. Maltz, Parveen Patel, Sudipta Sengupta
Microsoft Research
SIGCOMM Comput. Commun. Rev., Vol. 39, No. 4 (2009), pp. 51-62
Outline
–Motivation
–Conventional Data Center Architecture
–Virtual Layer 2 (VL2): Design, Advantages
–Evaluation
–Conclusion
Motivation
The network is a bottleneck to data center computation. Today's data center networks have several issues:
–Oversubscribed tree architecture
–Congestion and computation hot spots
–Lack of traffic isolation between services
–IP configuration complexity
–Difficult server migration
–A tradeoff between reliability and utilization
Conventional Data Center Architecture
Design of VL2
–Location-specific IP addresses (LAs), public: assigned to all switches and interfaces, and used for external reachability
–Application-specific IP addresses (AAs), private: assigned to application servers
–VL2 Directory System: stores the mapping from AAs to LAs and enforces access control
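The directory system's core job, resolving an application address (AA) to the location address (LA) of the switch serving it while enforcing access control, can be sketched as a simple mapping. The class, addresses, and policy below are hypothetical illustrations, not the paper's implementation:

```python
# Hypothetical sketch of the VL2 directory system's AA -> LA resolution.
# Addresses and the access-control policy are made up for illustration.

class DirectoryService:
    def __init__(self):
        # AA (application address) -> LA (location address of the server's ToR)
        self.aa_to_la = {}
        # Simple access control: which source AAs may resolve which destination AAs
        self.allowed = set()

    def register(self, aa, la):
        """Record that the server with application address `aa` lives under `la`."""
        self.aa_to_la[aa] = la

    def permit(self, src_aa, dst_aa):
        self.allowed.add((src_aa, dst_aa))

    def lookup(self, src_aa, dst_aa):
        """Resolve dst_aa to an LA, enforcing access control; None if denied or unknown."""
        if (src_aa, dst_aa) not in self.allowed:
            return None  # access control: mapping withheld
        return self.aa_to_la.get(dst_aa)

directory = DirectoryService()
directory.register("10.0.0.5", "20.0.1.1")   # AA 10.0.0.5 sits under ToR LA 20.0.1.1
directory.permit("10.0.0.9", "10.0.0.5")
print(directory.lookup("10.0.0.9", "10.0.0.5"))  # -> 20.0.1.1
```

Because servers only ever see AAs, a server that is not permitted to reach a destination simply cannot resolve it, which is how the directory doubles as an access-control point.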
Scale-out Topologies
–Aggregation : Intermediate switches = n : m
–ToR : Aggregation switches = 1 : 2
Scale-out Topologies: Benefits
–Risk balancing: the failure of an Intermediate switch reduces the bisection bandwidth by only 1/m
–Routing is extremely simple on this topology: a random path up, then a random path down
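The "random path up, random path down" idea (Valiant load balancing) amounts to hashing each flow to a random Intermediate switch and then routing straight down to the destination's ToR. A toy sketch, with invented switch names:

```python
import random

# Toy Valiant-load-balancing path choice over a Clos-like topology.
# Switch names are invented for illustration.
intermediates = ["int1", "int2", "int3"]

def pick_path(src_tor, dst_tor, flow_id):
    """Hash the flow to one Intermediate switch (mimicking per-flow ECMP) and
    return the up/down path: source ToR -> Intermediate -> destination ToR."""
    rng = random.Random(flow_id)       # per-flow deterministic choice keeps a
    inter = rng.choice(intermediates)  # flow's packets on one path (no reordering)
    return [src_tor, inter, dst_tor]   # random path up, then straight down

print(pick_path("tor1", "tor4", flow_id=42))
```

Spreading flows uniformly over the m Intermediate switches is also what makes the failure of any one of them cost only 1/m of the bisection bandwidth.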
Example
–The encapsulated packet is created by the VL2 agent
–An Intermediate switch is randomly selected by ECMP
VL2 Agent
The VL2 agent's workflow:
1. Intercepts the ARP request for the destination AA
2. Converts the request from step 1 into a unicast query to the VL2 directory system
3. Intercepts packets from the host
4. Encapsulates each packet with the LA address obtained in step 2
5. Caches the AA-to-LA mapping
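The agent's resolve-encapsulate-cache steps can be sketched as follows; treating a packet as a dict is a simplification (a real agent rewrites IP packets in the host's network stack), and all names and addresses here are hypothetical:

```python
# Hypothetical sketch of the VL2 agent's send path: look up the LA for the
# destination AA (using a local cache), then wrap the packet in an outer header.

def vl2_agent_send(packet, directory_lookup, cache):
    """Encapsulate a packet addressed to an AA with the LA of its ToR switch."""
    dst_aa = packet["dst"]
    la = cache.get(dst_aa)
    if la is None:
        la = directory_lookup(dst_aa)   # unicast query to the directory system
        cache[dst_aa] = la              # cache the AA -> LA mapping
    # The outer header carries the LA; the original AA-addressed packet rides as payload.
    return {"outer_dst": la, "payload": packet}

cache = {}
lookup = lambda aa: {"10.0.0.5": "20.0.1.1"}.get(aa)  # stand-in for the directory
enc = vl2_agent_send({"src": "10.0.0.9", "dst": "10.0.0.5"}, lookup, cache)
print(enc["outer_dst"])  # -> 20.0.1.1
```

On subsequent packets to the same AA the cached mapping is used, so the directory system is only consulted once per destination (until the mapping changes).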
Advantages of VL2
–Load balancing: randomization copes with traffic volatility
–Builds on proven networking technology: link-state routing, equal-cost multi-path (ECMP) forwarding, IP anycast, IP multicast
–Simple migration: AAs are static, so only the AA-to-LA mapping needs updating
–Eliminates the ARP and DHCP scaling bottlenecks
Evaluation: Testbed
–80 servers, 5 of them for the directory system
–3 Intermediate switches, each with 24 10 Gbps Ethernet ports (3 used for Aggregation switches)
–3 Aggregation switches, each with 24 10 Gbps Ethernet ports (3 for Intermediate switches, 3 for ToRs)
–4 ToR switches, each with 24 1 Gbps ports
Experiment Objectives
We test the following three objectives:
–Uniform high capacity
–Fairness
–Performance isolation
VL2 Provides Uniform High Capacity
We create an all-to-all data shuffle traffic matrix involving 75 servers:
–Each of the 75 servers delivers 500 MB to each of the other 74 servers: a shuffle of 2.7 TB, memory to memory
–During the run, the sustained utilization of the core links in the Clos network is about 86% (more than 10x what the network in our current data centers can achieve with the same investment)
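The 2.7 TB figure follows directly from the traffic matrix above: 75 senders, each delivering 500 MB to 74 receivers. A quick sanity check:

```python
# Sanity check of the shuffle volume quoted above.
servers = 75
peers = servers - 1          # each server sends to the 74 others
mb_per_flow = 500
total_mb = servers * peers * mb_per_flow
total_tb = total_mb / 1_000_000  # decimal TB
print(total_tb)  # -> 2.775, i.e. the "2.7 TB" quoted above
```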
VL2 Provides Fairness
–In the preceding experiment, we observe the 3 Aggregation switches
–We use Jain's fairness index [15]

[15] R. Jain. The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling. John Wiley & Sons, Inc., 1991.
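Jain's fairness index for throughputs x_1, ..., x_n is (Σx)² / (n·Σx²): it equals 1 when all flows receive equal throughput and falls toward 1/n as one flow captures everything. A minimal implementation:

```python
def jain_index(xs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2)."""
    n = len(xs)
    s = sum(xs)
    return (s * s) / (n * sum(x * x for x in xs))

print(jain_index([10, 10, 10, 10]))  # -> 1.0 (perfectly fair)
print(jain_index([40, 0, 0, 0]))     # -> 0.25 (one flow gets everything; 1/n with n=4)
```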
VL2 Provides Performance Isolation
Conclusion
VL2 provides:
–A simple abstraction: all servers assigned to a service appear to be plugged into a single Layer 2 switch
–Hotspot-free performance
–A simple design that can be realized today
–High utilization
–High TCP fairness
Q&A