A Scalable, Commodity Data Center Network Architecture
Mohammad Al-Fares, Alexander Loukissas, Amin Vahdat
Presented by Gregory Peaker and Tyler Maclean

Overview
– Structure and properties of a data center
– Desired properties in a DC architecture
– Fat-tree based solution
– Evaluation of the fat-tree approach

Common data center topology

Problems with the common DC topology
– Single point of failure
– Oversubscription of links higher up in the topology
  – Typical oversubscription is between 2.5:1 and 8:1; at 5:1, for example, hosts can use only 20% of their access-link bandwidth when all transmit at once (sketch below)
  – Trade-off between cost and provisioning
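The oversubscription ratio at a tier is simply the host-facing bandwidth below it divided by the uplink bandwidth above it. A minimal sketch of that arithmetic, using made-up illustrative port counts rather than figures from the paper:

```python
# Oversubscription ratio = host-facing bandwidth / uplink bandwidth.
# Illustrative numbers only: an edge switch with 48 x 1 GigE host ports
# and 2 x 10 GigE uplinks.
host_ports, host_speed_gbps = 48, 1
uplinks, uplink_speed_gbps = 2, 10

ratio = (host_ports * host_speed_gbps) / (uplinks * uplink_speed_gbps)
print(f"Oversubscription: {ratio:.1f}:1")  # 2.4:1 -> each host sees ~1/2.4 of line rate under full load
```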

Properties of a solution
– Compatible with Ethernet and TCP/IP
– Cost effective
  – Low power consumption & heat emission
  – Cheap infrastructure built from commodity hardware
– Allows host communication at line speed
  – Oversubscription of 1:1

Cost of maintaining switches

Review of Layer 2 & Layer 3
– Layer 2 – data link layer (Ethernet, MAC addresses)
  – One spanning tree for the entire network
  – Prevents looping, but ignores alternate paths
– Layer 3 – network layer (IP)
  – Shortest-path routing between source and destination
  – Best-effort delivery

Fat-tree based solution
– Connect end hosts together using a fat-tree topology
– Infrastructure consists of cheap devices; every port is the same speed
– All devices can transmit at line speed if packets are distributed along existing paths
– A fat tree built from k-port switches supports k³/4 hosts (counting sketch below)
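The k³/4 figure falls out of the fat-tree construction in the paper: k pods, each with k/2 edge and k/2 aggregation switches, (k/2)² core switches, and k/2 hosts per edge switch. A minimal sketch of the counting (the helper name fat_tree_sizes is illustrative, not from the paper's code):

```python
# Counts for a fat tree built from identical k-port switches (k must be even).
def fat_tree_sizes(k: int) -> dict:
    assert k % 2 == 0, "k must be even"
    edge_per_pod = agg_per_pod = k // 2          # k/2 edge + k/2 aggregation switches per pod
    hosts_per_edge = k // 2                      # each edge switch serves k/2 hosts
    core = (k // 2) ** 2                         # (k/2)^2 core switches
    return {
        "pods": k,
        "edge_switches": k * edge_per_pod,
        "aggregation_switches": k * agg_per_pod,
        "core_switches": core,
        "hosts": k * edge_per_pod * hosts_per_edge,   # = k^3 / 4
    }

print(fat_tree_sizes(48))   # 48-port switches -> 27,648 hosts, as in the paper
```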

Fat-Tree Topology

Problems with a vanilla fat tree
– Layer 3 routing will use only one of the existing equal-cost paths
– Packet reordering occurs if layer 3 blindly takes advantage of path diversity
  – Creates overhead at the host, as TCP must reorder the packets

Modified fat tree
– Enforce a special addressing scheme in the DC (sketch below)
  – Allows hosts attached to the same switch to route only through that switch
  – Allows intra-pod traffic to stay within the pod
  – Addresses take the form unused.PodNumber.SwitchNumber.EndHost
– Use two-level lookups to distribute traffic and maintain packet ordering
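A minimal sketch of the addressing convention, assuming the paper's use of the private 10.0.0.0/8 block: pod switches get 10.pod.switch.1, core switches 10.k.j.i, and hosts 10.pod.switch.ID with ID in [2, k/2 + 1] (the function names here are illustrative):

```python
# Sketch of the fat-tree addressing convention ("unused.pod.switch.host" within 10.0.0.0/8).
def pod_switch_addr(pod: int, switch: int) -> str:
    # Pod switches: 10.pod.switch.1 (switches numbered left-to-right, bottom-to-top)
    return f"10.{pod}.{switch}.1"

def core_switch_addr(k: int, j: int, i: int) -> str:
    # Core switches: 10.k.j.i, where (j, i) is the switch's position in the core grid
    return f"10.{k}.{j}.{i}"

def host_addr(pod: int, edge_switch: int, host_id: int) -> str:
    # Hosts: 10.pod.switch.ID, with ID in [2, k/2 + 1]
    return f"10.{pod}.{edge_switch}.{host_id}"

# Example: the third host on edge switch 1 in pod 4 (IDs start at 2)
print(host_addr(pod=4, edge_switch=1, host_id=4))   # "10.4.1.4"
```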

Two-level lookups
– First level is a prefix lookup
  – Used to route down the topology to the end host
– Second level is a suffix lookup (sketch below)
  – Used to route up towards the core
  – Diffuses and spreads out traffic
  – Maintains packet ordering by using the same port for the same end host
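A minimal sketch of such a two-level table for a pod switch (class and field names are illustrative, not taken from the paper's implementation): terminating /24 prefixes route down to subnets in the same pod, and destinations that miss every prefix fall through to a suffix match on the host-ID byte, which picks the uplink:

```python
import ipaddress

# Illustrative two-level routing table for a pod switch.
class TwoLevelTable:
    def __init__(self):
        self.prefixes = []   # (network, output port), longest prefix first
        self.suffixes = {}   # host-ID byte -> uplink port

    def add_prefix(self, cidr: str, port: int) -> None:
        self.prefixes.append((ipaddress.ip_network(cidr), port))
        self.prefixes.sort(key=lambda entry: entry[0].prefixlen, reverse=True)

    def add_suffix(self, host_id: int, port: int) -> None:
        self.suffixes[host_id] = port

    def lookup(self, dst: str) -> int:
        addr = ipaddress.ip_address(dst)
        for network, port in self.prefixes:
            if addr in network:
                return port                      # terminating prefix: route down
        return self.suffixes[addr.packed[-1]]    # suffix match on the host-ID byte: route up

# Example: an aggregation switch in pod 2 of a k=4 fat tree
table = TwoLevelTable()
table.add_prefix("10.2.0.0/24", port=0)   # down to edge switch 0
table.add_prefix("10.2.1.0/24", port=1)   # down to edge switch 1
table.add_suffix(2, port=2)               # destinations ending in .2 go up via port 2
table.add_suffix(3, port=3)               # destinations ending in .3 go up via port 3
print(table.lookup("10.2.1.3"))           # 1  (intra-pod: routed down)
print(table.lookup("10.0.1.2"))           # 2  (inter-pod: routed up via suffix)
```

Because the suffix entry depends only on the destination host ID, all packets to a given end host leave on the same uplink, which preserves packet ordering while still spreading different destinations across the uplinks.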

Diffusion optimizations
– Flow classification (sketch below)
  – Eliminates local congestion
  – Assigns traffic to ports on a per-flow basis instead of a per-host basis
– Flow scheduling
  – Eliminates global congestion
  – Prevents long-lived flows from sharing the same links; assigns long-lived flows to different links
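A minimal sketch of flow classification (illustrative names, and simplified relative to the paper's dynamic port reassignment): packets of the same flow always map to the same uplink, so they are not reordered, while distinct flows spread across the uplinks:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str
    dst: str
    src_port: int
    dst_port: int

class FlowClassifier:
    def __init__(self, uplink_ports):
        self.uplink_ports = list(uplink_ports)
        self.assignment = {}              # Flow -> uplink port

    def uplink_for(self, flow: Flow) -> int:
        if flow not in self.assignment:
            # Hash-based initial placement; a fuller classifier would also
            # periodically move flows away from overloaded uplinks.
            idx = hash(flow) % len(self.uplink_ports)
            self.assignment[flow] = self.uplink_ports[idx]
        return self.assignment[flow]

classifier = FlowClassifier(uplink_ports=[2, 3])
f = Flow("10.0.1.2", "10.2.1.3", 40000, 80)
print(classifier.uplink_for(f))   # the same flow always maps to the same uplink
```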

Results: Heat & Power Consumption

Implementation
– NetFPGA: 4 Gigabit Ethernet ports, 36 Mb SRAM, 64 MB DDR2, 3 Gb/s SATA port
– Elements implemented in the Click modular router software:
  – Two-level table: initialized with preconfigured routing information
  – Flow classifier: distributes output evenly across local ports
  – Flow reporter + flow scheduler: communicate with a central scheduler

Evaluation
– Purpose: measure bisection bandwidth
– Fat tree: 10 machines connected to a 48-port switch
– Hierarchical: 8 machines connected to a 48-port switch

Results

Related work
– Myrinet: popular for cluster-based supercomputers
  – Benefit: low latency
  – Cost: proprietary; the host is responsible for load balancing
– InfiniBand: used in high-performance computing environments
  – Benefit: proven to scale, high bandwidth
  – Cost: imposes its own layer 1–4 protocols
– Fat trees elsewhere: many massively parallel computers, such as Thinking Machines and SGI systems, use fat trees

Conclusion
– The good: cost per gigabit and energy per gigabit are going down
– The bad: data centers are growing faster than commodity Ethernet devices
– Our fat-tree solution
  – Is better: a 27k-node cluster built with 10 GigE is technically infeasible and would cost roughly $690M; we achieve it with commodity switches
  – Is faster: equal or better bandwidth in tests, with increased fault tolerance
  – Is cheaper: 20k hosts cost $37M for the hierarchical design vs. $8.67M for the fat tree (1 GigE)
  – KOs the competing data center designs