
An Efficient Flow Cache Algorithm with Improved Fairness in Software-Defined Data Center Networks. Bu Sung Lee1, Renuga Kanagavelu2 and Khin Mi Mi Aung2.




1 An Efficient Flow Cache Algorithm with Improved Fairness in Software-Defined Data Center Networks. Bu Sung Lee1, Renuga Kanagavelu2 and Khin Mi Mi Aung2. 1Nanyang Technological University, Singapore. 2A*STAR (Agency for Science, Technology and Research), Data Storage Institute, Singapore.

2 Changing scene in DC
Data center size has grown to a scale we never imagined (http://storageservers.wordpress.com/2013/07/17/facts-and-stats-of-worlds-largest-data-centers/). Google: 900,000 servers across 13 data centers. Amazon: 450,000 servers in 7 locations.
Virtualisation.
Changing data center network traffic (North-South to East-West).
Traffic types: mice and elephant flows.

3 Constraints
- OpenFlow switch flow tables can hold only about 1,500 entries.
- It is possible to increase the number of TCAM entries, but TCAM consumes a lot of ASIC space, power and cost.
- Centralized controller.

4 Limitations of 3-tier network architecture
- Redundant paths are not used (due to STP) => total bandwidth reduction issue.
- The forwarding table size increases proportionally with the number of servers => scalability issue.
[Diagram: racks of servers connected through Top-of-Rack, Aggregation and Core switches, with an example forwarding table mapping MAC addresses to interfaces and last-seen times.]

5 Traffic types
Elephant flow: more than 100 KB transferred over the last 5 seconds.

6 Technology used
- Flow cache organised into separate buckets for elephant and mice flows.
- Flow type is determined using a threshold of 100 Mbytes in 5 seconds.
- The VLAN Priority Code Point (PCP) bits are used to indicate the flow type.
- Uses dynamic index hashing.
- Cache replacement strategy: Least Recently Used (LRU).
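The classification and LRU replacement described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the names (`classify`, `LRUBucket`) and the use of Python's `OrderedDict` are assumptions for clarity, and the threshold shown is the 100 KB figure from slide 5.

```python
from collections import OrderedDict

ELEPHANT_BYTES = 100 * 1024  # slide 5 threshold: 100 KB over the last 5 s

def classify(bytes_in_window):
    """Return the 1-bit flow class used as a bucket prefix:
    '1' for an elephant flow, '0' for a mouse flow."""
    return '1' if bytes_in_window >= ELEPHANT_BYTES else '0'

class LRUBucket:
    """One cache bucket with Least Recently Used replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # flow key -> action, oldest first

    def lookup(self, key):
        if key not in self.entries:
            return None                # cache miss
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def insert(self, key, action):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[key] = action
```

Keeping elephants and mice in separate buckets means an elephant flow's long-lived entry cannot be evicted by a burst of short mice flows, which is the fairness property the slides argue for.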

7 Experimental set-up

8 Motivation
Objective: to propose a differential flow cache framework that achieves fairness and efficient cache maintenance, with fast lookup and a reduced cache miss ratio.
Motivation:
- OpenFlow switch flow tables can hold only about 1,500 entries, which is too small compared with the number of flows arriving at the switch.
- It is possible to increase the number of TCAM entries, but TCAM consumes a lot of ASIC space, power (about 15 W per 1 Mbit) and cost (about US$350 for a 1 Mbit chip).
- Centralized controller: reduce the overload with a distributed framework.
The framework uses hash-based placement and a localized Least Recently Used (LRU) replacement mechanism.

9 Architecture

10

11 Dynamic index hashing
1. When a packet arrives, the hash value for the packet is calculated from its flow-identifying fields using the base hash function SHA-1.
2. A 1-bit prefix is added, with bit '0' used for mice flows and bit '1' for elephant flows.
3. Lookup for the matching entry in the chosen bucket is carried out using the full 160-bit hash value.
4. If there is no match in step 3, a new entry is made in the bucket, subject to space availability.
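Steps 1 and 2 above can be sketched as follows. This is an illustrative reading of the slide, assuming the bucket index is taken from the top bits of the SHA-1 digest; the function names and the `index_bits` parameter are assumptions, not from the paper.

```python
import hashlib

def flow_digest(flow_id: bytes) -> int:
    """160-bit SHA-1 digest of the flow-identifying fields, as an integer."""
    return int(hashlib.sha1(flow_id).hexdigest(), 16)

def bucket_index(flow_id: bytes, is_elephant: bool, index_bits: int) -> str:
    """Prefix bit '1' (elephant) or '0' (mouse), then append index_bits
    taken from the top of the 160-bit digest to select a bucket."""
    idx = flow_digest(flow_id) >> (160 - index_bits)
    prefix = '1' if is_elephant else '0'
    return prefix + format(idx, '0{}b'.format(index_bits))
```

Because the class bit sits in front of the hash bits, the same flow always lands in the mouse half or the elephant half of the index space, while the full 160-bit digest is still available for the exact match in step 3.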

12 Bucket expansion
If there is a space constraint, a new bucket is created. If a bucket overflows, its size is doubled and the new index uses one extra bit. As a result, the entries in the original bucket (say, with index 0) are distributed between two buckets (with index 00 and index 01).
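The doubling step resembles a bucket split in extendible hashing, and can be sketched like this. The representation (buckets as a dict keyed by index bit-strings, entries keyed by their full hash bit-string) and the function name `split_bucket` are assumptions for illustration.

```python
def split_bucket(buckets, index):
    """Split an overflowing bucket: entries whose next hash bit is 0 stay
    under index + '0'; the rest move to index + '1'.
    `buckets` maps an index bit-string to a dict of {hash_bits: value},
    where hash_bits is the binary hash string the index was taken from."""
    old = buckets.pop(index)
    depth = len(index)  # hash bits already consumed by the old index
    buckets[index + '0'] = {}
    buckets[index + '1'] = {}
    for hash_bits, value in old.items():
        new_index = index + hash_bits[depth]  # use one extra bit of the hash
        buckets[new_index][hash_bits] = value
    return buckets
```

Only the overflowing bucket is split; all other buckets and their indices are untouched, which keeps the expansion localized.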

13 Performance Evaluation
Comparison of cache hit ratio: our proposed LRU method performs better than OpenFlow's wildcard-aware linear table replacement, random replacement and FIFO replacement.

14 Performance Evaluation

15 Performance Evaluation
Lookup time.

16 Performance Evaluation
At 5000 entries we need to shuffle data between the buckets (bucket size is 1000).

17 Cache architecture
- DDR3 SDRAM (16-bit), memory controller with a 64-bit (8-byte) interface, and a look-aside interface.
- SHA-1 module; operations: lookup, update, drop, add entry; input and output buffers carrying the header, action and SHA value.
- Simulated with ModelSim: up to 7 microseconds for a lookup.
- Bucket size of 4K. The hash index is 32 bits, plus 1 bit for mice/elephant; all other OpenFlow header fields are stored. Total memory is 32 Kbytes.

18 Conclusions
- A simple and effective means to address the overload on the controller.
- Fast lookup.
- Reduced cache miss ratio with LRU.
- We have developed an NVRAM version of the cache for plugging into switches.
- OpenFlow 1.3 has group tables in the TCAM.

19 Future work
- DC VM placement strategy: power-aware, network-aware.
- Resilience.
- Inter-domain OpenFlow.
- Software-defined everything.

