Load Balancing for Parallel Forwarding W. Shi, M.H. MacGregor, P. Gburzynski Department of Computing Science University of Alberta
Outline
- Introduction
- Parallel Forwarding
- Common Scheduling Schemes
- Previous Work
  - Highest Random Weight and Robust Hashing
  - Adaptive Load Sharing
- Hashing Cannot Balance Workload
- Scheduling Potent Flows
Outline (cont'd)
- Scheduling TCP Bursts
  - TCP's Bursty Transmit Pattern
  - Avoiding Packet Reordering
  - Load Balancer Design
- Simulation and Results
- Conclusions and Future Directions
Routing Tables are Growing
CIDR & Longest Prefix Match
Parallel Forwarding
Generations of network forwarding systems:
- Software running on a CPU (1980s)
- Special-purpose hardware to offload the CPU; switches replace buses (mid 1990s)
- Decentralized designs with ASICs; one CPU per line card (late 1990s)
- Network processors (NPs, or forwarding engines, FEs) [2,3]
  - Flexible enough to accommodate changes and updates; fast to market; lower cost
  - Optimized for key forwarding functions and high-speed I/O
  - Parallel forwarding for performance and scalability
A Parallel Forwarding Model
Goals for the Scheduler
- Distribute load evenly to achieve the highest possible throughput
- Preserve packet ordering within flows
- Maximize the cache hit rate to obtain the best processor performance
Common Scheduling Schemes
- Packet-level: round robin, least loaded, etc.
  - Distribute load evenly
  - Do not preserve packet order
  - Disperse flows and thus reduce the cache hit ratio
- Flow-level: hashing (e.g., XOR, CRC)
  - Hashes on the flow ID, typically the five-tuple {source IP address, destination IP address, source port, destination port, protocol ID}
  - Preserves in-flow packet ordering
  - Increases the temporal locality seen by each FE
  - May not distribute load evenly
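As an illustration of the flow-level approach, here is a minimal sketch in Python; CRC32 stands in for whatever hash function a real forwarding system would use, and the function name and parameters are illustrative, not from the paper:

```python
import zlib

def flow_hash(src_ip: str, dst_ip: str, src_port: int,
              dst_port: int, proto: int, n_fes: int) -> int:
    """Map a packet's five-tuple flow ID to one of n_fes FEs.

    Every packet of a flow carries the same five-tuple, so all of its
    packets land on the same FE and in-flow ordering is preserved.
    """
    key = f"{src_ip},{dst_ip},{src_port},{dst_port},{proto}".encode()
    return zlib.crc32(key) % n_fes
```

The trade-off is visible immediately: packets of one flow always pick the same FE, but a heavy flow pins its entire load onto that single FE.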
Previous Work
HRW: Highest Random Weight [4]
- N Web caches serve requests at a Web site
- obj: the name of the object in a request
- sid: the cache server identifier
- Hash on {obj, sid} and use the server whose sid gives the largest hash value, i.e., the highest weight
- HRW can achieve long-term load balancing
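A compact sketch of the HRW selection rule, with MD5 as a stand-in hash (the original scheme does not prescribe this particular hash; function names are illustrative):

```python
import hashlib

def hrw_pick(obj: str, servers: list[str]) -> str:
    """Highest Random Weight: hash {obj, sid} for every server and
    return the server whose hash value (weight) is largest."""
    def weight(sid: str) -> int:
        digest = hashlib.md5(f"{obj}:{sid}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(servers, key=weight)
```

A useful side effect of picking the maximum over per-server hashes: removing a server only remaps the objects that were assigned to it, since the relative order of the surviving servers' weights is unchanged.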
Robust Hash Routing [5]
- Goal: extend HRW to a system of heterogeneous servers
- Assign each cache server a multiplier that scales its hash return value, then choose the server with the highest weight
- Application: the Cache Array Routing Protocol (CARP)
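The weighting idea can be sketched as below. Note this is a simplification: CARP derives its multipliers (load factors) from the servers' relative capacities with a more involved formula, whereas this sketch just scales a normalized hash linearly:

```python
import hashlib

def weighted_pick(obj: str, multipliers: dict[str, float]) -> str:
    """Robust hash routing sketch: scale each server's hash return
    value by that server's multiplier, then take the highest weight.
    Servers with larger multipliers win more often, so they receive
    a larger share of the objects."""
    def score(sid: str) -> float:
        digest = hashlib.md5(f"{obj}:{sid}".encode()).digest()
        h = int.from_bytes(digest[:8], "big") / 2.0**64  # hash in [0, 1)
        return multipliers[sid] * h
    return max(multipliers, key=score)
```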
Adaptive Load Sharing [6]
- Dynamically adjust the multipliers based on the recent system load distribution
- Application: multi-NP forwarding systems
IP Destination Address Popularity
Zipf-like Function Fits Popularity Data
Popularity Characterization [7]
Zipf-like distribution:

P(R) ∝ 1/R^α    (1)

The log-log slopes (i.e., −α) fitted for the five traces, SDSC, FUNET, UofA, IPLS, and Auck4, are −0.905, −0.929, −1.04, −1.21, and −1.66, respectively. For slopes steeper than −1 (α > 1), hashing cannot balance the traffic, not even in the long term [8].
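A small deterministic simulation (parameters made up for illustration) shows why a heavy Zipf tail defeats any scheme that merely spreads flows evenly: the rank-1 flow alone can exceed an FE's fair share.

```python
def zipf_load(n_flows: int, alpha: float, n_fes: int) -> list[float]:
    """Assign flows with Zipf-like sizes (size of rank R is 1/R**alpha,
    as in Eq. 1) to FEs round-robin by rank -- a best case for spreading
    flow *counts* evenly -- and return the resulting load per FE."""
    load = [0.0] * n_fes
    for rank in range(1, n_flows + 1):
        load[(rank - 1) % n_fes] += 1.0 / rank ** alpha
    return load

load = zipf_load(10_000, 1.2, 8)
```

Even though every FE receives the same number of flows, the FE that holds the rank-1 flow carries well above the average load; no assignment of whole flows can fix this when one flow exceeds 1/N of the traffic.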
A Few Potent Flows

Rank  FUNET  UofA     Auck4    SDSC       IPLS
1     8,233  158,707  640,730  1,183,834  2,788,273
2     7,424  24,245   440,149  581,495    944,253
3     2,971  20,769   196,513  524,542    919,088
4     2,470  17,482   194,757  235,363    808,773
5     2,298  15,146   186,095  212,150    732,339
6     1,614  14,305   177,388  168,384    582,367
7     1,387  13,308   135,286  160,798    570,316
8     1,317  12,348   135,033  138,657    510,043
9     1,309  12,028   132,812  125,531    473,562
10    1,258  11,824   104,716  125,389    470,072
Scheduling Potent Flows [8,9,10]
Flowlets for Traffic Engineering [11] Based on similar observations of the bursty nature of TCP traffic Developed independently in a different context Differs in many details and implications
Scheduling TCP Bursts
Ideal TCP Transmission
Actual (bursty) TCP transmission
Partially-filled Windows Result in Bursts
Eliminating Packet Reordering
Notation:
- N: the number of FEs
- P_i, P_j (j = i + 1): two adjacent packets in a flow
- t_i, t_j: the arrival times of the two packets
- T_i = t_j − t_i
- L: the buffer size of each FE
- ρ: the overall system utilization
- L_i, L_j: the numbers of packets preceding P_i and P_j in their respective queues
Eliminating Packet Reordering
A sufficient (but not necessary) condition for P_j to leave its FE after P_i is

L_i − T_i · λ / (ρ · N) < L_j

where λ is the bandwidth of the interface, in packets per second. Equivalently,

T_i > (L_i − L_j) · ρ · N / λ.
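The sufficient condition can be written as a small predicate (a sketch; the parameter names mirror the notation above, and the example numbers in the comments are illustrative):

```python
def cannot_reorder(L_i: int, L_j: int, T_i: float,
                   rho: float, n_fes: int, lam: float) -> bool:
    """True if the sufficient condition T_i > (L_i - L_j) * rho * N / lam
    holds, i.e., P_j is guaranteed to leave its FE after P_i.
    lam is the interface bandwidth in packets per second."""
    return T_i > (L_i - L_j) * rho * n_fes / lam
```

For example, with a 1.25 Mpps interface, 8 FEs, and full utilization, a 1 ms inter-arrival gap is safe when P_i has 100 packets ahead of it, but not when it has 1000.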
Eliminating Packet Reordering
A conservative choice, taking L_i ≤ L and L_j ≥ 0 in the condition above, is

T_i > L · ρ · N / λ = L / μ_i    (2)

where μ_i = λ / (ρ · N) is the effective service rate of one FE. When P_j arrives after this interval, P_i has already been processed, so reordering is not possible.
Scheduling Bursts [11, 14]
Scheduling Bursts Example
- 10 Gbps = 1.25 GB/s ≈ 1.25 Mpps, assuming an average packet size of 1000 bytes [12]
- With 8 FEs, μ_i = 156,250 pps
- With L = 1000 packets, T_i > 6.4 ms, which is much less than the minimum RTT observed in [13]
Even better news:
- L could be reduced further, relaxing the condition on T_i
- The scheme scales, and parallelization helps
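The arithmetic in this example can be checked directly (a sketch assuming ρ = 1; the function name is illustrative):

```python
def conservative_spacing(L: int, rho: float, n_fes: int, lam: float) -> float:
    """Conservative inter-arrival bound of Eq. 2: T_i > L * rho * N / lam,
    i.e., L / mu_i, with mu_i the per-FE service rate."""
    return L * rho * n_fes / lam

# 10 Gbps at an average packet size of 1000 bytes ~= 1.25 Mpps
t_min = conservative_spacing(1000, 1.0, 8, 1.25e6)  # 0.0064 s = 6.4 ms
```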
% Packets Dropped
% Packets Reordered
Conclusions
- Static hashing-only schemes cannot balance the workload
- Burst scheduling is relatively simple
- Burst scheduling distributes load evenly
- Reorder rates below 0.1% can be attained
- Larger buffers are not necessarily better
Future Work
- Improve the burst-scheduling triggering policy for better temporal locality; a better scheme than the one mandated by Eq. 2 is surely possible
- Schedule both potent flows and bursts within flows
References
[1] Geoff Huston, The BGP routing table, The Internet Protocol Journal (Cisco), vol. 4, no. 1, 2001.
[2] Niraj Shah, Understanding network processors, M.S. thesis, U.C. Berkeley, September 2001.
[3] Douglas Comer, Network processors: Programmable technology for building network systems, The Internet Protocol Journal (Cisco), vol. 7, no. 4, pp. 3--12, 2004.
[4] David G. Thaler and Chinya V. Ravishankar, Using name-based mappings to increase hit rates, IEEE/ACM Transactions on Networking, vol. 6, no. 1, pp. 1--14, February 1998.
[5] Keith W. Ross, Hash routing for collections of shared Web caches, IEEE Network, vol. 11, no. 7, pp. 37--44, Nov.-Dec. 1997.
[6] Lukas Kencl and Jean-Yves Le Boudec, Adaptive load sharing for network processors, in IEEE INFOCOM 2002, New York, NY, USA, June 2002, pp. 545--554.
[7] George K. Zipf, Human Behavior and the Principle of Least Effort, Addison-Wesley, Cambridge, MA, 1949.
[8] Weiguang Shi, Mike H. MacGregor, and Pawel Gburzynski, Load balancing for parallel forwarding, IEEE/ACM Transactions on Networking, vol. 13, no. 4, 2005.
References (cont'd)
[9] Ju-Yeon Jo, Yoohwan Kim, H. Jonathan Chao, and Frank Merat, Internet traffic load balancing using dynamic hashing with flow volumes, in Internet Performance and Control of Network Systems III at SPIE ITCOM 2002, Boston, MA, USA, July 2002, pp. 154--165.
[10] Anees Shaikh, Jennifer Rexford, and Kang G. Shin, Load-sensitive routing of long-lived IP flows, ACM SIGCOMM Computer Communication Review, vol. 29, no. 4, pp. 215--226, October 1999.
[11] Shan Sinha, Srikanth Kandula, and Dina Katabi, Harnessing TCP's burstiness with flowlet switching, in 3rd ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets), San Diego, CA, November 2004.
[12] Craig Partridge et al., A 50-Gb/s IP router, IEEE/ACM Transactions on Networking, vol. 6, no. 3, pp. 237--248, 1998.
[13] Jay Aikat, Jasleen Kaur, F. Donelson Smith, and Kevin Jeffay, Variability in TCP round-trip times, in IMC '03, Miami Beach, FL, USA, 2003, pp. 279--284, ACM Press.
[14] Weiguang Shi, Mike H. MacGregor, and Pawel Gburzynski, A scalable load balancer for forwarding internet traffic: Exploiting flow-level burstiness, in Symposium on Architectures for Networking and Communications Systems (ANCS), Princeton, NJ, USA, October 2005.