Network Algorithms, Lecture 1: Intro and Principles George Varghese, UCSD.


1 Network Algorithms, Lecture 1: Intro and Principles George Varghese, UCSD

2

3 Course Goal
Network Algorithms: algorithms for the key bottlenecks in a router (lookups, switching, QoS)
How to attack new bottlenecks: a changing world -> measurement, security
Algorithmics: models and principles; systems thinking + hardware + algorithms

4 Different Backgrounds
Balaji: stochastic processes, randomized algorithms, QCN, DCTCP, 2 years at Cisco (Nuova)
Tom: former Cisco VP and Fellow; architect of the Cisco Catalyst and Nexus
Me: algorithmic background; DRR, IP lookups; 1 year at Cisco (NetSift)
All of us worked on router bottlenecks. We want to reconcile our viewpoints. Fun for you!

5 Course Grading
4 Homeworks: 20%
–Posted Wednesday, due Thursday by 2pm
Course Project: 80%
–Teams of 2 students
–Ideally, an original idea related to the course leading to a paper; or a detailed review of paper(s) on a topic
–1 page abstract due by Mon, Apr 18th
–Final project presentation + 5 page report
Office hours:
–Balaji: Tuesday 10-11; George: Wednesday 10-11
TA: Mohammed (introduction)

6 Course Outline
Lectures 1 to 2: principles and models
Lectures 3 to 5: lookups
Lectures 6 to 9: switching
Lectures 10 to 11: QoS
Later lectures: measurement, security, data center switching fabrics, congestion control, etc.

7 Plan for Rest of Lecture
Part 1: a warm-up example of algorithmic thinking
Part 2: ten principles for network algorithms

8 Warm-Up: Scenting an Attack
Observation: code buried in, say, a URL shows an unusually high frequency of uncommon characters.
Goal: flag such packets for further observation.

9 Strawman Solution
Strawman: increment a per-character count and flag the packet if any count exceeds a threshold.
Problem: requires a second pass over the count array after the packet.

10 Oliver Asks for Less
Idea: relax the specification to alarm if a count exceeds the threshold with respect to the current length seen so far, not the total length.
Problem: corner cases for small lengths.

11 Oliver Uses Algorithmic Thinking
Idea: keep track of the maximum relativized count and compare it to the length L at the end.
Problem: still 2 reads and 1 write to memory per character; 1 read is free (parallelism, wide memories).

12 Relax the Specification: Approximate!
Idea: round thresholds up to powers of 2 so comparisons become shifts; we have false positives anyway.
Problem: counts must be initialized for each packet.

13 Lazy Initialization
Idea: use a generation number and lazy count initialization.
Problem: generation number wrap (scrub the counters when it wraps).
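Putting slides 8-13 together, the warm-up detector might look like the sketch below. This is a hypothetical reconstruction (the class name, the 8-bit generation width, and the 1/8 threshold are assumptions): per-character counters with generation-number lazy initialization, a power-of-two threshold so the comparison is a shift, and the maximum relativized count compared to the packet length at the end.

```python
GEN_BITS = 8       # generation counter width; wraps, so we scrub on wrap
THRESH_SHIFT = 3   # flag if some count > length / 8 (power of 2 -> shift)

class SuspectScanner:
    def __init__(self):
        self.count = [0] * 256   # per-character counts
        self.gen = [0] * 256     # generation stamp per counter
        self.cur_gen = 0

    def new_packet(self):
        # Lazy initialization: bump the generation instead of zeroing
        # all 256 counters at the start of every packet.
        self.cur_gen = (self.cur_gen + 1) % (1 << GEN_BITS)
        if self.cur_gen == 0:
            # Generation number wrapped: scrub everything once.
            self.count = [0] * 256
            self.gen = [0] * 256

    def scan(self, payload: bytes) -> bool:
        """Return True if some character exceeds 1/8 of the packet length."""
        max_scaled = 0
        for c in payload:
            if self.gen[c] != self.cur_gen:   # first touch this packet
                self.gen[c] = self.cur_gen
                self.count[c] = 0
            self.count[c] += 1
            # Relativized count: count << 3 compared to length at the end.
            scaled = self.count[c] << THRESH_SHIFT
            if scaled > max_scaled:
                max_scaled = scaled
        return max_scaled > len(payload)
```

A packet dominated by one character is flagged; ordinary text, where no character exceeds an eighth of the bytes, is not.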

14 Part 2: Ten Principles for Network Algorithms

15 Summary
P1: Relax specifications for efficiency
P2: Utilize degrees of freedom
P3: Shift computation in time
P4: Avoid waste seen in a holistic view
P5: Add state for efficiency
P6: Exploit hardware parallelism and memories
P7: Pass information (e.g., hints) across interfaces
P8: Use finite universe methods (bucket sort, etc.)
P9: Use algorithmic thinking
P10: Optimize the expected case

16 P1: Relax Specifications
IntServ spec: serve the smallest first, which requires sorting.
DRR: throughput-fair; a modified round robin; O(1) per packet.
(Figure: router output queue scheduling with packets of 1500, 50, and 800 bytes.)
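A minimal sketch of the DRR relaxation (queue contents and quantum below are hypothetical): each queue earns a quantum of credit per round and sends head packets while its deficit covers them, so a large packet simply waits another round instead of the scheduler sorting by size.

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """Deficit Round Robin over `queues` (deques of packet sizes).
    Returns the order of (queue index, packet size) transmissions."""
    deficit = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficit[i] = 0        # an idle queue does not hoard credit
                continue
            deficit[i] += quantum     # one quantum of credit per round
            while q and q[0] <= deficit[i]:
                size = q.popleft()    # send while the deficit covers the head
                deficit[i] -= size
                sent.append((i, size))
    return sent
```

With the slide's packet sizes (1500, 50 in one queue; 800 in another) and a quantum of 1000, the 800-byte packet goes first and the 1500-byte packet waits for a second quantum, all in O(1) work per packet sent.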

17 Forms of Relaxing the Specification
Probabilistic: use randomization (Ethernet, RED)
Approximate: RED, TCP round-trip weights
Shift computation in space: path MTU discovery
(Figure: R1 with MTU 4000, R2 with MTU 1500; instead of R2 fragmenting, endpoint E calculates the minimum segment size.)

18 P2: Utilize Degrees of Freedom
Have you used all the information available (Polya)?
TCP: reduce the rate on a drop; later, on the ECN bit being set.
DCTCP: use the number of ECN bits set in a window.
(Figure: R1 and R2 pass the ECN bit; the endpoint calculates the send rate.)
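DCTCP's published rule makes this degree of freedom concrete: the sender keeps an EWMA alpha of the fraction F of ECN-marked packets per window (standard gain g = 1/16) and cuts cwnd by alpha/2 rather than a fixed half. A minimal sketch:

```python
def dctcp_update(alpha, marked, total, g=1.0 / 16):
    """EWMA of the marked fraction: alpha <- (1 - g) * alpha + g * F."""
    F = marked / total if total else 0.0
    return (1 - g) * alpha + g * F

def dctcp_cwnd_after_marks(cwnd, alpha):
    """Cut the window in proportion to congestion, not by a fixed half."""
    return cwnd * (1 - alpha / 2)
```

Light marking therefore produces a gentle window reduction, while heavy marking (alpha near 1) approaches the classic halving.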

19 P2 in IP Lookups
Have you used all the knobs under your control?
Unibit trie: one bit at a time; P7 = 1000000* takes 5 reads.
Multibit trie: many bits at a time; 2 reads.
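A toy sketch of the multibit idea (controlled prefix expansion to a stride of 4; the dict-based node structure and prefix names are hypothetical): a 7-bit prefix like P7 is expanded to 8 bits and then needs only 2 node reads instead of one per bit.

```python
def build_multibit(prefixes, stride=4):
    """prefixes: dict of bit-string prefix -> next hop. Builds a multibit
    trie via controlled prefix expansion to multiples of `stride`."""
    root = {'children': {}, 'hop': None}
    # Insert shorter prefixes first so longer ones overwrite (longest match).
    for p, hop in sorted(prefixes.items(), key=lambda kv: len(kv[0])):
        pad = (-len(p)) % stride
        for tail in range(1 << pad):
            # Expand the prefix to the next stride boundary.
            expanded = p + format(tail, '0{}b'.format(pad)) if pad else p
            node = root
            for i in range(0, len(expanded), stride):
                chunk = expanded[i:i + stride]
                node = node['children'].setdefault(
                    chunk, {'children': {}, 'hop': None})
            node['hop'] = hop
    return root

def lookup(root, addr, stride=4):
    """Walk the trie stride bits at a time; count memory reads."""
    node, hop, reads = root, None, 0
    for i in range(0, len(addr), stride):
        child = node['children'].get(addr[i:i + stride])
        reads += 1
        if child is None:
            break
        if child['hop'] is not None:
            hop = child['hop']   # remember the longest match seen so far
        node = child
    return hop, reads
```

The knob being exercised is the stride: a larger stride trades memory (more expanded entries) for fewer reads.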

20 P2 in Algorithms (Keyword Search)
Example: searching for "truths" in "We hold these truths".
Compare the last character first (Boyer-Moore): a mismatch on a character absent from the pattern skips a whole pattern length.
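The "last character first" trick can be sketched with the Horspool simplification of Boyer-Moore (for brevity this toy version compares the window left to right; only the skip distance uses the last character):

```python
def horspool_search(text, pattern):
    """Return the index of the first occurrence of pattern, or -1."""
    m = len(pattern)
    if m == 0:
        return 0
    # Bad-character table: distance from each character's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= len(text) - m:
        if text[i:i + m] == pattern:
            return i
        # Look at the character under the pattern's last position: if it
        # does not occur in the pattern, skip the whole pattern length.
        i += shift.get(text[i + m - 1], m)
    return -1
```

On the slide's example the search jumps in steps of the pattern length until it lands on the match, touching only a few characters of the text.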

21 P3: Shift Computation in Time
Precomputation: slow update, fast lookup
Lazy evaluation: counter initialization
Expense sharing (batching): timing wheels
(Figure: precomputed prefix match of 100010, given P7 = 100000* and P6 = 10000*.)

22 P4: Avoid Waste
Waste is seen only when viewing the system holistically, e.g., avoiding unnecessary packet copies.

23 P5: Add State
Those who do not learn from history are doomed to repeat it.
BIC, QCN: remember the past high rate and restart the search near it.
(Figure: past high rate; start here?)

24 P6: Leverage Hardware
Exploit parallelism (memory and processing).
Leverage different memories: SRAM, DRAM, eDRAM.
(Figure: a large set of counters; the part touched often is kept in fast memory, the part touched rarely in slow memory.)
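One well-known instance of the counter figure, sketched hypothetically here (the widths and class name are assumptions): keep only a few low-order bits of each counter in fast memory, and flush to a full-width slow-memory counter on overflow, so the frequently touched state stays small.

```python
SMALL_BITS = 4                      # width of the fast-memory counter
SMALL_MAX = (1 << SMALL_BITS) - 1

class CounterArray:
    def __init__(self, n):
        self.fast = [0] * n   # small counters (SRAM in hardware)
        self.slow = [0] * n   # full-width counters (DRAM in hardware)

    def increment(self, i):
        self.fast[i] += 1
        if self.fast[i] > SMALL_MAX:
            # Overflow: one slow-memory write per 2^SMALL_BITS increments.
            self.slow[i] += self.fast[i]
            self.fast[i] = 0

    def read(self, i):
        # The true value is split across the two memories.
        return self.slow[i] + self.fast[i]
```

The common-case increment touches only fast memory; the slow memory sees a write only once per overflow.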

25 P7: Pass Information Across Layers
Do not hide information; pass "hints".
Examples: adding an index to a list; bypassing the kernel in VIA.

26 P8: Finite Universe Techniques
Bit maps: save space and time via hardware parallelism.
Bucket sorting: but what about skipping over empty buckets?
(Figure: timer queue with timers at 11, 12, 14, 13.)

27 P8: Finite Universe Techniques
The cost of skipping empty buckets is shared with incrementing the current time (P2).
(Figure: timer wheel, current time = 10, with timers at 11, 12, 14, 13.)
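A single-level timing-wheel sketch (the wheel size here is a hypothetical toy value): timers live in buckets indexed by expiry time modulo the wheel size, and advancing the clock by one tick visits exactly one bucket, so empty buckets cost nothing beyond the tick itself.

```python
class TimerWheel:
    WHEEL_SIZE = 8   # one revolution = 8 ticks (toy value)

    def __init__(self):
        self.buckets = [[] for _ in range(self.WHEEL_SIZE)]
        self.now = 0

    def add(self, delay, callback):
        # Single-level wheel: the delay must fit within one revolution.
        assert 0 < delay < self.WHEEL_SIZE
        self.buckets[(self.now + delay) % self.WHEEL_SIZE].append(callback)

    def tick(self):
        # Advancing time and expiring the bucket share one step
        # (expense sharing): no separate scan over empty buckets.
        self.now += 1
        idx = self.now % self.WHEEL_SIZE
        expired, self.buckets[idx] = self.buckets[idx], []
        for cb in expired:
            cb()
```

Each tick does O(1) bookkeeping plus work proportional only to the timers that actually expire.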

28 P9: Use Algorithmic Ideas
Not blindly reusing algorithms, but using their ideas.
Example: binary search on hash tables, one table per prefix length.
(Figure: prefix tables containing 10000000*, 11100000*, etc.; start here, or here?)
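A simplified sketch of Waldvogel-style binary search over prefix lengths (one hash table per length; markers carrying a precomputed best-matching prefix are inserted along each search path; all names and the toy prefix table are hypothetical):

```python
def build_tables(prefixes):
    """prefixes: dict of bit-string prefix -> next hop.
    Returns (sorted lengths, one hash table per length, with markers)."""
    lengths = sorted({len(p) for p in prefixes})
    tables = {L: {} for L in lengths}
    for p, hop in prefixes.items():
        tables[len(p)][p] = hop

    def bmp(bits):
        # Precompute a marker's best matching (strictly shorter) real prefix.
        for L in range(len(bits) - 1, 0, -1):
            if bits[:L] in prefixes:
                return prefixes[bits[:L]]
        return None

    def add_markers(lo, hi, p):
        # Leave a marker at every length the binary search probes
        # on its way to len(p), so hits steer the search longer.
        if lo > hi:
            return
        mid = (lo + hi) // 2
        L = lengths[mid]
        if len(p) > L:
            tables[L].setdefault(p[:L], bmp(p[:L]))
            add_markers(mid + 1, hi, p)
        elif len(p) < L:
            add_markers(lo, mid - 1, p)

    for p in prefixes:
        add_markers(0, len(lengths) - 1, p)
    return lengths, tables

def lpm(lengths, tables, addr):
    """Longest-prefix match by binary search on lengths; counts probes."""
    best, lo, hi, probes = None, 0, len(lengths) - 1, 0
    MISS = object()
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        hit = tables[lengths[mid]].get(addr[:lengths[mid]], MISS)
        if hit is MISS:
            hi = mid - 1           # nothing here: try shorter lengths
        else:
            if hit is not None:
                best = hit         # real hop, or a marker's precomputed bmp
            lo = mid + 1           # a longer match may still exist
    return best, probes
```

The algorithmic idea borrowed is binary search; the network-specific twist is the markers, without which a miss at a middle length could wrongly rule out longer matches.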

29 P10: Optimize the Expected Case
Design mostly for the worst case; sometimes optimize the expected case.
The optimization should not be caught out by testing!
Instead of maintaining separate hardware threads for cached and uncached lookups, drop the uncached packets.
(Figure: a router cache holding destinations D and M; endpoint E.)

30 Summary
P1: Relax specifications for efficiency
P2: Utilize degrees of freedom
P3: Shift computation in time
P4: Avoid waste seen in a holistic view
P5: Add state for efficiency
P6: Exploit hardware parallelism and memories
P7: Pass information (e.g., hints) across interfaces
P8: Use finite universe methods (bucket sort, etc.)
P9: Use algorithmic thinking
P10: Optimize the expected case

31 Summing Up
Course: router bottlenecks and network algorithms.
Not just a recipe book, but a way of thinking.
We will see the principles in action in later lectures.
Next lecture: hardware models, by Tom.

